| column | dtype | values |
| --- | --- | --- |
| id | int64 | 3 to 41.8M |
| url | string | lengths 1 to 1.84k |
| title | string | lengths 1 to 9.99k |
| author | string | lengths 1 to 10k |
| markdown | string | lengths 1 to 4.36M |
| downloaded | bool | 2 classes |
| meta_extracted | bool | 2 classes |
| parsed | bool | 2 classes |
| description | string | lengths 1 to 10k |
| filedate | string | 2 values |
| date | string | lengths 9 to 19 |
| image | string | lengths 1 to 10k |
| pagetype | string | 365 values |
| hostname | string | lengths 4 to 84 |
| sitename | string | lengths 1 to 1.6k |
| tags | string | 0 values |
| categories | string | 0 values |
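A minimal sketch of how a dump with this schema could be loaded and filtered with the Hugging Face `datasets` library; the dataset name used here is a hypothetical placeholder, not the dump's actual Hub identifier:

```python
# Minimal sketch: load a dump with the schema above and keep usable rows.
# "example-org/web-markdown-dump" is a hypothetical placeholder name.
from datasets import load_dataset

ds = load_dataset("example-org/web-markdown-dump", split="train")

# Rows where downloaded/parsed are False carry null markdown, so filter them out.
usable = ds.filter(lambda row: row["downloaded"] and row["parsed"])

print(ds.column_names)        # id, url, title, author, markdown, ...
print(len(ds), len(usable))   # total rows vs. rows with extracted markdown
```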
29,200,370
https://twitter.com/TubeTimeUS/status/1458975488018243589
x.com
null
null
true
true
false
null
2024-10-13 00:00:00
null
null
null
null
X (formerly Twitter)
null
null
10,330,997
https://www.softprodigy.com/conversion-from-flashflex-to-html5/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
5,159,403
http://www.bbc.co.uk/news/world-us-canada-21311866
Etch A Sketch inventor Andre Cassagnes dies at 86
null
# Etch A Sketch inventor Andre Cassagnes dies at 86 **The inventor of the classic toy Etch A Sketch has died at the age of 86.** Andre Cassagnes died in Paris on 16 January, the Ohio Art Company, the US-based firm that made the toy, said. Mr Cassagnes came up with the idea for a mechanical toy that creates erasable drawings by twisting two dials in the late 1950s, while working as an electrical technician. Picked up by the Ohio Art Company at a toy fair in 1959, Etch A Sketch went on to sell more than 100 million copies. Etch A Sketch, with its familiar red frame, grey screen and two white dials, allows children to draw something and shake it away to start again. ## Kites Mr Cassagnes saw the potential for the toy when he noticed, while working with metal powders, that marks in a coating of aluminium powder could be seen from the other side of a translucent plate. The Ohio Art Company spotted the invention at the Nuremberg Toy Fair in 1959, and the next year it became the top-selling toy in the United States. "Etch A Sketch has brought much success to the Ohio Art Company, and we will be eternally grateful to Andre for that," the firm's president Larry Killgallon said. "His invention brought joy to so many over such a long period of time." The toy may seem old-fashioned in an age of tablet computers, but the Ohio Art Company says it still has a steady market, thanks in no small part to its appearance in the Toy Story movies. And it became a feature of last year's US presidential campaign, when an aide to Republican candidate Mitt Romney likened his campaign to the toy. "You can kind of shake it up and restart all over again," said campaign spokesman Eric Fehrnstrom, a comment seized upon by his rivals as evidence that Mr Romney was willing to change his position to get elected. Etch A Sketch has been named by the American Toy Industry Association as one of the most memorable toys of the 20th century. As well as being the man behind Etch A Sketch, Andre Cassagnes also developed a reputation as the most successful designer of competition kites in France during the 1980s.
true
true
true
The inventor of the classic toy Etch A Sketch, former electrical technician Andre Cassagnes, dies in France at the age of 86.
2024-10-13 00:00:00
2013-02-03 00:00:00
https://ichef.bbci.co.uk…asketchgetty.jpg
article
bbc.com
BBC News
null
null
594,294
http://venturebeat.com/2009/05/05/apples-filemaker-division-launches-its-bento-personal-assistant-iphone-app/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,118,639
http://danielvelkov.blogspot.com/2014/07/global-distribution-of-startup-funding_9.html
Global Distribution of Startup Funding
Daniel Velkov
I got my hands on a dataset from Crunchbase which contains a rich set of data about startups. Initially I looked at visualizing the time-related trends of total funding and funding per industry. Those ended up not being very engaging (everything is trending up). That's why I moved my focus to the geographical distribution and how to best represent the growth in funding in different regions. After some experimentation I arrived at the animation above. Each frame shows the funding activity over a 2-year window which ends at the month in the title. It uses a log scale with red showing the highest values, as you can see in the legend on the right. The scale is fixed and was calculated based on the largest 2-year window. The values on the map are generated by aggregating the funding per city and passing it through a gaussian filter in order to get smoother contours for the heatmap. (You can find a map, based on the raw data without smoothing, near the end of this post). As expected we see that right now (2014) Silicon Valley is the hottest region. Some other hotspots are New York, London, Israel, Moscow, Beijing. Looking at the bigger picture, the USA is clearly leading, followed by Western Europe and then East Asia. There are also a lot of small spots on the map but remember that the scale is logarithmic. That means that the small spots have 1/1000 (or an even smaller fraction) of the funding that the red areas are getting. In the process I also found out that Crunchbase publishes a somewhat similar graph showing monthly activity. The difference is that they give a static view of only the last month but you can actually hover over the circles to see which companies correspond to them. The rest of this post explains how the map was generated and includes all the Python code that was used. Feel free to experiment with it. The visualization was generated using Crunchbase's database of startup activity including funding rounds, acquisitions, investors and so on. It is updated monthly and is available at http://info.crunchbase.com/about/crunchbase-data-exports/. The dataset comes as an Excel file which contains several sheets: ``` import pandas as pd import numpy as np import scipy.ndimage import datetime crunchbase = pd.ExcelFile('crunchbase_monthly_export.xlsx') crunchbase.sheet_names ``` For this analysis we are going to focus on the Rounds sheet which contains data about companies and the rounds of funding they have taken: ``` rounds = crunchbase.parse('Rounds') rounds.head() ``` Let's take a look at the summary of the funding round sizes: ``` rounds.raised_amount_usd.describe(percentiles=[0.01,0.5,0.99]) ``` Notice the large gap between the 99th percentile and the max value. This prompted me to look at the biggest rounds and look for potential outliers: ``` biggest_rounds = rounds.dropna(subset=['raised_amount_usd']).sort('raised_amount_usd') biggest_rounds.tail(20).ix[:,['company_name','funding_round_type','raised_amount_usd']] ``` The first few look alright but towards the end we see companies that I wouldn't really consider startups. There doesn't seem to be a clear cutoff point. I looked at those companies in Crunchbase and made the subjective decision to exclude the last 6 rows from the list.
``` blacklist_companies = biggest_rounds.tail(6).company_permalink.values city_rounds = pd.DataFrame(rounds[~rounds.company_permalink.isin(blacklist_companies)]) ``` The next step in order to get the geographical distribution is to calculate the city-level funding sizes: ``` city_rounds = city_rounds.groupby(['company_country_code', 'company_city']).raised_amount_usd.sum() city_rounds.head() ``` Pandas makes the grouping columns the index of the resulting table. For convenience I'll reset the index and bring them back as regular columns: ``` city_rounds = city_rounds.reset_index() city_rounds.head() ``` Let's normalize the city names from unicode to ascii. ``` import unicodedata city_rounds.company_city = city_rounds.company_city.\ map(lambda s: unicodedata.normalize('NFKD', unicode(s)).encode('ascii','ignore')) city_rounds.head() ``` In order to get the geographical distribution we'll need the locations (lat, lon) of the cities. One place which has this data is http://www.geonames.org/export/. Surprisingly it is freely available and comes with good documentation. The biggest dataset contains 150K cities and claims to include every city with a population bigger than 1000 people. ``` import csv geo = pd.read_table('cities15000.txt', sep='\t', header=None, quoting=csv.QUOTE_NONE) geo.columns = ['geonameid','city','asciiname','alternatenames','latitude','longitude','featureclass', 'featurecode','countrycode','cc2','admin1code','admin2code','admin3code','admin4code', 'population','elevation','dem','timezone','modificationdate',] geo.head() ``` Despite having a column called 'asciiname' it turns out that it contains some non-ascii symbols. Instead we are going to use the 'city' column. ``` geo.city = geo.city.\ map(lambda s: unicodedata.normalize('NFKD', unicode(s, 'utf-8')).encode('ascii','ignore')) geo.head() ``` The next step is to join the city rounds table with the city geo locations data. We want to join based on the country and city columns. One small detail is that the country codes in the 2 tables are different: one uses 2-letter codes, the other has 3-letter codes. That's why I'm bringing in a third table which maps between those two formats: ``` country_codes = pd.read_csv('wikipedia-iso-country-codes.csv') country_codes.rename(columns={'English short name lower case': 'country', 'Alpha-2 code': 'countrycode2', 'Alpha-3 code': 'countrycode3'}, inplace=True) country_codes.head() ``` ``` geo = pd.merge(geo, country_codes, left_on='countrycode', right_on='countrycode2') geo.head() ``` Next let's check what will happen if we do the join based on the city name: ``` roundset = set(city_rounds.company_city.unique()) geoset = set(geo.city.unique()) print len(roundset), len(roundset - geoset) ``` More than 2000 cities wouldn't match. Here are some examples: ``` list(roundset - geoset)[1000:1010] ``` Some of those are important cities which we wouldn't want to lose in the process. To fix this issue I'll use a Python library for fuzzy string matching with the cute name 'fuzzywuzzy': ``` from fuzzywuzzy import process print process.extract('Washington, D. C.', geoset) ``` The result is the top 5 matches and their similarity scores. In this example the top match seems to be the correct one. The algorithm goes through the rows of `city_rounds` in order and tries to match against the cities from the `geo` table. To speed up the string matching, we are going to group the `geo` table by country and only look for matches which share the same country code as the query city.
I've also set a score threshold of 75 in order to exclude low-confidence matches. ``` geo_grouped = geo.groupby('countrycode3') def find_match(i, city_rounds=city_rounds): '''Find the best match for the city in row i of city_rounds searching in cities from geo_grouped Returns a tuple of ((country, city), best_match) if a high quality match is found otherwise returns None''' country = city_rounds.ix[i,'company_country_code'] if country in geo_grouped.indices: cities = geo_grouped.city.get_group(country) best_match = process.extractOne(city_rounds.ix[i,'company_city'], cities) if best_match and best_match[1] > 75: return ((country, city_rounds.ix[i,'company_city']), best_match[0]) ``` To further improve the speed of the matching step we are going to run the `find_match` function in parallel. Be warned that it still takes a while to finish (~15 minutes on my laptop): ``` from joblib import Parallel, delayed results = Parallel(n_jobs=-1)(delayed(find_match)(i) for i in range(len(city_rounds))) ``` We end up with a list of tuples and `None`s, from which we filter out the `None` values. We then convert the list to a dictionary which will be used to map each (country, city) pair to a city from the geo table. We also fix 'New York', which happens to match equally well with both 'New York City' and 'York' (and arbitrarily picks the second one). ``` replace = dict(filter(bool, results)) override = { ('USA', 'New York'): 'New York City', } replace.update(override) ``` Now we can add an extra column to `city_rounds` and populate it with the best matches: ``` city_rounds['city_match'] = [np.nan]*len(city_rounds) for i in range(len(city_rounds)): t = tuple(city_rounds.ix[i,['company_country_code','company_city']].values) city_rounds.ix[i,'city_match'] = replace.get(t, np.nan) ``` We are ready to join the `city_rounds` and `geo` tables: ``` merged = pd.merge(geo, city_rounds, left_on=['countrycode3', 'city'], right_on=['company_country_code', 'city_match']) merged.head() ``` And the cities with the largest sums of funding rounds are: ``` merged[merged.raised_amount_usd.notnull()].\ ix[:,['company_city','city_match','countrycode3','latitude','longitude','population','raised_amount_usd']].\ sort(columns='raised_amount_usd').\ tail(n=10) ``` Another interesting statistic is the top cities with the largest median round sizes (among cities with at least 50 funding rounds): ``` grouped = rounds.groupby(['company_country_code', 'company_city']).raised_amount_usd grouped.median()[grouped.count() > 50].reset_index().dropna(subset=['raised_amount_usd']).sort('raised_amount_usd').tail(10) ``` Now it's finally time to map our data. ``` import mpl_toolkits.basemap as bm import matplotlib.pyplot as plt import matplotlib as mpl from matplotlib import cm %matplotlib inline ``` To map the geographical distribution of the funding rounds, I'll split the map into a 600x300 grid. The value for each cell of the grid comes from summing the values for all cities which fall within that cell. That's what the next function is calculating. ``` def geo_distribution(table): world = bm.Basemap(resolution='l',projection='merc', area_thresh=10000, llcrnrlon=-160, llcrnrlat=-50, urcrnrlon=180, urcrnrlat=70, ellps='WGS84') N = 300 lons, lats = world.makegrid(2*N, N) # get lat/lons of 2N by N evenly spaced grid. x, y = world(lons, lats) # compute map proj coordinates. 
data = np.zeros((N,2*N)) for r in table[table.raised_amount_usd.notnull()].iterrows(): xx = np.searchsorted(lons[0], r[1].longitude) yy = np.searchsorted(lats[:,0], r[1].latitude) data[yy,xx] += r[1].raised_amount_usd return x, y, data x, y, data = geo_distribution(merged) ``` Plotting a histogram of the result shows that the values are extremely skewed. We are going to plot those values colored by their magnitude and with the current distribution we would be seeing mostly one color, which is not very interesting. ``` plt.hist(data[data>0].flatten()) ``` To counter that we'll just work in a log scale which gives us a better-looking distribution: ``` log_data = np.log10(data + 1) plt.hist(log_data[log_data>0].flatten()) ``` Let's go ahead and create the plot: ``` fig = plt.figure(figsize=(20, 12)) ax = fig.add_axes([0.0, 0.0, 0.95, 1.0]) plt.title('Total startup funding (1999-2014)', fontsize=24) # Initialize the map and configure the style world = bm.Basemap(resolution='l',projection='merc', area_thresh=10000, llcrnrlon=-160, llcrnrlat=-50, urcrnrlon=180, urcrnrlat=70, ellps='WGS84', ax=ax) world.drawcoastlines(linewidth=0.1) world.drawcountries(linewidth=0.1) world.drawlsmask(land_color='#F4F3F2', ocean_color='#BFE2FF') x, y, data = geo_distribution(merged) log_data = np.log10(data + 1) # Calculate the range for the contour values min_level = np.percentile(log_data[log_data>1].flatten(), 40) max_level = log_data.max() clevs = np.linspace(min_level, max_level) cmap = cm.Spectral_r cs = world.contourf(x, y, log_data, clevs, cmap=cmap) # Plot the colorbar with values matching the contours ax1 = fig.add_axes([0.97, 0.2, 0.03, 0.6]) norm = mpl.colors.LogNorm(vmin=10**min_level, vmax=10**max_level) cb = mpl.colorbar.ColorbarBase(ax1, cmap=cmap, norm=norm, orientation='vertical') cb.set_ticks([1e5, 1e6, 1e7, 1e8, 1e9, 1e10]) cb.set_ticklabels(['$100K', '$1M', '$10M', '$100M', '$1B', '$10B']) ``` It looks a bit sparse, right? The problem is that the density of cities with startup activity is not high enough. The approach I took was to apply a gaussian filter to the values in order to smooth the points and distribute some of their weight to neighbouring points. Here's the result: ``` fig = plt.figure(figsize=(20, 12)) ax = fig.add_axes([0.0, 0.0, 0.95, 1.0]) plt.title('Total startup funding (1999-2014)', fontsize=24) # Initialize the map and configure the style world = bm.Basemap(resolution='l',projection='merc', area_thresh=10000, llcrnrlon=-160, llcrnrlat=-50, urcrnrlon=180, urcrnrlat=70, ellps='WGS84', ax=ax) world.drawcoastlines(linewidth=0.1) world.drawcountries(linewidth=0.1) world.drawlsmask(land_color='#F4F3F2', ocean_color='#BFE2FF') x, y, data = geo_distribution(merged) log_data = np.log10(scipy.ndimage.filters.gaussian_filter(data, 0.6) + 1) min_level = np.percentile(log_data[log_data>1].flatten(), 40) max_level = log_data.max() clevs = np.linspace(min_level, max_level) cmap = cm.Spectral_r cs = world.contourf(x, y, log_data, clevs, cmap=cmap) ax1 = fig.add_axes([0.97, 0.2, 0.03, 0.6]) norm = mpl.colors.LogNorm(vmin=10**min_level, vmax=10**max_level) cb = mpl.colorbar.ColorbarBase(ax1, cmap=cmap, norm=norm, orientation='vertical') cb.set_ticks([1e5, 1e6, 1e7, 1e8, 1e9, 1e10]) cb.set_ticklabels(['$100K', '$1M', '$10M', '$100M', '$1B', '$10B']) ``` Finally we can generate the animation from the beginning of this post. This is done by outputting a frame for each 2-year window using code very similar to the one used for the map above. 
First we need some logic to slice the `city_rounds` table into windows defined by (begin, end) dates: ``` rounds2 = rounds[rounds.funded_at.map(lambda d: type(d)==pd.datetime)] def merged_in_range(begin, end): city_rounds2 = pd.DataFrame(rounds2[(rounds2.funded_at>=begin) & (rounds2.funded_at<=end) &\ (~rounds2.company_permalink.isin(blacklist_companies))].\ groupby(['company_country_code', 'company_city']).raised_amount_usd.sum()).reset_index() city_rounds2.company_city = city_rounds2.company_city.\ map(lambda s: unicodedata.normalize('NFKD', unicode(s)).encode('ascii','ignore')) merged2 = pd.merge(merged, city_rounds2, how='left', on=['company_country_code', 'company_city'], suffixes=('_total', '')) return merged2 ``` ``` # find the last window in order to compute the levels for the colorscale date = np.datetime64(datetime.datetime(2014, 6, 30)) time_window = np.timedelta64(2*365, 'D') begin = pd.to_datetime(date - time_window) end = pd.to_datetime(date) merged2 = merged_in_range(begin, end) x, y, data = geo_distribution(merged2) log_data = np.log10(scipy.ndimage.filters.gaussian_filter(data, 0.6) + 1) min_level = np.percentile(log_data[log_data>1].flatten(), 40) max_level = log_data.max() clevs = np.linspace(min_level, max_level) norm = mpl.colors.LogNorm(vmin=10**min_level, vmax=10**max_level) def plot(date): begin = pd.to_datetime(date - time_window) end = pd.to_datetime(date) merged2 = merged_in_range(begin, end) fig = plt.figure(figsize=(20, 12)) ax = fig.add_axes([0.05, 0.05, 0.87, 0.9]) plt.title('Startup funding %d/%02d-%d/%02d' % (begin.year, begin.month, end.year, end.month), fontsize=20, family='monospace') # Initialize the map and configure the style world = bm.Basemap(resolution='l',projection='merc', area_thresh=10000, llcrnrlon=-160, llcrnrlat=-50, urcrnrlon=180, urcrnrlat=70, ellps='WGS84', ax=ax) world.drawcoastlines(linewidth=0.1) world.drawcountries(linewidth=0.1) world.drawlsmask(land_color='#F4F3F2', ocean_color='#BFE2FF') x, y, data = geo_distribution(merged2) log_data = np.log10(scipy.ndimage.filters.gaussian_filter(data, 0.6) + 1) cs = world.contourf(x, y, log_data, levels=clevs, cmap=cmap) ax1 = fig.add_axes([0.94, 0.2, 0.03, 0.6]) cb = mpl.colorbar.ColorbarBase(ax1, cmap=cmap, norm=norm, orientation='vertical') cb.set_ticks([1e5, 1e6, 1e7, 1e8, 1e9, 1e10]) cb.set_ticklabels(['$100K', '$1M', '$10M', '$100M', '$1B', '$10B']) plt.savefig('%s.png' % str(date)[:7]) rng = pd.date_range('1/1/1999','6/30/2014',freq='1M') _ = Parallel(n_jobs=-1)(delayed(plot)(date) for date in rng) ``` Once you have generated the separate frames, you can use imagemagick, or any other image processing software, to stitch them together into an animated gif.
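The post leaves that final stitching step to ImageMagick without showing it; a minimal sketch of the step in Python, assuming ImageMagick's `convert` binary is on the PATH and that the frames were saved as `YYYY-MM.png` files by the `plot` function above (the frame delay and output name are arbitrary choices):

```python
# Minimal sketch: stitch the per-month frames into an animated gif.
# Assumes ImageMagick's `convert` is installed and on the PATH.
import glob
import subprocess

frames = sorted(glob.glob('????-??.png'))  # e.g. 1999-01.png ... 2014-06.png
subprocess.check_call(['convert', '-delay', '20', '-loop', '0'] + frames + ['funding.gif'])
```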
true
true
true
crunchbase Full size animation I got my hands on a dataset from Crunchbase whi...
2024-10-13 00:00:00
2014-07-31 00:00:00
https://blogger.googleus…-nu/download.png
null
blogspot.com
danielvelkov.blogspot.com
null
null
4,298,042
https://www.zupstream.com
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,508,698
https://www.medianama.com/2018/11/223-why-we-included-the-right-to-delete-personal-information-in-sqrrl-aditya-sahay/
Why we included the right to delete personal information in Sqrrl: Aditya Sahay - MEDIANAMA
Guest Author
Sqrrl, a mutual fund advisory app for India, was built to help first-time investors get started with investing in mutual funds. Like most early-stage products, for most of our initial versions, we ended our relationship with the customer when she uninstalled the app. This was, however, inadequate. Several customers did not want to stop at that, and asked us to delete their personal information. Over the past few months, we have added a switch for our customer success team to delete all account information whenever a customer requests it. We believe this is a significant step towards giving customers control over their personal data, as well as a proactive move before provisions of the upcoming data privacy bill become law. # What data we collect and why We’re a private limited company, registered as a Registered Investment Advisor (RIA) and regulated by SEBI. Since investing in the securities market is much more of a “serious” activity than, say, purchasing groceries or listening to music on an app, we need to be very careful and strict about the data we collect and store. We collect the following data: ## Investor Data Any investor investing through us needs to share important personal information: - Name, address and contact - PAN details and KYC (Know Your Customer) documents like ID and address proofs - Details of at least one savings bank account - Additional disclosures (e.g., residency and tax status, nominee information) Finally, once a customer makes an actual transaction in a mutual fund, all details of the transaction – from the initial payment gateway request to the final settlement in the mutual fund account – are also maintained by us for operational and compliance reasons. ## App Data Running any transactional service using an app means user data makes it to not just our own database, but several tools which are essential to running an online service: - Analytics - Communications (sms, email, push) – both promotional and transactional - Advertising and marketing platforms (though data is rarely personally identifiable) - Customer support systems - Email, chat and any communication tools, both internal and user-facing. - All sorts of low-level logs generated by code, network requests and so on. - Any inputs provided to the app – like investment goals, user avatar and so on. All these services are crucial to running a successful online service, yet they add complexity when trying to delete the app data across third parties. # Our Approach to “Right to Delete” As Uncle Ben told Peter (in Spiderman), with great power comes great responsibility. There are broadly two kinds of accounts on Sqrrl – the “explorers” and the actual transacting customers. Before a customer actually completes a transaction, she is simply exploring the app’s features and making up her mind whether to proceed or not. If the customer hasn’t even shared KYC details with us, it is fairly straightforward to delete her account. In the case of customers who complete the entire setup (including KYC) and decide not to purchase, while we are happy to delete app data, we keep an archive of KYC data for regulatory audits. In the case of customers who make a transaction, we are required by law to maintain data for seven years – a requirement for all financial services firms. Similar requirements exist all over the world. If a customer, say Nirav M., simply stops using an app, that doesn’t mean his transaction history would be deleted and therefore be unavailable for any future scrutiny by tax or other authorities. 
It is in these cases that we have the hardest time explaining to customers why we are unable to delete their data. These legal requirements supersede any data privacy expectations, and as a regulated company we would very much like not to go to jail or pay enormous fines. ## Current Implementation In our current implementation, there is a “switch” available to our customer success team that acts upon requests to delete data. The switch deletes live records and disables communication on third-party platforms for the given accounts. In cases where deletion is not possible, our team explains this to the customer as best they can. ## The Road Ahead The next phase would try to remove (as far as possible) the customer from associated tools (e.g., analytics platforms). It may be sufficient to simply overwrite any personally identifiable data with junk values so that the aggregate data remains (which is useful from an analytics perspective) while nobody knows who exactly the user is. This needs the most work and will need support from the respective platforms, so it has been left for the future. ## An alternative approach Where deletion of transaction data is not possible, we are exploring a way to archive the data in a separate system where it is available to authorised personnel if needed, but otherwise stored away from live customer data. This is a good compromise and is something we are currently exploring. # Conclusion The “Right to Delete” is one of several measures that give customers control over the use of their own data by a product or service that they use. An explicit deletion (or archival) of customer data is a great way to drastically reduce the chance of misuse of data – something that unfortunately we have taken to be a way of life. All businesses must clearly map out where all user data flows, and what relationships are in place for safety (and eventually, deletion or overwriting) of this data. For online businesses, especially transactional ones, this tends to be quite a challenge. Multiple technical approaches exist for removal and archival of data; one must decide based on how regulated the industry is. Sqrrl has only now gotten started. We hope that with the Data Privacy Bill several gaps in our understanding will get plugged. * **About the Author:** Aditya Sahay, Head of Engineering, Sqrrl Fintech. *Aditya leads Product and Engineering for Sqrrl Fintech, a commission-free mutual fund advisory app driving young Indians to take control of their financial wellness by investing in personalised plans in line with their life goals, in a language of their choice.*
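As a rough illustration of the overwrite-with-junk idea described above, here is a minimal Python sketch; the field names and record shape are hypothetical and are not Sqrrl's actual schema:

```python
# Minimal sketch of the "overwrite PII with junk values" idea described above.
# Field names here are hypothetical placeholders, not Sqrrl's actual schema.
import uuid

PII_FIELDS = ["name", "email", "phone", "pan_number", "address"]

def anonymise_record(record: dict) -> dict:
    """Return a copy of the record with PII overwritten by junk values.

    Non-PII fields (amounts, timestamps, fund names) are kept so that
    aggregate analytics still work, but the row can no longer be tied
    to a person.
    """
    scrubbed = dict(record)
    for field in PII_FIELDS:
        if field in scrubbed:
            scrubbed[field] = "deleted-" + uuid.uuid4().hex[:12]
    return scrubbed

# Example usage with a made-up record
print(anonymise_record({"name": "A. Customer", "email": "a@example.com",
                        "amount_invested": 5000, "fund": "XYZ Bluechip"}))
```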
true
true
true
Sqrrl, a mutual fund advisory app for India, was built for getting the first time investors to get started with…
2024-10-13 00:00:00
2018-11-21 00:00:00
https://i0.wp.com/www.me…1280%2C960&ssl=1
article
medianama.com
MEDIANAMA
null
null
25,574,475
https://www.engadget.com/james-scotty-doohan-star-trek-ashes-aboard-the-iss-093707923.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
26,672,140
https://www.wsj.com/articles/facebook-staff-fret-over-chinas-ads-portraying-happy-muslims-in-xinjiang-11617366096
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,100,442
http://books.google.com/books?id=K5w9AAAAYAAJ&printsec=frontcover#v=onepage&q&f=false
The Domestic Guide in Cases of Insanity
null
The Domestic Guide in Cases of Insanity: Pointing Out the Causes ..., Issue 54. By Thomas Bakewell.
true
true
true
null
2024-10-13 00:00:00
null
https://books.google.com/books/content?id=K5w9AAAAYAAJ&printsec=frontcover&img=1&zoom=1&edge=curl&imgtk=AFLRE72gkJmkqMDsyfeRi-Iew9jj5vBm3adtN4T0RQ51pphpNgy80YiOsHcSEr7LnQt-6jn89HT8Jfq4-yFKHA80wDXBk6a573qIHAXKX3eb0BkfqwizFCGifdtfzr_P-Jo6uw99gx96
book
google.com
Google Books
null
null
6,360,222
http://www.lispcast.com/why-technical-explanation-alone-is-not-enough
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
38,113,488
https://slack.design/articles/how-our-biggest-redesign-yet-came-to-be/
How our biggest redesign yet came to be • Slack Design
Lcarmen2
The timing was right. We were adding in new features like huddles, canvases, lists, and others into a UI system that was originally designed solely for messaging capabilities. Meanwhile, research showed users on the biggest and most active teams were struggling to stay on top of the basics. Our product navigation was reaching its limits. Slack had more to offer, but it was harder to find it. A new paradigm that properly showcased our expanding offerings was needed. However, redesigns usually fall into the “high effort, high variance, questionable benefit” bucket that businesses try to avoid at all costs. Unless something is absolutely broken, not many companies are willing to invest. Here’s how we harnessed the power of design to lead the way for this ambitious undertaking. **Prototype the Path** We carved out time and space for our design team to think boldly and explore new ideas without constraints. Here at Slack, one of our product principles is to “prototype the path.” We know one of the biggest superpowers our designers have is the ability to visually show how our product *could *look and more importantly *feel* in order to create alignment with cross-functional partners and initiate large projects. The design team developed a series of highly provocative prototypes, not aimed at providing final solutions but at sparking discussions with users and internal stakeholders. The prototypes served as guides for our explorations. We initiated discussions early on with key engineering and product leaders to ensure the feasibility of our redesign. Working closely with them allowed us to discard impractical ideas and focus on valuable solutions. While these prototypes were not perfect, they paved the way to realistically address our challenges. As we continued to share these prototypes, key leaders recognized the potential and eagerly joined forces to create a meaningful solution. With buy-in from these key stakeholders, we were ready to get the ball rolling. **Operationalize transparency to build alignment** We kicked off the project in earnest with a three-day in-person onsite for design, engineering, product and program leads. With the prototype as our guide, we started to sketch out the path to production. Which teams would need to be involved? How quickly could we move? When might we start bringing customers into the process? Which questions were most important to answer first? We organized our teams following a hub-and-spoke model. The hub focused on driving strategy and coordination across different teams, while the spokes concentrated on specific product areas, enabling autonomous progress. To work transparently and efficiently, we invested significant time in setting up the right operations and ensuring coordination with our internal stakeholders—which encompassed virtually every team in the company—and our customers too. We established communication channels, feedback workflows, and allocated time for regular conversations with internal users, pilot customers, and contributing teams. **Create space for internal and external feedback** As we moved to prototype in the actual product, our continued dialogue with customers was key to validate our initial assumptions, and recalibrate our efforts. For instance, an initial design that aimed to simplify the interface further, placed search in the left-hand navigation. However, user research, pilot feedback, and internal usage data showed us that search just wasn’t findable in that new location, so we moved it back to the top. 
It wasn’t and won’t be the last thing we had to revisit (See more in: A focus more productive Slack). At every step we needed to ensure our solutions were aligned with their expectations and to date, we’re still iterating. We set up ongoing research workstreams to answer small design and product questions and ran regular cross-functional workshops to untangle meaty challenges. Pulse surveys for both internal and external users, alongside usage data helped us build confidence before launching. The surveys were especially helpful at pointing out pain points and helping us to course correct before launching to wider audiences. Staying on course can be a challenge when feedback starts to challenge the solutions you’re building. You have to remember you won’t satisfy everyone (or even be able to address every opinion or suggestion). It is in these moments that the importance of strong leadership and a deep commitment to the product’s core vision becomes key. Conviction is the anchor that will help navigate the turbulent waters of feedback, enabling adaptation and evolution without losing sight of the ultimate goal: creating a solution that truly serves the users. **Considerations for a successful redesign ** **Start bold, but be pragmatic. **Start with high-level solutions to facilitate conversations to create common understanding. Get a sense of their validity through users and internal stakeholders as quickly as you can. **Get everyone onboard.** Big changes are hard, more so when they’re not part of people’s plans. Ensure everyone takes an active role in the process bringing their expertise and unique perspectives. **Rigorously execute. **Communicate often, provide updates regularly, monitor team’s pace and sentiment and be always available to resolve any roadblocks. **Work hand in hand with customers. **Involving them in the redesign process ensures they’ll have enough time to process and prepare for the changes. Even if these are better, no one wants a sudden change the day they least expect it. **Move forward with conviction.** Once you have a good signal, get organizational alignment and provide strong leadership. The journey will get scary at times and this will help push through in the hardest moments. **Commit to craft. **You’ve already made a big commitment so make sure you spend as much time polishing the edges. Details matter. And remember: it takes a village… and the right timing!
true
true
true
The timing was right. We were adding in new features like huddles, canvases, lists, and others into a UI system that was originally designed solely for messaging capabilities. Meanwhile, research showed users on the biggest and most active teams were struggling to stay on top of the basics. Our product navigation was reaching its limits. …
2024-10-13 00:00:00
2023-10-25 00:00:00
https://slack.design/wp-…[email protected]
article
slack.design
Slack Design
null
null
19,365,324
https://github.com/solid/solid
GitHub - solid/solid: Solid - Re-decentralizing the web (project directory)
Solid
Solid Re-decentralizing the Web Solid is a proposed set of standards and tools for building decentralized Web applications based on Linked Data principles. Read more on solidproject.org.
true
true
true
Solid - Re-decentralizing the web (project directory) - solid/solid
2024-10-13 00:00:00
2015-11-24 00:00:00
https://opengraph.githubassets.com/c9f8125d4756f76078915b953e7ef390dae2a4ed50bdd9122704c44c39c2f25d/solid/solid
object
github.com
GitHub
null
null
1,054,975
http://www.nytimes.com/2010/01/16/technology/internet/16vpn.html?adxnnl=1&ref=technology&adxnnlx=1263567683-f%20dknpSMPjEvl44zKbDGxg&pagewanted=all
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
35,638,093
https://old.reddit.com/r/amiga/comments/zrfnm3/turrican_2_aga_released/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
36,595,812
https://blog.computerra.de/2023/02/04/15-facts-i-have-learned-after-15-years-of-software-development/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
27,965,746
https://edwardsnowden.substack.com/p/ns-oh-god-how-is-this-legal
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
22,062,983
https://www.khronos.org/news/press/khronos-group-releases-vulkan-1.2
Khronos Group Releases Vulkan 1.2
null
# Press Release ## Khronos Group Releases Vulkan 1.2 **Proven API extensions integrated into new Vulkan core specification for improved GPU acceleration functionality and performance.** **Beaverton, OR – January 15, 2020 – 6:00 AM PT – **Today, The Khronos® Group, an open consortium of industry-leading companies creating advanced interoperability standards, announces the release of the Vulkan® 1.2 specification for GPU acceleration. This release integrates 23 proven extensions into the core Vulkan API, bringing significant developer-requested access to new hardware functionality, improved application performance, and enhanced API usability. Multiple GPU vendors have certified conformant implementations, and significant open source tooling is expected during January 2020. Vulkan continues to evolve by listening to developer needs, shipping new functionality as extensions, and then consolidating extensions that receive positive developer feedback into a unified core API specification. Carefully selected API features are made optional to enable market-focused implementations. Many Vulkan 1.2 features were requested by developers to meet critical needs in their engines and applications, including: timeline semaphores for easily managed synchronization; a formal memory model to precisely define the semantics of synchronization and memory operations in different threads; descriptor indexing to enable reuse of descriptor layouts by multiple shaders; deeper support for shaders written in HLSL, and more. “Vulkan 1.2 brings together nearly two dozen high-priority features developed over the past two years into one, unified core Vulkan standard, setting a cutting-edge bar for functionality in the industry’s only open GPU API for cross-platform 3D and compute acceleration,” said **Tom Olson, distinguished engineer at Arm, and Vulkan working group chair**. “Khronos will continue delivering regular Vulkan ecosystem updates with this proven, developer-focused methodology to both meet the needs and expand the horizons of real-world applications.” Khronos and the Vulkan community will support Vulkan 1.2 in a wide range of open source compilers, tools, and debuggers by the end of January 2020. This includes the RenderDoc frame capture and debugging tool, the Vulkan conformance test suite, and the Vulkan SDK with support for both the ‘GPU Assisted’ and ‘Best Practices’ validation layers. All GPUs that support previous versions of Vulkan are capable of supporting Vulkan 1.2, ensuring its widespread availability. As of today, five GPU vendors have Vulkan 1.2 implementations passing the Khronos conformance tests: AMD, Arm, Imagination Technologies, Intel, NVIDIA, plus the open-source Mesa RADV driver for AMD. Driver release updates will be posted on the Vulkan Public Release Tracker along with the status of other Vulkan ecosystem components. Vulkan is an open, royalty-free API for high-efficiency, cross-platform access to modern GPUs, with widespread adoption in leading engines, cutting-edge games, and demanding applications. Vulkan is supported in a diverse range of devices from Windows and Linux PCs, consoles, and the cloud, to mobile phones and embedded platforms, including the addition of Google’s Stadia in 2019. Find more information on the Vulkan 1.2 specification and associated tests and tools at Khronos’ Vulkan Resource Page. Sample code can be found in the Vulkan Unified Samples Repository. Khronos welcomes feedback on Vulkan 1.2 from the developer community through Khronos Developer Slack and GitHub. 
**Industry Support for Vulkan 1.2** “AMD is excited to provide support for the Vulkan 1.2 specification in our upcoming Vulkan 1.2 supported driver for a broad range of AMD graphics hardware, including the AMD Radeon™ RX 5700 Series and AMD Radeon™ RX 5500 Series. Vulkan 1.2 brings many new features, including Dynamic Descriptor Indexing and finer type support for 16-bit and 8-bit types – and are designed to enable developers to better take advantage of modern GPU features and deliver richer graphics experiences to end users. We look forward to continued adoption of the Vulkan API and the new graphics experiences possible with the latest Vulkan 1.2 feature set,” said **Andrej Zdravkovic, corporate vice president, Software Development, AMD**. “The new iteration of Vulkan API highlights the ongoing innovation the Khronos group continues to drive in the high-performance graphics space. Arm is already offering conformant Vulkan 1.2 implementations for the Bifrost and Valhall architectures of our Mali GPU, and we will continue to deliver optimized tools and technologies that make performance more accessible for developers designing for the next generation of immersive experiences,” said **Pablo Fraile, director of developer ecosystems, client line of business, Arm.** “Stadia is thrilled to see the long-awaited features in Vulkan 1.2. Not only are they a game changer for Stadia but for the Vulkan ecosystem as a whole. Vulkan 1.2 brings remarkable improvements for HLSL support in Vulkan and the increased flexibility and performance gains will enable developers to take greater advantage of the GPU than ever before. Stadia can’t wait to see how developers leverage the new timeline semaphore, descriptor indexing, and finer type subgroup operations in graphics and compute for their next generation titles,” said **Hai Nguyen, staff technical solutions engineer, Google Stadia.** “Imagination welcomes the launch of Vulkan 1.2. It’s a great update and will really benefit developers. Our latest GPU architecture – IMG A-Series – will fully support Vulkan 1.2 and will help developers achieve the best performance and power savings. Our best-in-class tools, such as PVRTune and PVRCarbon, are designed with Vulkan in mind, giving developers detailed information of profiling and debugging,” said **Mark Butler, vice president of software engineering, Imagination Technologies**. “Intel is delighted by the release of Vulkan 1.2 and looks forward to seeing developers take advantage of it to deliver even richer visual computing experiences,” said **Lisa Pearce, vice president, Intel Architecture, graphics and software, and director of the visual technologies team.** “With the broadest installed base of PC graphics processors capable of supporting Vulkan 1.2, and with products based on our breakthrough Xe architecture coming shortly, we’re excited to play a key role in enabling next-generation visual computing experiences for millions of users.” “NVIDIA’s Vulkan 1.2 drivers are available today with full functionality for both Windows and Linux,” said **Dwight Diercks, senior vice president of software engineering, NVIDIA**. “With Vulkan enabling mission-critical applications on NVIDIA GPUs across desktop, embedded and cloud platforms, we’re driving innovative functionality to fuel the growing momentum of this key open standard.” "We are very excited about the new capabilities in Vulkan 1.2. 
The VMA and scheduling features allow us to implement next-generation graphical and computing solutions across a wide array of hardware for our Cider game engine," said **Brad Wardell, CEO of Stardock Entertainment.** **Resources: ** - More information on Vulkan - All Khronos open source projects are available on GitHub - A tutorial on Vulkan Timeline Semaphore - Updates on HLSL support in Vulkan **About The Khronos Group ** The Khronos Group is an open, non-profit, member-driven consortium of over 150 industry-leading companies creating advanced, royalty-free, interoperability standards for 3D graphics, augmented and virtual reality, parallel programming, vision acceleration and machine learning. Khronos activities include Vulkan®, OpenGL®, OpenGL® ES, WebGL™, SPIR-V™, OpenCL™, SYCL™, OpenVX™, NNEF™, OpenXR™, 3D Commerce™ and glTF™. Khronos members drive the development and evolution of Khronos specifications and are able to accelerate the delivery of cutting-edge platforms and applications through early access to specification drafts and conformance tests. ### Khronos® and Vulkan® are registered trademarks, and ANARI™, WebGL™, glTF™, KTX™, NNEF™, OpenVX™, SPIR™, SPIR-V™, SYCL™, OpenVG™, Karamos™ and 3D Commerce™ are trademarks of The Khronos Group Inc. OpenXR™ is a trademark owned by The Khronos Group Inc. and is registered as a trademark in China, the European Union, Japan and the United Kingdom. OpenCL™ is a trademark of Apple Inc. and OpenGL® is a registered trademark and the OpenGL ES™ and OpenGL SC™ logos are trademarks of Hewlett Packard Enterprise used under license by Khronos. All other product names, trademarks, and/or company names are used solely for identification and belong to their respective owners.
true
true
true
The Khronos Group announces the release of the Vulkan 1.2…
2024-10-13 00:00:00
2020-01-15 00:00:00
https://www.khronos.org/…-tagline-New.png
article
khronos.org
The Khronos Group
null
null
19,156,656
https://medium.com/@datarade/musings-on-energy-from-a-pisse-9c4da0e58b3f
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
37,417,174
https://www.chefskiss.co.uk/
Chefs Kiss
null
Takes your instructional videos and transforms them into elegant, precise, and easily shareable written formats. All hosted on your own webpage. Paste a link or upload your cooking video, and we'll automatically extract the info to create a written recipe. We are currently operating as a free service Transform cooking videos into monetizable written recipes on your personal website. Ideal for chefs and food bloggers at all levels. $15 Squarespace for chefs Introducing our professionally designed template for your webpage! Host it easily and enjoy automatic updates when you create new recipes. Customise, edit, and delete recipes effortlessly to make them truly yours. Stay tuned for more templates coming soon! View a preview of what your site could look like here
true
true
true
Create recipes from your videos and build your own professionally designed website
2024-10-13 00:00:00
2023-01-01 00:00:00
https://www.chefskiss.co…mages/chefs2.png
website
null
Marcusps1
null
null
31,776,020
https://www.t-mobile.com/news/offers/new-connect-by-t-mobile-plans
T‑Mobile Launches New Connect by T‑Mobile Plans ‑ T‑Mobile Newsroom
null
**BELLEVUE, Wash. — March 21, 2022 **— T-Mobile (NASDAQ: TMUS) today announced the new Connect by T-Mobile prepaid plans, including a new $10 plan. Designed for those who want no-frills wireless from multi-carrier retail channels, Connect by T-Mobile provides a low-cost option to help keep millions of families and individuals – especially those hard hit by rising inflation and economic uncertainty – connected to family, friends, work, school, and information. Connect by T-Mobile is part of the Un-carrier’s 5G for Good initiative and builds on the T-Mobile Connect plans launched in March 2020 to keep Americans connected during the pandemic. Connect by T-Mobile prepaid plans range from the super affordable $10 plan to a $35 plan for those who want more data. - $10 per month plus tax, the lowest price smartphone plan ever from the Un-carrier, that includes 1000 minutes of talk, 1000 texts and 1GB of high-speed smartphone data. - $15 per month plus tax for unlimited talk and text, plus 3GB of high-speed smartphone data. - $25 per month plus tax for unlimited talk and text, plus 6GB of high-speed smartphone data. - $35 per month plus tax, for unlimited talk, text and 12 GB of high-speed smartphone data. Connect by T-Mobile plans also include Un-carrier benefits like Scam Shield protection and free Caller ID included in the rate plan, and access to T-Mobile’s nationwide 5G network with no credit check required. Plans will be available starting Friday, March 25, 2022, online, in T-Mobile stores and in multi-carrier retailers across the US. To find out more visit https://connectbyt-mobile.com Follow T-Mobile’s Official Twitter Newsroom @TMobileNews to stay up to date with the latest company news. # # # *Connect: Plus taxes & fees. SIM starter kit may be required. Domestic use only. After allotted high-speed data is used, data unavailable until next bill cycle for Connect plans, not eligible for unlimited data or 10GB hotspot features. On all plans, video streams at 480p. Unlimited on device and on network only. 5G: Capable device required; coverage not available in some areas. Downlink only. Some uses may require certain plan or feature. See details at **t-mobile.com**.* **About T-Mobile** T-Mobile U.S. Inc. (NASDAQ: TMUS) is America’s supercharged Un-carrier, delivering an advanced 4G LTE and transformative nationwide 5G network that will offer reliable connectivity for all. T-Mobile’s customers benefit from its unmatched combination of value and quality, unwavering obsession with offering them the best possible service experience and undisputable drive for disruption that creates competition and innovation in wireless and beyond. Based in Bellevue, Wash., T-Mobile provides services through its subsidiaries and operates its flagship brands, T-Mobile, Metro by T-Mobile and Sprint. For more information please visit: https://www.t-mobile.com.
true
true
true
BELLEVUE, Wash. — March 21, 2022 — T‑Mobile (NASDAQ: TMUS) today announced the new Connect by T‑Mobile prepaid plans, including a new $10 plan. Designed
2024-10-13 00:00:00
2022-03-21 00:00:00
https://www.t-mobile.com…hero-3-18-22.png
article
t-mobile.com
T-Mobile Newsroom
null
null
1,906,375
http://wids.lids.mit.edu/
Organizers
null
### UPDATE: The workshop is now over and was a great success, with over 200 attendees from around the world. We want to thank you all for your participation and for your numerous expressions of appreciation. You can find a summary of the panel discussion here. The summary is also available as a pdf here. The final version of the program (including abstracts) is here. Social networks have a defining impact on consumer choice, financial markets, and political decisions, and network effects are central to public health, smart power grids, urban transportation, and more. Recent technological and mathematical developments have opened the possibility of dramatically improving our understanding of how social networks carry information and influence decisions. In this workshop, we bring together researchers from different communities working on information propagation and decision making in social networks to investigate both rigorous models that highlight capabilities and limitations of such networks as well as empirical and simulation studies of how people exchange information, influence each other, make decisions and develop social interactions. This workshop is organized by the virtual center Connection Science and Engineering, a multidisciplinary and interdepartmental MIT center that focuses on developing an integrated framework for the study of the connected world we live in. The center is hosted jointly by Laboratory for Information and Decision Systems (LIDS), Media Lab, and Computer Science and Artificial Intelligence Laboratory (CSAIL). # Organizers - Vincent Blondel, UCLouvain (Belgium) and LIDS, MIT - Costis Daskalakis, CSAIL, MIT - David Gamarnik, Sloan School, MIT - Asu Ozdaglar, LIDS, MIT - Alex 'Sandy' Pentland, Media Lab, MIT - Devavrat Shah, LIDS, MIT - John Tsitsiklis, LIDS, MIT # Support Staff For administrative requests and general questions about the conference, please contact Lynne or Nicole. - Lynne Dell – Administrative Support – 617-452-3679 - Nicole Freedman – Administrative Support – 617-253-3818 For problems with the website, or technical questions regarding the conference, please contact Brian. - Brian Jones – Technical Support – 617-253-4070
true
true
true
null
2024-10-13 00:00:00
2012-01-01 00:00:00
null
null
null
null
null
null
12,976,701
https://www.timeshighereducation.com/news/publishing-innovations-aim-highlight-hidden-research-efforts
Publishing innovations aim to highlight hidden research efforts
Holly Else
Like many academics, when Josh Nicholson was in the final year of his PhD in cancer biology at Virginia Polytechnic Institute and State University, he got frustrated with the process of publishing research. But unlike most academics, he decided to do something about it. “I was frustrated that publishing was so slow, expensive, closed and really ineffective at its one purpose – communicating ideas…I found the incentives in publishing to be misaligned with the aims of research,” he said. These frustrations spurred him to create The Winnower, an online open access publishing platform that uses open peer review after publication. “The idea was to publish everything, and then sort the good from bad,” he said. The platform publishes work that researchers are writing but not necessarily submitting to traditional journals and would otherwise be lost. This so-called grey literature can include datasets from research that an academic decides not to take any further, null or negative findings, and pieces describing software or methods. The Winnower also publishes student essays, blogs, citizen science projects and discussions from the social media site reddit. The platform gives each document a digital object identifier, a sequence of characters that is used to uniquely identify electronic documents. It has grown steadily and has recently been acquired by another online platform, Authorea, where Dr Nicholson is now the chief research officer and is working to bring the two products together. “Authorea is trying to reinvent the research article so that it is not just a static PDF,” he said. Researchers can write in collaboration with others on the platform, store data that are linked to the text, and create interactive figures as well as submit their work to any journal with one click. “We are really trying to rethink how researchers communicate from the ground up to make it more appropriate for the web,” he added. Although the main focus is on original research, Authorea will still publish work that traditional journals do not. These grey outputs of research are receiving ever more attention as funding for science dwindles because without any record of work that has been done previously, precious money is spent repeating experiments unnecessarily. Rebecca Lawrence, managing director of F1000, a publisher that also provides immediate publication, open peer review and data deposition, said: “Funders have got a lot of evidence that a lot of what they fund just goes straight in the bin essentially because nobody gets to see it.” Then, funders inadvertently end up giving money for the same research to be performed again – albeit by others – “as nobody knows it has been done”. This is not only a waste of time and money. “It is also skewing our understanding of science generally because we are only seeing one part – what appears to be impactful or positive findings,” Dr Lawrence added. “We are missing the other half of the story, where things haven’t worked, and the null and negative results are the findings that might change our understanding and perspective.” Dr Lawrence said that researchers currently have no incentives to publish work that does not fit into the traditional journal article. Careers depend on getting meaningful results that are published in high-impact journals cited often by others, for example, and publishing null or negative findings takes time that researchers might prefer to spend writing up work that they consider to be more important. 
One funder that has been hoping to change this is the Wellcome Trust. Later this month, it will launch Wellcome Open Research, a platform supported by F1000 that offers grant recipients the chance to publish any work quickly before an open peer review. Authors can revise manuscripts as often as they wish on the platform, which affords early career academics an outlet to publish work that might not get accepted to a traditional journal. “We don’t call it a journal because we don’t have editors making decisions on behalf of the community,” Dr Lawrence explained. She added that funders play a “crucial role” in getting grey literature published and in shifting the incentive system in research. “There is a really unique relationship between the researcher and the funder. The researchers want to do [the] things that their funders think [are] a good thing to do,” she added. Phill Jones, director of publishing innovation at Digital Science, which owns Figshare, a site that academics use to share scholarly outputs, said that funders are becoming “steadily more firm” with their wish to make science more open and are encouraging researchers to share more data. “That will change the way that researchers behave in the lab,” he said. At the moment, many researchers refrain from making data available until they realise that they have to, at grant renewal time, for example. “In the future, they will prepare their data as they go along, so they will be ready to share it when the time comes. It could result in increased rigour,” he added. But Dr Jones stated that in his opinion the journal article was still here to stay. “The narrative of research is the journal article, and that is never going anywhere.” Dr Lawrence said that current funder platforms for grey literature could be a “stepping stone” to the creation of one big platform, owned by the research community, where academics publish their work for peer review. “Maybe you [will] get stamps as to the quality of the finding. You could imagine *Nature* and other places putting a stamp on an article rather than doing the actual publication of the article in the first place,” she said. ## POSTSCRIPT: Print headline: *Let there be light on hidden research efforts*
true
true
true
Open publishing platforms that bring grey literature out of the dark promise to save money, reduce duplication and speed communication
2024-10-13 00:00:00
2016-11-17 00:00:00
https://www.timeshighere…les/treasure.jpg
article
timeshighereducation.com
Times Higher Education (THE)
null
null
9,015,509
http://hobowithalaptop.com/is-it-legal-to-be-a-digital-nomad-333
How to Become a Digital Nomad in 2024, Step-by-Step | Detailed Guide
Your Friendly Neighborhood Hobo
In spite of the volume of posts written about how to become a digital nomad in 2024, I find them relatively thin and not as straightforward as they could be (or they cost money). This is my take on the digital nomad lifestyle after dozens of personal calls with Hobo readers and being location independent for over a decade *–twenty-five years* if you count my constant relocations in Canada before picking up a one-way ticket to Thailand. Before you dig in, check out that time the Watch Mojo team featured me in one of their videos, because there are a lot of great tips in it. *Article updated on June 13, 2024.* This guide is a small part of a broader information resource that you can find here. █ ## Why Become a Digital Nomad? Becoming a digital nomad has its perks, sure. Remote work, tropical places, travel the world and get paid while you’re on the go. Earn a higher value currency online from a country like the US, and live in another country where that currency goes further. Save more money every month for retirement, live almost anywhere in the world, and live well; *that’s the digital nomad lifestyle.* But becoming a digital nomad isn’t just for young progressive types. Learning to work remotely and become location independent could soon be your only option to maintain your livelihood. Ass-in-chair jobs are disappearing along with the office rentals that surround them. Blue collar *and* white collar jobs are threatened, and governments can’t keep on bailing everyone out. Right now small and medium-sized businesses are competing with larger competitors that are doubling down on automation and artificial intelligence, while choking down a new increased minimum wage mandate. As a result, businesses are downsizing or ditching their offices altogether, and hiring remote workers as opposed to on-site employees in an effort to reduce overhead and remain profitable. Becoming a digital nomad might just extend your career another 5 to 10 years. And it might be best to get ahead of that curve before the guillotine drops. ### Digital Nomads at a Glance In this article we’ll look at a bunch of logistics for absolute beginners to the nomadic lifestyle and provide enough information to help anyone figure out how to become a digital nomad. And don’t just take it from me –there are a lot of *other* reasons why people are adopting a nomadic lifestyle. According to a survey of 500 digital nomads that was conducted by FlexJobs in late 2018: - Main reasons why people want to learn how to become digital nomads include work-life balance (73%), enjoy the freedom (68%), love to travel (55%), avoid office politics and distractions of a traditional work environment (43%), want to explore other cultures (37%), high cost of living in home country (30%), poor local job market in hometown (24%) - Top benefits of being a digital nomad are the flexible schedule (85%); no commuting (65%); freedom to live and work where I choose (65%), work-life balance (63%); no office politics (52%); no dressing up for work (51%) - Top challenges for digital nomads are finding reliable Wi-Fi (52%); finding a good place to work (42%), networking (35%), time zones (29%), work communications (20%) - 92% of digital nomads say the lifestyle is important to them - 88% report that being a digital nomad has had a huge improvement or positive impact on their lives - 18% of working nomads report making six figures or more and 22% make between $50,000 and $99,999.
According to the Social Security Administration, the average U.S. worker today earns roughly $46,641 a year - 31% of working nomads make similar amounts of money, and 18% make *more* money as a digital nomad than when they worked traditionally - 38% say they feel less stressed financially being a digital nomad and 34% say there is no difference in financial stress compared to when they worked a traditional job You can find the original survey data by FlexJobs here. ### Nomadic Lifestyle, Explained This guide to location independence will cover everything you need to know to become a digital nomad and take part in what could be our generation’s fastest-growing lifestyle choice behind the tiny house movement, van life, and living in our parents’ basements. We already have tons of in-depth articles covering most of the concerns listed in the above survey results, so anything not covered in *this* article can be ironed out by reading our related content. ### Pros and Cons Before you become a digital nomad, you’ll likely want to measure the pros and cons of the lifestyle. Local nomads often face a wide range of time vampires that they never knew existed; that is the price one will pay for living a lifestyle that most people envy yet are too afraid to live. #### Perspectives Matter In order to keep your head up and your spirit high, you’ll need to have the right perspective. - Can you be a patient, methodical, opportunistic, self-motivated hustler? - Do you have a strong sense of purpose, goals, and what you want from life? - Can you adjust to situations quickly and compassionately? - Do you have decision-making skills, planning skills, and the discipline to abide by them? - Are you able to tough it out if your laptop dies right before your bank card gets jammed in a machine at the Laos border during a visa run? (That probably won’t happen, but it happened to me in 2014). If you answered yes to most of these, you’ll fare much better when we look at the following pros and cons of work and travel. In reality, for every pro (glass half full) there is a con (glass half empty) so be mindful of how you perceive these pros and cons. Don’t let over-eagerness force you to skim over hidden complexities –take them head on. #### The Pros and Cons of the Digital Nomad Lifestyle *Before you become a digital nomad, you’ll have to decipher the hype;* - Work from anywhere! (You’re pretty much homeless!) - Make your own schedule! (Crappy routine at best, responsible for your own success, and liable for your own failures) - Short work days! (Added pressure on executing the 80/20 rule effectively) - Choose to live anywhere you want in the world! (Struggle with visa requirements, time zones, generally meet clients face-to-face less often) - Make new connections! (Have less time or ‘presence’ for existing friends and family) - Unlimited earning potential! (Financial security is less predictable) - Finally time to start that side project! (Remove time from proven, repeatable income) - Hooray, palm trees and wanderlust! (Deal with periodic loneliness, unpredictable weather and infrastructure challenges) - Cheaper cost of living! (Cost benefits only while you’re living statically in one place, travel can be expensive) This is an excerpt from Digital Nomad Escape Plan: From Cubicle to Chiang Mai, Thailand. Download it today, and keep it for later. ### Digital Nomad Guide Series Yeah, we’re cooler than sliced bread. Here are a few guides you could add to Pocket for later –they’re mentioned throughout this article.
- How to Get Part Time Remote Jobs Faster - Helpful Digital Nomad Skills for Work and Travel - Our Digital Nomad Packing List After 7+ Years on the Road - How to Get an Entry-Level Remote Job - Big Fat List of Unique Travel Gifts for Digital Nomads - Improve Work Communications with this List of 60+ Tools - How to Get Housesitting Jobs for Free Accommodation Globally - Guide to Digital Nomad Insurance - How to Speed Up Your Internet Connection - Tips for networking with other Digital Nomads - How to Become a Nomad Family and Travel with Kids - How to Handle Criticism for Becoming a Digital Nomad - 30 Obstacles to Becoming a Digital Nomad - Digital Nomad Reading List: A Collection of Amazon Best-Sellers These are only a handful of digital nomad blog posts we’ve written –tap one of the links above or explore Hobo with a Laptop for more. If you’re heading to Chiang Mai, Thailand specifically, we’ve also got a free 200-page guide for you. ## How to Become a Digital Nomad I officially decided to become a digital nomad in 2011, but the gears had already been moving in this direction for over a decade. I traveled a lot in my home country, relocating often. I moved out of my folks’ place just before my 19th birthday and I was a bit of a mess. I’d move for love, I’d move for a new job, I’d move for the culture shift, I’d move to tear the page out and start over. And every time I thought “I’ll stay this time”. It never happened. My first decade of moving around involved furnishing every apartment I moved into to retain a sense of normalcy, and then selling it all off at a loss a year or two later when I decided to leave. It was only when I came across a digital nomad blog in 2011 while living in a condo in Toronto that I could finally put a label on what I thought was insanity. It was then that I decided I wanted to learn how to be a digital nomad. Relocating often makes you look like a flake when you can’t put a label on it. The definition for “digital nomad” rescued me. People could Google it. I was part of a group, I was sheltered from the shame of long-standing social norms. I was part of a new one. Today I’m 38, and I wish I’d known then what I know now about being location independent. ### 1. Logistics You’ve got your reasons for wanting to learn how to become a digital nomad, that’s none of my business. Let’s skip the why and jump into *how* you can become location independent. First we’ll look at the ingredients, then we’ll look at how they come together. #### Important Accounts and Government Documents All digital nomads are going to need a whole bunch of moving parts to stand on in order to be location independent. Here’s a complete list of all the accounts and government documents you need, at a minimum. You should start getting these items together at your earliest convenience: - Passport - Business license - Local bank account, PayPal account, and Transferwise Borderless Banking account *or* Payoneer account - Skype and/or Grasshopper for local number and VOIP - International internet connection - Earth Class Mail for a US mailing address - Nomad insurance - Driver’s License - Emergency fund - Doctor’s notes, prescription(s) #### Business License Every country has different rules for how much a sole proprietor can make before they need to start charging and paying taxes. If you make under a certain amount, you can forgo the business license and operate as a sole proprietor. On official documents and forms, a sole proprietor simply puts their full name in the “company” field.
It’s as easy as that. Read our digital nomad taxes FAQ for US expats, or explore tax laws for your country on your own. Don’t overlook your business license, but don’t make it a barrier for entry, either. A lot of us jump first and ask questions later. By the time you’re making enough money for digital nomad taxes to be a concern, you can afford to play catch-up. #### Nomad Taxes & Bookkeeping I highly recommend that you get in touch with a certified tax professional online by the time you’ve made your first $1,000 with your remote job, side hustle, blog –you name it. While you won’t need to file taxes right away in most cases, you *will* need to keep your books clean *or else*. We made the very costly mistake of not keeping our books organized over a number of years, and recently had to pay $10k CAD in taxes because we came up short with receipts and so on. It was a total shit show. *Don’t leave money on the table, at the very least consider a service like TaxHub to protect your finances from government clutches.* #### Banking Solutions The reason there’s a whole pile of banking solutions on this list is that there’s no one-size-fits-all solution for international expats. It’s possible to sign up for these while you’re abroad, but I recommend doing so right away while you’re still in your home country. Apply for a credit card in your home country if you can. I haven’t had one for years because I’m debt free and prefer cash or prepaid Mastercards, but a credit card with travel miles will make your life easier and it’s great in emergencies. PayPal is commonly used for client invoicing; from there you can deposit your money into a bank account from your home country. You can only deposit money into a proper bank account if the address and name perfectly match the ones you signed up to PayPal with. Transferwise Borderless Banking is the best international bank for US and UK digital nomads, Payoneer is the best nomad bank for everyone else (like Canadians, Indians, Filipinos, you name it). In Canada, there’s a new prepaid VISA card we use called KOHO that’s worth looking at. Both will allow you to create a legitimate bank account number in other countries so you can bank like a local –the main one being the US. Simply put: non-US citizens can open US bank accounts and accept money from US companies easily. It’s a must if you work online or have affiliate websites. Both digital nomad-friendly banks will ship you a physical bank card to wherever you might be. Learn more about Transferwise and Payoneer respectively. #### Local Number and VOIP Reliability is the name of the game; changing your number frequently or not having one at all isn’t wise. I suggest you prepay a full year or more in advance, and consider getting Grasshopper if you want the most comprehensive VOIP solution money can buy. Grasshopper is ideal if you’re looking for a 1-800 number. Location independent Canadians are also advised to look at Dell Voice, also known as Fongo. It’s a mobile app for iOS and Android that has free and paid options for a Canadian number. I have this as a backup. In some countries you can port your existing number to Skype or Grasshopper –but if you come from a monopolistic country like Canada, you might want to settle for a 646 New York number or use Dell Voice (Fongo) as mentioned above. #### International Internet Connection How much does a digital nomad make? *Nothing if they don’t have internet access.* Don’t chance it; have backup plans for your backup plans if you’re going to be location independent.
I could have left this in the nomad gear part of this guide, but it’s too important. I recommend Skyroam; a prepaid mobile hotspot that works internationally for up to 5 devices and doesn’t come with a contract. Pick one up as soon as you can and only top it up when you need it to save money. Unlimited day passes are $9 per day –money well spent. You can find out more about Skyroam here. Hobo with a Laptop readers get an exclusive discount with promo code HOBOLAPTOP. #### US Mailing Address Earth Class Mail is a private mailbox provider that is more nomad-friendly than a traditional PO box. Anyone from around the world can get a mailing address in the US with Earth Class Mail. Additional services that are helpful for location independent people include depositing any physical checks you get in the mail for you, scanning and emailing all paper mail, and forwarding physical deliveries to another address on file. Worth mention for marketers; Anyone with a mailing list is required by law to put their business address and Kit (formerly ConvertKit) happily volunteers theirs when you’re putting together a sequence. Everyone needs a mailing list, if you’re in the market to start one yourself, we use and love Kit. You can check them out here. #### Nomad Insurance “Nomad insurance” to me is any form of low-cost insurance that: - Serves everyone regardless of what country they come from - Provides both health and travel insurance - Covers the nomad gear in your backpack - Allows you to sign up online from anywhere, at any time I’ve used World Nomads for this because they’re quite comprehensive –although we’ve compared them with SafetyWing, another popular (and *very* economical) nomad insurance provider. Read our nomad insurance guide for more information. While you might not be ready to sign up for nomad insurance today, it’s important to do your research as early on as possible so you can budget for it. If you’re leaving next June, you could pay in advance and have coverage take effect next June. In other words, coverage begins when you want it to, not the moment you pay for it. #### Driver’s License In Asia, it’s not hard to rent a motorbike without a license but I am not suggesting you do. Get a driver’s license and a motorcycle license in your home country before you go, if possible. Most countries accept licenses from other countries to rent a vehicle, however, an international driver’s license would make things even easier. Beyond driving, a driver’s license is just another really handy form of ID to have on hand when applying for accounts online, renting a hotel room, and so on. Otherwise you might have to let them hold your passport, and technically that’s against the law in your home country. #### Emergency Fund Insurance is one thing, an emergency fund is another –and I recommend having both when you’re location independent. If you’re going to have a nomadic lifestyle, you never know when something could go wrong and you need to hop a flight back home, or dig into your savings because a client ripped you off. $5,000 USD is the baseline in my opinion, but the choice is yours. #### Budgeting and Travel Costs You’re going to need a roof over your head every night, a local SIM card for data tethering, food, drinks, money for leisure, tourist visas and visa extensions, transportation, possibly coworking space memberships, and enough scratch put aside for your next flight. NomadList is a great place to start figuring out the cost of living for digital nomads all over the world. 
However, it comes with a caveat; nomads are a salty bunch –the data on NomadList is user-generated, and it appears that a lot of people have put in false information. In addition to NomadList, break down your costs separately by yourself. Again, make sure that you budget for the following expenses: - Accommodation (apartment, Airbnb, or TrustedHousesitters account) - Local SIM card and data packages - Groceries and eating out - Leisure expenses and bar tabs - Tourist visas and extensions - Transportation like a motorbike rental or public transport - Coworking space membership - Planes, trains, and mini-buses (and onward tickets) The majority of the items on this list are self-explanatory; I’ve listed them here so you don’t leave anything out. Now it’s your turn to figure out how much each of these items will cost you, for your unique situation. I will only elaborate on a few of them below. When you’re done creating your budget, compare it to your current expenses in your home country. Is it cheaper to become a digital nomad? If you plan on being a digital nomad *in Thailand*, we’ve priced out everything from living expenses to where to rent a bike, how to get an apartment, and what to do about your visa. Click here to see all posts on our website tagged “Thailand”. #### Accommodation As previously mentioned, there’s no one-size-fits-all data for how much your nomadic lifestyle is going to cost you. To get a good idea of accommodation costs, understand that sites like Agoda mark up their costs to make a profit and you will likely pay less when you’re on the ground in person. Regardless, exploring Agoda, Booking.com, and Airbnb is a great way to gauge an above-average cost of accommodation around your destination. If you plan for the worst you’ll be pleasantly surprised by local prices. Airbnb has been helpful for getting monthly apartments, especially during low season. In the Philippines we offered 8 – 10 days of the daily rate on Airbnb for an entire month. We’ve been living in the same apartment(s) for over 6 months to date. This makes traveling lighter and easier because we can leave a majority of our stuff at an apartment and travel with only a carry-on. Currently we have two apartments in the Philippines and hop back and forth between them. Another popular form of accommodation is house sitting (and it’s a great way to travel *for free*). We wrote a guide to getting house sitting jobs, and in it we recommend Trusted Housesitters. Sign up for Airbnb right now and get some free credit on your account with our link, and/or check out TrustedHousesitters. #### Flights and Transportation Maybe you’ve got a points card, maybe you use a travel agent. If you’re making flight bookings online yourself we recommend SkyScanner and made a short guide that will show you how we get the lowest airfare when we fly. For buses and tours, you can usually reserve them in advance of your trip with Klook and Viator. They mark up your costs a little, but it will save you the hassle of negotiating with local travel agencies and language barriers. #### Onward Flight Tickets Now, onward tickets are another story. When flying into a country, bear in mind that some of them may have a requirement for an onward ticket –a flight booked *out* of the country at a later date. Some people have claimed they were denied entry into Thailand because they didn’t have one, but I’ve never had that problem.
In my experience, entry into the Philippines *does* require an onward ticket for a flight booking scheduled to depart the country within 30 days of your arrival. If you plan on getting a visa extension to stay beyond 30 days once you’re inside a country, it’s a wasted flight. There’s a site called Onward Ticket that creates a real, temporary booking that’s valid for 48 hours. It’s pay per use, $12 a pop. All they do is book the flight and cancel it after 48 hours. By then, you should be through immigration. Sites like this go up and down all the time and if your flight to a country is 30 hours long, it’s hard to book an onward flight with a service like this while you’re *in the sky*. That’s what sets our recommendation apart –their bookings last up to 48 hours. Want 4-Hour Workweek for Free? Right now you can get the 4-Hour Workweek audio book *for free* if you sign up for Audible and grab a 30-day free trial. Cancel anytime, no questions asked. ### 2. Work & Finance How much money digital nomads make or how to raise money for a trip varies as much as it does before you become one. It boils down to what job you have, how many hours you work, and what you do on the side (if anything at all). Before I started to travel I approached working from home by putting work into two categories; client work (primary income) and side hustle (secondary, passive income). Your goal is to do both, and then grow your secondary income out until it fully replaces your primary income and has passive elements to it. Your primary income is a remote job you can do competently, don’t totally hate, and pays the bills. It’s the money you use in the beginning to pay all those expenses I listed earlier. Your secondary income can be *anything*. A hobby-turned-business. Going through a few ideas and failing *more* than a few times is almost a certainty, but eventually you’ll hit the jackpot. For my primary income, I was able to Jerry Maguire my old clients from a series of cubicle jobs and port them into my new, legitimate work from home consulting business. I worked remotely for years before becoming a digital nomad. When I came to Asia, I was able to work less and focus more time on my side projects which were mostly affiliate marketing sites. Once I got those going, I started Hobo with a Laptop to journal how I did it. My secondary income soon replaced my original primary income. Now I don’t have one primary income at all, I have several secondary incomes –a diversified revenue stream. Today I only take client work when I’m passionate about the project itself, it’s not about the money anymore. That base is covered. #### What Are You Going to Do? Discovering what kind of work you can do sustainably while living a nomadic lifestyle requires some soul-searching. What sort of nomad freelance jobs are you built for? To answer this question we had Jacob Lyda come on board and write a guest post about his experience figuring out what sort of digital nomad job he could do without doing his head in. Like most posts about how digital nomads make money, he took the career Venn diagram approach; you can view his post here. Got Followers? Monetize Them! Creating exclusive members-only content is a great way to build a reliable income with your blog *or* social media channels. Monthly Q and A livestreams, personal video “postcards” from the road, and other forms of exclusive content are a great way to monetize your blog or social media profiles with SubscribeStar. 
SubscribeStar is the leading Patreon alternative because of the size of its existing user base, its drastically lower fees, quick payouts, and its livestreaming features. A SubscribeStar account can be set up in minutes, check them out by clicking the button below. #### How Digital Nomads Make Money Hobo with a Laptop is a resource that helps digital nomads make money in a wide range of ways: cryptocurrencies, blogging, nomad freelance, and remote jobs to name a few. This isn’t a “make money online” post –it’s too diverse a subject. Instead, we’ll look at how digital nomads make money at a high level. There are a lot of different ways to create your primary income stream. The path of least resistance would be finding a way to turn what you’re already doing into an online job. If your employer won’t budge on allowing you to work remotely as a digital nomad, find the same job description at another company that does. FlexJobs (our review) is a reliable source of remote jobs and nomad freelance work –they screen out deadbeat employers so you don’t waste your time applying for garbage jobs, and that’s why we endorse them all over Hobo with a Laptop –we’re not being paid to do so, although we are affiliates for them and their competitors. If you lack work experience, consider an entry-level remote job or join “gig economy” sites like Upwork to support your nomadic lifestyle. Just be warned: Upwork and Fiverr take 20% from every cent you make. And that’s on top of your taxes. **Related:** I Need Money, Fast! How to Make Quick Cash We wrote a guide called How to Get Part Time Remote Jobs (and Where to Find Them) –and it packs enough information for anyone looking for their first remote job (even if it’s not part-time). In that guide we explain: - The 3 types of remote job sites - Benefits of working remotely - How to apply for remote jobs - How to mitigate competition - How to come up in search results - How to demonstrate your ability - How to write your cover letter Of course, there are plenty of other ways to make money online, but if you’re just starting out many of them would fall into a secondary income side hustle. **Read:** Location Independent Jobs That Are Always Hiring –this article links to *active search results* on FlexJobs for digital nomad jobs that you can apply for today. #### The Side Hustle Side hustles are too sporadic to rely on full-time until they’ve matured, and that could take years to make happen. A good side hustle won’t necessarily provide any online income right out of the gate, but because your primary income is handled you can take your time getting it right. Later on, it will become how you raise money for a trip with little effort.
Some popular side hustles include: - Blogging with affiliate marketing and/or selling info products like ebooks or courses - Drop shipping with Amazon FBA, Shopify, or BigCommerce - Creating t-shirts, hats, posters, or carrier bags with sites like TeeSpring, Amazon Merch, or CafePress - Creating simple apps that do one thing really well –interviewing people inside an industry will improve your chances of success - Creating WordPress plugins and themes - Selling photos on sites like iStock or producing royalty-free music for sites like AudioHive –there’s a marketplace for everything and other examples include Photoshop templates for stationary, automation scripts, and OBS templates for YouTube and Twitch streamers - Starting a tour business; whatever you’re into could be the foundation of a tour –Nomad is Beautiful run photography tours, Remote Year and Nomad Cruise tour various locations with a gaggle of nomads, and Wandering Earl built an empire on running tours off his blog (after awhile, they run themselves) A key element of a successful side hustle is income that’s somewhat passive. You want to wake up most mornings after having made money while you slept, and you don’t want to create another job you’ll grow to hate. #### Marketing Yourself or Your Idea No matter which way you go, if you are going to make money online you’re going to have to learn some internet marketing skills. #### Search Engine Optimization Every marketplace, whether it’s for jobs (*Linked In, Upwork, Fiverr, etc.*) or digital products you’re selling (*Amazon, Etsy, etc.*), is a search engine. I recommend learning the basics of search engine optimization. Most search engines work the same way, but there are going to be certain techniques that are specific to each. We’ve written a few articles to help you understand SEO: In all of those guides, I recommend picking up a really cheap SEO tool called KeySearch. Almost everyone knows what MOZ or Ahrefs is, KeySearch is a lesser-known, cheaper alternative. The reason a paid keyword tool is so important is because it doesn’t matter what sort of traffic you *could* get for a keyword, it matters whether or not you have a chance of ranking for it. All the free keyword tools like Keywords Everywhere give you traffic volume, none give you competition scores. As in, tell you who is already ranking for said keyword and compare their metrics to your own. The trick is to go after keywords that are easier to rank for, with less competition, and throw down 5 – 10 different, lower traffic keywords into an article instead of going for a couple really high volume and super competitive ones. You can check out KeySearch here for more information. #### Starting a Blog The reason we cover blogging so much on Hobo with a Laptop is because of how much client work it got us *and* how much passive income it generates. We didn’t even have a “work with us” page when we started getting client requests. A nice website is your calling card, no matter what you’re doing. Blogging will open up the world to you; amazing new clients, passive income from affiliate referrals, and sponsorships that find you without any work on your own part. Once our site started to rank, sponsorships started to come out of the woodwork and fill out our contact form. We were making a few grand per month with sponsorships for doing very little before we had to slow down and start saying no. 
Here are a few articles to help you start your blog: - Blogger Rate Card: How Much to Charge for Sponsored Content - What Should I Blog About? - How to Start a Blog with WordPress - Premium WordPress Theme Buyer’s Guide Now might be the time to take blogging even more seriously and build your own blog. **Fun fact:** Ugly websites tend to make more money because they make you seem more approachable. I’m not saying this to encourage you to build an ugly website –don’t, if you can avoid it. But if your site *is* ugly, don’t worry about it too much. A great blog is an asset that matures over time. The sooner you start one, the better –even if you’re still figuring out how to become a digital nomad and aren’t sure what to do with it. Domain age is a ranking factor. I originally started Hobo with a Laptop before I got on a plane, way back in 2012. I didn’t do much with it and left it idle until rebooting it in 2017 when my intentions were clearer. The time it sat around idle helped give us a head start on a few important ranking metrics. #### Saving Money By Any Means Necessary Before you become a digital nomad, you’re going to have to find ways to build your war chest and figure out how to raise money for a trip –and that usually means trading time for money. You don’t yet have the luxury of a lower cost of living, so working on a side hustle will be harder. Instead, I suggest you roll your sleeves up and work like there’s no tomorrow. “Work harder than most people are willing to today, and live like most people wish they could tomorrow” When you’ve got a part-time job, your second (or third) part-time job(s) need to have flexible hours or be relatively passive. The gig economy is far bigger than Fiverr, Upwork, and Uber. There are plenty of other apps where you can trade a service for money while you’re still physically living in your home country to raise money for a trip. And no, *no survey apps*. Those things monopolize your precious time. The following apps will help you make extra money on the side: - Rent out a room in your apartment on Airbnb - Become a driver for Uber, Grab, or Lyft - Provide child care and drive kids around with Kango - Rent out your vehicle with Fluid Market - Sell your stuff with Mercari or Ebay - Sell your best photos on iStock - Become a mystery shopper with Field Agent - Walk dogs and pet sit with Wag - Get Hoopla, link it to your library card, and legally “borrow” books, movies, and audiobooks *for free* These apps are just a small sample of what’s out there; others are only a Google search away. **Related:** How to Get Money Fast #### Cutting Expenses Some expenses aren’t just expenses; they’re time vampires, too. They suck up your time and drain your motivation. Dopamine hits for modern slaves. You could always kill your time vampires right away to keep you motivated: sell the PlayStation along with your television, cancel monthly app subscriptions, uninstall Mega Man from your smartphone, and switch from Starbucks to Nescafe. Make a list of all the things that give you a dopamine hit, and axe them. Dopamine is the best place to start *The Purge*, which I will get into later. You know the drill; $5 per day at Starbucks is $150 per month, and six latte-free months is $900 in the savings account (there’s a quick sketch of this kind of math just below). Once you start trimming your spending habits, you could save enough money to forgo any additional work altogether. Look for couponing apps. If you’re creative about making ends meet and you’re determined to make your nomadic lifestyle become a reality, you will.
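To make the latte math above concrete, here’s a minimal back-of-the-napkin sketch of that kind of savings projection. Every expense name and number in it is a made-up placeholder rather than a real budget –swap in your own time vampires and your own runway.

```python
# Back-of-the-napkin savings projection: list the habits you cut, then see what
# they add up to over your "runway" period before the one-way ticket.
# All figures below are placeholder examples, not recommendations.

monthly_cuts = {
    "daily latte ($5/day x 30)": 150,
    "streaming subscriptions": 30,
    "gym you never visit": 45,
    "eating out twice a week": 160,
}

runway_months = 6  # how long you plan to save before you leave

monthly_total = sum(monthly_cuts.values())
print(f"Monthly savings: ${monthly_total}")
print(f"After {runway_months} months: ${monthly_total * runway_months}")
# The latte alone is $150/month, or $900 over six months.
```

It isn’t sophisticated, and it doesn’t need to be –the point is simply that small recurring cuts compound into real runway.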
It just might take time. I sat on the “runway”, so to speak, for two years before I bought a one-way ticket to Thailand. Patience is everything, play the long game intelligently. Keep your feelings out of it. ### 3. Purge Purging unnecessary physical possessions to be a digital nomad, I imagine, is like overcoming a heroin addiction. At first you’re like, *hell no*. For heroin use, it’s the needle that people can’t wrap their head around. Needles are grody and the sight of one makes me want to pass out. For a person who wants to be a digital nomad, it’s *“..but I can’t live without my coffee table! I’ve had it for 3 years and IKEA discontinued them! Oh, the suffering! Where will I put my doilies? I can’t do this!”.* After a few items go on Craigslist –“oh, *that sorta’ felt good*”*.* I guess losing your virginity might have been a better example, but I was trying to avoid being gross. It’s awkward at first, you don’t know the order of operations. It starts out as a painful experience. After you find a good home for the things you could have always done without, it starts to get addictive. *You become a purge addict.* When empty spots begin to appear in your apartment, it’s your friends calling you crazy that actually riles you to purge more. *“But you loved your PlayStation! You’re really doing this, aren’t you?”* F*ck right you are, because you’re a digital nomad. #### Beginning the Purge It’s time for another list; write a list of everything you use daily, and then write a second list of everything that didn’t make the first list. Start purging whatever is on the second list. If you don’t use something at least once a week, get rid of it. Leave family heirlooms with friends, your folks, or a weather-proof climate controlled storage unit. Keep them off the floor and don’t let plastic coverings touch them directly because they build moisture over time and can ruin furniture. I recommend you digitize personal items like old mementos, diaries, and love letters because whatever you leave behind will likely get rummaged through by your mom or other curious friends or family members. Personally, I digitized what I could and then had a big going away bonfire bash with my friends. I left behind a bankers box of photos with my mom, and that was that. Later I came back and tossed it down the garbage chute and wondered why I bothered –people took really pointless photo prints in the 90’s –it was all stuff I’d have deleted if it were digital, anyway. If you want to avoid that ashamed ‘knowing’ look siblings give you when they rummaged through your goods, scan it and burn it. Read love letters one last time, memorize them, and say goodbye. ### 4. Nomad Gear Now comes the fun part –what to put on your own digital nomad packing list. I still love browsing other people’s digital nomad packing lists; you know what I’m talking about. Those bloggers who take really great photos of their belongings all neatly laid out on the floor. Their backpacks are still shiny because they’ve never seen a monsoon. Their gear is the result of months of carefully read product reviews. *They’re so damn proud.* As they should be, they survived The Purge and rewarded themselves with their new nomad gear *–everything in their luggage has a purpose*. 
**Related:** Unique Travel Gifts for Digital Nomads #### What to Bring The same rules of The Purge apply to your digital nomad packing list; if you’re not going to use it often or it isn’t going to help enrich your life somehow, like help you make money online or simplify your life, it doesn’t go in your backpack or suitcase. And you’re in luck. I’ve written about nomad gear extensively with the help of my wife. We have several articles written about nomad gear: - Digital Nomad Packing Lists ( *His*and*Hers*) - 60+ Tools for Business Travel - Big Fat List of Unique Travel Gifts for Digital Nomads - Best GPS Trackers for Luggage, Kids, Pets, and Personal Safety - Tools to Speed Up Internet Between those six massive posts, you’re all set. We cover the best backpack for digital nomads, plug adapters, solar chargers, clothing for hot or cold temperatures, and a bunch of other things I know you’ll find helpful for your nomadic lifestyle. My top 5 digital nomad packing list items: - Skyroam Solis portable international hotspot - Osprey Farpoint 70 backpack with removable day pack - Dell XPS 13 9370 laptop - Any clothing made from Merino Wool - FlexJobs annual membership for emergency cash Beyond that, all I really need is my passport, keys, smartphone, and wallet. I’ve upsized and downsized a number of times over the years, but these top 5 make my nomadic lifestyle complete. Our other frequently-used items are just bathing suits, our two GoPro cameras and accessories, Blue Yeti mic, a lot of plain white t-shirts, and a few power banks. If we ever need anything else, we’re big fans of Lazada,* the Amazon of Asia*. It has a cash on delivery payment option so you never get ripped off. It’s where I get all my hard to find health supplements. #### Don’t Buy That New Smartphone.. Yet It’s common for those planning for living a nomadic lifestyle to want to upgrade their smartphone before they leave. Here’s why it’s a bad idea. The US and many Western countries don’t have high quality, cheaper smartphone brands available. In Asia for example, it’s easy to get a top notch smartphone made by Asus, Xiaomi, Oppo, Huawei, and others at a fraction of the cost of alternative flagship brands back home. Further to this, dual SIM phones are more expensive and hard to find in the US. In contrast, they’re much more common *and* more affordable in other countries. A dual SIM phone will allow you to play off multiple telephone companies conveniently, so you’re always tethering with the fastest data connection available. And finally, the bands and frequencies employed by US telephone companies and manufacturers are different from those used in other parts of the world. You’re going to be relying on mobile data, don’t chance it with a US smartphone abroad if you can avoid it. Instead, shop around online for smartphone models that are available in your destination country. That way when you arrive, you already have your selection in mind and can budget for it. ### 5. Destination Where you headed? Bali? Chiang Mai? Austin, Texas? Philippines? This one’s entirely on you, but I can throw my two cents into the hat. If you’re not totally comfortable to go completely solo or still learning how to become a digital nomad, I suggest you go to a popular digital nomad destination. That way you’ve got some other local nomads to make friends and network with. In a sense, the world is your own personal digital nomad academy campus. 
Nomads who have been at it a long time tend to hang with more settled nomads, and beginners are eager to mingle with others who are going through the same struggle. The digital nomads you meet in the beginning will likely make for fast friends of the life-long variety, and that’s why your first digital nomad destination is the most important one. It’s a launch pad for more than a lower cost of living and an improved quality of life. So choose wisely. Of all the top digital nomad destinations, I think most can agree with me that the following are some of the best places to start: - Chiang Mai, Thailand - Bali, Indonesia - Gran Canaria, Canary Islands - Medellín, Colombia - Budapest, Hungary - Prague, Czech Republic If you head to Thailand, it’s worth mentioning that Bangkok, Ao Nang in Krabi Province, Koh Lanta and Koh Tao are great places to live. I’d also recommend visiting Khao Sok National Park because they have floating bungalows and a pristine limestone-bottom freshwater lake. Currently I’m in Palawan, Philippines. I love it; I have a family here now, which makes it easier. In the future my wife and I will be writing a number of guides for this nomad-friendly location. I’ve already discussed important considerations when choosing your first nomad destination way back in the Logistics section of this How to Become a Digital Nomad guide –so be sure to scroll back up and read it again when you’re ready to select your first digital nomad destination. ### 6. Getting Over the Mindset Molehill The key takeaway from this nomadic lifestyle guide is to simplify, simplify, simplify –both mentally and materially. Keep an open mind, and don’t try to recreate your life back home, abroad. There’s no bigger mystery to becoming a digital nomad beyond developing the right perspective and expectations over time, and learning how to be a resourceful traveler in the countries you inhabit. The standalone act of becoming a digital nomad is like assembling IKEA furniture; it’s little more than an order of operations. Anyone can “crack the code” if they want to. Don’t mystify it, fear it, or build it up in your mind bigger than it should be. Whether you’re a graduate fresh out of university with only a gap year to spare –or a high school dropout who feels like their prospects are limited, you’ve probably got all the life experience you need in your head to travel the world and get paid. For everything you might lack, there’s Skillshare and entry-level remote jobs for those with little or no work experience on their CV. You just read an entire guide that deconstructed and boiled down over six years of nomad life experience –there are no secrets, you now know everything you need to know about how to be a digital nomad. And to make it even easier, download the free *How to Become a Digital Nomad* checklist at the end of this article. It’s not much, but it will cover the basics you might otherwise forget to bring or plan for –and downloading it will add you to our mailing list so we can keep in touch. #### Yes, You Can! This last lap of the article speaks to a *very* small subset of readers who are on the fence and likely to consider giving up before making a commitment. The window shoppers and critics who have an “*I can’t, because..*” attitude. You know who you are. If you want to become a digital nomad, it’s time to check your mindset and use it to push yourself forward, not push yourself down. There are a lot of obstacles to overcome before you can be a digital nomad.
I won’t belittle them, I know it’s a tall order. **“I don’t have the skills you do!”** The worst thing you can do is try to reinvent yourself or learn an entirely new skill *if you’re not ready* just because someone told you that’s what they did, and it works. Because what works for them might not work for you –and that’s okay. *Let it be okay!* Let you be enough for you, just build on what you already know. Don’t get stuck in a loop of starting over from scratch. Don’t compare yourself to others. Learning how to become a digital nomad is already an emotional roller coaster. Our reasons for wanting to live a nomadic lifestyle overcomplicate things. For some reason, stress makes us look for holes to fill instead of seeing our existing marketable strengths clearly. If you’re good at something, learn how to sell that something. Like I said earlier, how digital nomads make money typically comes down to identifying 1-3 things you’re already good at, learning how to market them, and then charging a fair price. “Marketing yourself” is rhetoric for learning some SEO and articulating what you offer on a website like a blog, Facebook group, LinkedIn, or Upwork because they all have a built-in search function. If you had to learn *one* new skill to complement your existing skills, make it internet marketing –SEO, writing convincing *words*, and getting your message right. Don’t waste your time on *pretty* personal branding –focus on your substance. You don’t need to fake it till you make it –remote work is mostly results-oriented. Too many digital nomad wannabes, mommy bloggers, boss babes, startup bros, and “entrepreneurs” put their image before their substance and their success is imaginary. Don’t fall into that trap. The best way to succeed is by helping others overcome obstacles that you’ve already overcome. If you don’t learn how to sell yourself effectively, you’ll wind up working for someone else who can and will sell you like a crate of apples. Save the personal reinvention until after you reach the beach and you’ve lowered both your cost of living, and your stress levels. I went to college around 26 and took web design and Macromedia Flash animation courses. Neither serves me today because a lot has changed since I took them in 2010. My education is obsolete. Yes, the nerdy among you will say *“Macromedia?”*. To make my nomad life more difficult, I’m an ecommerce consultant who hasn’t touched ecommerce much since I became a nomad. My relevant “digital nomad skills” in 2013 were writing and communication, with a dash of WordPress and Photoshop knowledge. *That’s all I started with.* Every year I invest time into online courses to upgrade my skills –it’s the only way to survive a digital nomad lifestyle long-term. When you look around Hobo with a Laptop you’re looking at a bunch of skills I learned to do myself after becoming a digital nomad. In my past life I used to pay other people on my team to do what I can do now. **“I’m a visible minority! I’m from the developing world!”** The digital nomad lifestyle is not a socio-economic class. The opportunity is there, but the outcome won’t be the same for everyone. You may have passport limitations, no money in your savings account, and no one close by to embrace the journey with you. Welcome to the club. At a minimum, $600 – $1,000 USD per month is more than enough for a digital nomad to live on a beach in Thailand or the Philippines (and it’s often a lot more than the locals make).
Early adopters usually get under the bar before they move it, although some creativity may be required. For example; If you’re able to comfortably read this English blog post over an internet connection, odds are in your favor that you could: - Contact an SEO company in your country and ask to be a writer, as they often go through writers like water - Seek out successful bloggers, digital nomads, YouTubers, podcasters, or people who work from home –anyone who “gets it”– look for areas you can help them move up in their career, and then offer services appropriately - Apply for an entry-level remote job from a reputable business –simple, repetitive tasks that are typically reserved for low-level employees and interns; the lower wage won’t matter much when you’re living in a country where that currency goes further Nomad beginners with fewer online job options could make a liveable digital nomad salary off creating Pinterest pins on Fiverr, for instance. My wife Oshin, a Filipina digital nomad, made enough on Fiverr that she didn’t depend on anyone but herself financially. And she doesn’t consider it hell, either –she’s still doing it for our peers when she isn’t financially obligated to. If you want it bad enough, you do what you gotta’ do. Years back, I had done tasks for numerous friends and peers during the slow times because I was good at identifying ways I could help them, and then demonstrating it with my own online activities (our blog being the number one icebreaker). Although we’re not hiring and we don’t do job placements for our readers (that’s what this website is for), we and our peers frequently call upon freelancers we’ve built relationships with over the years. It’s not weird to cold approach people with a suggestion and a solution if it’s valuable. On US accounting; In some situations, non-US citizens are overlooked for remote jobs they’re qualified for. On the internet, the color of your skin is invisible. Although your billing address isn’t. In such cases, aspiring digital nomads from the developing world can get a US billing address with Earth Class Mail and a US bank account with Payoneer. It won’t get you a US Tax ID but it makes you easier to work with and still looks better on paper, if looking good on paper matters. Progress is happening, fast; Digital nomads who are successful today and started out 5 – 10 years ago didn’t have access to the free information and online tools you’ve got right here in front of you. Today, there’s free and cheap educational resources like Code Academy, iTunes U, TED talks, and Skillshare. There’s a whole internet of remote jobs out there. For every 10 rejections, there’s going to be one paycheck. **“I have kids!”** **“I’m a woman!”** **“I’m too old!”** I’ve met single income nomad families with young kids, 70-year old widowed retirees with successful ecommerce sites, solo female travelers who make really good money blogging, backpackers who are YouTube gods, plenty of single moms with stay at home mom jobs, and too many nomad freelancers from developing countries to count. (I mostly live in the Philippines with my wife, a country full of nomad virtual assistants —it’s how we met). Our kids are going to enter this world as digital nomads. Over a third of my own personal digital nomad community is composed of people of color, and a majority of them are women. In fact, according to that survey by FlexJobs, 70% of digital nomads are female. Again, that study is here. You can, you can, you can. 
If you give yourself *permission.* When FlexJobs asked working nomads to choose the career field they work in, these were the top 10 digital nomad jobs they reported: - Writing - Education & Training - Administrative - Customer Service - Art & Creative - Computer & IT - Consulting - Data Entry - Marketing - Project Management Legitimate online jobs like data entry, customer service, writing, creative, and internet marketing will level the playing field for most people reading this post, all over the world. I made the links above clickable so you can take a look at what they have available, *right now*. **Related:** Top 25 Businesses Hiring Remote Job Positions In spite of what some social justice warriors have told me in the comments of this blog, becoming a digital nomad has little to do with the privilege of your race, religion, gender, trust fund, or how the digital nomad lifestyle “is represented”. It’s not all fat white nerdy men lounging on beaches. That thinking is as long in the tooth as the *intersectional identity politics* that go with it. Special Discounts for Lonely Planet Check out Lonely Planet’s book shop for more travel information. With our link you are eligible for discounts other people won’t receive, and free shipping on orders over $40 USD (or $50 CAD). They often run *Buy One, Get One* deals –so check it out. #### Kill Your Idols & Choose Your Own Adventure Becoming a digital nomad is a *pretty new pet* for the newly converted, and many of us are so damned proud that we made the leap and abdicated convention that we want to inspire others to do the same. The proliferation of happy-go-lucky photos on Instagram is only a symptom of a greater effort that doesn’t translate well on Instagram –don’t look at the finger, look at where it’s pointing. Being a nomad is so subjective it can’t be distilled and taught because there are far too many variables. No one is going to have all the answers because *any* lifestyle is the sum of many parts. Even more so, age groups shake things up quite a bit. As per that earlier FlexJobs study: 27% of digital nomads identify as Millennials, 41% identify as Generation X (that’s where I’m at), and 32% identify as Baby Boomers. A 24-year-old programmer’s digital nomad lifestyle is going to vary dramatically from that of a copywriter who’s 49. Blog posts and “digital nomad courses” struggle under the weight of trying to be everything to everyone, as would any digital nomad coach. At most, a nomad business coach might prescribe courses to flesh out new skills, take on some of the work for you, make a few introductions, and be an accountability buddy –that’s how I approach it although I don’t take many nomad coaching requests. Travel Insurance, Simplified We recently reviewed __World Nomads__ and __Safetywing__, the top two travel insurance providers among long-term travelers and digital nomads. Safetywing is incredibly economical, but is it comprehensive enough for your needs? World Nomads offers more coverage, but is it *too much*? Find out, read our side-by-side comparison. ## In Conclusion Congrats, you just read 8,000 words and counting. If you made it this far, I commend your commitment. In this guide I did my best to cover all the nuts and bolts, but nobody can tell you what a nomad “should” be. That’s for you to make up as you go along.
It’s a race where your only competition is yourself. Focus, good humour, and commitment are everything. We all have our reasons for being a nomad. For some, it’s scouring the world for their soul mate, or to incubate an idea and launch a startup. For some local nomads it’s to escape social unrest and get away from a toxic situation back home. If in the end, after all this, you decide *not* to become a nomad –at least you gave it an honest, hard look. There’s absolutely no shame in taking a pass, you can always pick this up in a few years, or not at all. Although by my guess, we’ll all be working nomads one day in one form or another. If you *do* decide to move forward with your dream lifestyle, I have one more article for you to read: How to Handle Criticism for Deciding to Become a Digital Nomad. It’s a timely read if you’re about to go through with it. To help you on your way, I’d also like to share my *How to Become a Digital Nomad checklist*; it’s a downloadable PDF you can print to keep track of your progress. It’s nothing fancy, but it should do the job. If you found our guide helpful or think it’s a constructive conversation-starter, I’d really appreciate it if you share it on Facebook, Reddit, Pinterest, Twitter –you name it. **Download the Checklist**. You’ll be automatically subscribed to our newsletter when you provide your email to Gumroad. Direct download. Did I leave anything out? If you have any questions about how to become a digital nomad with no experience or there’s a quote or point made in this article that stood out to you, please share it in the comments. Big love from Palawan. █
true
true
true
After 10 years a digital nomad, learn what to setup in advance, packing list, everything you need in 10,000 words.
2024-10-13 00:00:00
2024-10-11 00:00:00
https://hobowithalaptop.…/uploads/8-7.jpg
article
hobowithalaptop.com
Hobo with a Laptop
null
null
28,684,452
https://memod.com/MrBusiness/the-difference-between-risk-and-luck-in-investing-590
The difference between risk and luck in investing...
Brian Reid
# The difference between risk and luck in investing...

Sep 24, 2021 · 2 mins read

**Luck and risk are two sides of the same coin**. It’s impossible to grasp one without understanding the other.

We typically define risk in terms of questionable decisions that can lead to *bad* results. Luck, meanwhile, is when questionable decisions lead to *good* results.

The concepts of risk and luck exist because **our actions cannot determine 100% of our outcomes**. In a world with almost eight billion people, the impact of others’ choices on us can be far greater than our own.

**Risk forces us to recognize that certain things are outside of our control**. This information informs our decision-making so we can make appropriate adjustments. Experiencing good fortune has the opposite effect: **luck tricks us into thinking we’re in control**, which is dangerous.

When it comes to investing, recognizing and managing risk are considered hugely important. The same can’t be said about luck. This is why you’ve never heard of a luck consultant. There’s no requirement to disclose lucky breaks in financial reports. **It’s a double standard.**

The reason we discount luck is that we’re wired to identify patterns of *what works*. The prospect of cracking some secret formula and being able to repeat it for future gains is an irresistible narrative. **Luck stokes our ego and makes sense of chaos.**

**A good investor should account for luck as much as risk**. VCs, for example, operate under the assumption that about 50% of all investments will fail. By factoring in the role of luck (success that is not structurally repeatable) you’ll be better equipped to navigate uncertainty.

When the going is good, **people tend to minimize the amount of risk and luck involved equally**. The difference is that when risk halts your winning streak, it’s instantly clear what went wrong. It takes us much longer to see the role that luck played.

**Risk can harm confidence** in decision-making, even though the outcome is simply clarifying reality and guiding you towards more informed choices. **Luck boosts confidence without improving ability**, setting off a vicious cycle where we leave no room for error and ignore luck's role.

Being aware of risk and luck empowers you to recognize that you can’t control everything. This frees you up to focus on the few things you *can* control. Whether you learn to manage risk and luck, or choose to ignore them, **what you can’t do is avoid them**.
true
true
true
Luck and risk are two sides of the same coin. It’s impossible to grasp one without understanding the • 2 mins read
2024-10-13 00:00:00
2022-07-09 00:00:00
https://cdn.memod.com/images/hUWP7K.png
null
memod.com
memod.com
null
null
355,650
http://news.bbc.co.uk/2/hi/technology/7704709.stm
PC users to invent ideal machine
Maggie Shiels
By Maggie Shiels
Technology reporter, BBC News, Silicon Valley

*Image caption: Creator 'LBOhnoes' wants music blasting out while they work*

**Intel and manufacturer ASUS have launched a project asking people to say what they would like to see in a PC.**

The companies are asking people to "dream the impossible" to help design the first community-designed PC.

A website, WePC.com, has been set up to allow people to share and comment on ideas to "enable a global conversation about the ideal elements of a PC."

Both companies insist the project is not simply cheap talk, saying there is a commitment to building the machine.

"The spark for innovation can come from anywhere," said Intel's Mike Hoeffinger. He added that both companies have joined together "to tap into the creative energy of consumers...and give people a voice in the design of technology they use every day."

Technology companies have always asked for customer feedback, but this is being billed as a new approach to product design and to customer involvement, says Lillian Lin, the director of marketing and planning at ASUS.

"By empowering WePC.com users to play a role in the design process, we expect to deliver cutting-edge community-designed products that address a consumer vision of the dream PC," said Ms Lin.

**"Ghetto blaster laptop"**

The mission statement for WePC.com is simple: "You dream it. ASUS builds it. Intel inside it."

*Image caption: The companies will also award prizes to some for their creative efforts*

"Your designs, feature ideas and community feedback will be evaluated by ASUS and could influence the blueprint of an actual notebook PC built by ASUS with Intel inside," said the website.

"Everyone is very aware there is a commitment from everyone involved," said Josh Mattison of Federated Media, which is involved in the marketing campaign.

"If you start a conversation with your customers, the first step is knowing their voices will be heard and incorporating that into those companies' larger thought processes. That is absolutely something you can expect to see."

The community will be divided into what Intel has called three "conversation groups". They will address three of the most popular consumer PC categories: netbooks, notebooks and gaming notebooks.

WePC.com has urged users to let their imagination run wild.

"There is no limit to creativity," said Mr Mattison.

*Image caption: "Slickmachines" wants a laptop impervious to water damage*

"And there is no forum quite like this for expressing that. Let those ideas flow, whether it's concerning something purely functional like battery life or something a bit more 'out there' like a computer needing a haircut every two weeks," he said.

Some of the suggestions for the community-designed PC already include a ghetto blaster laptop with woofers and tweeters and a "happy laptop" that would wake the user up in the morning.

It is unlikely that any consumer-inspired PC will make the market any time soon and it could be well into 2009 before the "dream PC" is turned into reality.
true
true
true
Intel and computer manufacturer ASUS are asking consumers to come up with their wildest ideas to help design the ideal PC.
2024-10-13 00:00:00
2008-11-02 00:00:00
null
null
null
BBC
null
null
13,114,637
https://blog.versionone.com/versionone-opensource-integrations-for-collaboration/
Catalyst Blog | Digital.ai
null
# Catalyst Blog

## More From The Blog

## Setting Up Security on Client-Side Scripting

Learn how to protect client-side scripts from security threats. This guide covers essential practices, encryption techniques, and tools to enhance security.

## Crash Logs and Obfuscation: A Crash Course

Learn how to debug iOS app crashes using dSYM files, understand their contents, and balance app security with effective crash reporting and analysis.

## Agile Development Process Models in Software Engineering

Enhance your software development with Agile Process Models. Discover core principles, methodologies, and best practices to improve product quality.

## Guide: Agile Development Process

Learn about the Agile Development Process in this comprehensive guide. Explore the greater details of roles in Agile teams, best practices, & measuring success.

## Examples of Client-Side Security and Threats

Stay ahead of client-side security issues. Learn about example threats, implementation strategies, and tools to effectively safeguard your client applications.

## Client-Side Security Threats to be Aware Of

Learn more about client-side security threats such as XSS and CSRF. Discover practical ways to secure the data accessed by your client-side apps.

## The Security Impact of Good Renaming

Explore the impact of effective code renaming on app security. Learn how Organic Renaming enhances protection against reverse engineering and analysis.

## Digital.ai Application Security: First to Support iOS 18 GA

Digital.ai Application Security is first-in-market for protection for iOS 18 GA apps, offering advanced security features & seamless implementation.

## Platform Engineer vs. Software Engineer: What’s the Difference?

Explore the differences between Platform and Software Engineers. Understand their unique roles, skills, and the impact they have on development processes.
true
true
true
Read our blogs to discover how to unify, secure, and generate predictive insights across the software lifecycle to enhance business value.
2024-10-13 00:00:00
2024-05-10 00:00:00
https://digital.ai/wp-co…using-tablet.png
article
digital.ai
Digital.ai
null
null
8,528,808
https://medium.com/@chrishiggins/on-writing-experimentation-and-the-magazine-23d9b6271bfe
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
38,384,830
https://github.blog/changelog/2023-11-22-deprecation-notice-security-advisories-in-private-repositories/
Deprecation notice: security advisories in private repositories · GitHub Changelog
Wp-Block-Co-Authors-Plus-Coauthors Is-Layout-Flow
Shortly after releasing Copilot content exclusions on November 8, 2023, our team observed that the feature was causing clients to be incorrectly blocked from using Copilot. This necessitated an immediate rollback of this feature.

**What Happened?**

Once the feature was enabled for all Copilot Business customers, we observed a spike in errors and some end-users being completely blocked from using Copilot. The problem was related to the way content exclusions policies are fetched from the client.

**Current Actions and Next Steps:**

Our engineering team is engaged in deploying the necessary fixes. We have identified the faulty code in the client and are also deploying more verifications both server and client side to ensure this does not happen again. However, we want to approach the reintroduction of this feature with caution. Customers who had previously set up a content exclusions configuration are not affected by the rollback.

**We expect to re-deploy the feature within the next few weeks.**

Join the discussion within GitHub Community.
true
true
true
As of February 15th, 2024, you will no longer be able to create security advisories in private repositories. Formerly published advisories will no longer be available. This change does not…
2024-10-13 00:00:00
2023-11-22 00:00:00
https://github.blog/wp-c…g?fit=1200%2C630
article
github.blog
The GitHub Blog
null
null
26,471,947
https://www.theguardian.com/uk-news/2021/mar/15/cap-on-trident-nuclear-warhead-stockpile-to-rise-by-more-than-40
Cap on Trident nuclear warhead stockpile to rise by more than 40%
Dan Sabbagh
Britain is lifting the cap on the number of Trident nuclear warheads it can stockpile by more than 40%, Boris Johnson will announce on Tuesday, ending 30 years of gradual disarmament since the collapse of the Soviet Union. The increased limit, from 180 to 260 warheads, is contained in a leaked copy of the integrated review of defence and foreign policy, seen by the Guardian. It paves the way for a controversial £10bn rearmament in response to perceived threats from Russia and China. The review also warns of the “realistic possibility” that a terrorist group will “launch a successful CBRN [chemical, biological, radiological or nuclear] attack by 2030”, although there is little extra detail to back up this assessment. It includes a personal commitment from Johnson, as a last-minute addition in the foreword, to restore foreign aid spending to 0.7% of national income “when the fiscal situation allows”, after fierce criticism of cuts in relief to Yemen and elsewhere. The 100-page document says the increase in the nuclear warheads cap is “in recognition of the evolving security environment” and that there are “developing range of technological and doctrinal threats”. Campaigners warned the UK was at risk of starting a “new nuclear arms race” at a time when the world is trying to emerge from the Covid pandemic. Kate Hudson, the general secretary of the Campaign for Nuclear Disarmament (CND), said: “With the government strapped for cash, we don’t need grandiose, money-wasting spending on weapons of mass destruction.” The commitment is one of the most notable in the integrated review, a landmark post-Brexit review of defence and foreign policy, which also includes: - A clear statement that Russia under Vladimir Putin represents an “active threat” but nuanced language on China, which is described as posing a “systemic challenge” in a manner unlikely to please Conservative hawks on the party’s backbenches. - A commitment to launch an additional sanctions regime giving the UK “powers to prevent those involved in corruption from freely entering the UK or channelling money through our financial system” - An aspiration for the UK to be a “soft power superpower” with praise for the BBC as “the most trusted broadcaster worldwide” despite Downing Street boycotting the broadcaster last year. The British monarchy is also cited as contributing. The review began in the aftermath of the 2019 general election and is intended to help define the prime minister’s “global Britain” vision and shape future strategic direction, after leaving the EU, until 2030. It contains only a handful of passing references to the bloc, arguing instead for an “Indo-Pacific tilt” in which the UK deepens defence, diplomatic and trade relations with India, Japan, South Korea and Australia in opposition to China. “We will be the European partner with the broadest and most integrated presence in the Indo-Pacific,” the review says, while arguing that investing in cyberwarfare capabilities and deploying the new Queen Elizabeth aircraft carrier in the region later this year will help send a message to Beijing. But it is the commitment to significantly increase the cap on nuclear warhead numbers that is the most significant development, coming after the UK promised to run down stockpiles following the end of the cold war. Britain has far fewer warheads stockpiled than Russia, estimated to have 4,300, the US on 3,800 or China, which has about 320. But each warhead the UK holds is estimated to have an explosive power of 100 kilotons. 
The atomic bomb dropped on Hiroshima at the end of the second world war was about 15 kilotons. “A minimum, credible, independent nuclear deterrent, assigned to the defence of Nato, remains essential in order to guarantee our security and that of our allies,” the UK review says in a section explaining the context for the stockpile increase. Stewart McDonald, the defence spokesman for the Scottish National party, which is opposed to Trident renewal, accused the government of being wedded to an outdated defence policy: “For the prime minister to stand up and champion the international rules-based system before announcing in the same breath that the UK plans to violate its commitments to the international treaty on non-proliferation beggars belief.” China lobby groups said they believed the review did not go far enough. A spokesperson for the Inter-Parliamentary Alliance on China said Beijing should not have been omitted from the list of countries engaged in hostile state activities. “This is despite repeated Chinese state-backed cyber-attacks on UK targets and attempts by Chinese government agents to intimidate and threaten UK residents on British soil – and in stark contrast to Russia, Iran and other authoritarian states that have also targeted the UK,” the spokesperson added. Further details of the plans for the armed forces will be contained in an official defence command paper to be published on Monday. That is expected to confirm a cut in the size in the British army to 72,500 – not mentioned in the review document – and investments in pilotless killer drones. One idea not previously mentioned is a tentative proposal to create a citizen’s volunteer force – a “civilian reservist cadre” – potentially to work alongside the military in response to the future crises on the scale of the pandemic.
true
true
true
Exclusive: Boris Johnson announcement on Tuesday will end 30 years of gradual disarmament
2024-10-13 00:00:00
2021-03-15 00:00:00
https://i.guim.co.uk/img…5b4b862884bae177
article
theguardian.com
The Guardian
null
null
9,092,407
http://www.topromp.com/12-ways-master-online-dating-traveling-world/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
10,960,724
https://surkatty.org/2016/01/23/on_abuse_mitigation.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,548,901
https://www.wsj.com/articles/amazon-com-plans-first-air-cargo-hub-1485901557
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
11,886,869
https://www.wired.com/2016/06/lonely-transatlantic-journey-self-sailing-solar-ship/
The Lonely Transatlantic Journey of a Self-Sailing Solar Ship
Jack Stewart
Nearly 400 miles off the Massachusetts coast, a self-sailing, solar-powered, boat is bobbing along all alone. Looking like a very lonely, very miniature cargo ship, it's at the start of a voyage that will hopefully take it more than 3,000 miles across the Atlantic and into the record books. *Solar Voyager* launched from Gloucester, Massachusetts, at the beginning of the month, and is headed, very slowly, toward Portugal. If it survives, it will be the world’s first autonomous surface vessel to cross the ocean, and the first to do it on solar power. It’s not the first to attempt the crossing, however, and the others have not fared well. “Several people have tried, and they didn’t make it,” says Isaac Penny, one of the boat's builders. “A lot of things could go wrong.” Unlike Bertrand Piccard’s upcoming transatlantic flight on the sun-powered *Solar Impulse 2*, the *Solar Voyager* has no human navigator. The computer in control is following pre-programmed GPS waypoints. Every 15 minutes, it reports its position online for everyone to see, along with data like speed, solar power generated, battery level, and local temperature. At 18 feet long, *Solar Voyager* is roughly the size of an ocean kayak, and looks reasonably robust until you see it pictured next to another ship. The aluminum shell is just 2.5 feet across. Early prototypes built from plastic proved too fragile for the ocean conditions in the Atlantic, where waves can easily reach 30 feet high in a storm, and cause trouble even for cruise ships. “It’s pretty rough out there,” says Penny. Almost all of the available upper surface of the wee vessel is given over to solar panels, 280 Watts worth. Below deck are 2.4-kWh batteries to run at night. A Go-Pro is set up to take pictures and short videos which will (hopefully) be retrieved when the boat next encounters a human. That may take a while. *Solar Voyager's* two propellers provide a max speed under five mph, so Penny expects the crossing to take around four months, weather dependent. Penny and his fellow engineer Christopher Sam Soon have day jobs working on medical surgery robots. They built *Solar Voyager* in their free time, undertaking this voyage simply for the challenge. They kept the boat deliberately simple---less complexity means fewer parts that can fail. They skipped the sophisticated charging algorithms to maximize battery storage and allow for overnight sailing, as that would require extra sensors. As it is, the boat just charges as much as it can, when it can, and sails as far as possible overnight. Once the battery's tapped, it drifts along until the sun comes up, hopefully not too far off course. “We have a lot of redundancy in the system,” says Penny. The solar panels are split, so if one part fails the other will still generate electricity. Thanks to dual propellers and rudders, the journey won't be skunked if any one part gets tangled or fouled. “It means that it doesn’t go as fast as it could, but it’s more likely that it will get there,” Penny says. In 2013, a similarly autonomous and solar-powered boat dubbed *Scout* made it 1,300 miles from its starting point in Rhode Island, before losing all contact with its team of builders near where the *Titanic* went down. That's the best any autonomous vehicle's done so far, but manned crossings have been more successful. In 2007, the catamaran *Sun21* made it across the Atlantic completely under solar power. In 2012, the giant *MS Tûranor* travelled 37,286 miles around the world powered by the Sun. 
Penny welcomes the challenge, inspired by the tales of other explorers. “There was a time when I was looking for work, and I may have read too many adventure books,” he jokes. If *Solar Voyager* makes history, Penny and Sam plan to fly to Lisbon to witness landfall. Penny says they're not interested in going for a round-the-world trip, but because *Solar Voyager* is powered by the sun, it is theoretically capable of sailing forever, or at least until something breaks or it gets swallowed by a whale. But for now, the team will settle for a little European vacation.
true
true
true
Solar Voyager is an 18-foot, self-sailing, sun-powered boat headed for Portugal.
2024-10-13 00:00:00
2016-06-10 00:00:00
https://media.wired.com/…ter-Harbor-1.jpg
article
wired.com
WIRED
null
null
14,292,647
https://adland.tv/adnews/bolcom-creates-electric-car-made-223-items-web-shop/901471414
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
15,509,322
https://gravitational.com/blog/troubleshooting-kubernetes-networking/
Troubleshooting Kubernetes Networking Issues
Sasha Klizhentas
# Troubleshooting Kubernetes Networking Issues

Apr 26, 2022

## Troubleshooting Kubernetes

This is the first of a series of blog posts on the most common failures we've encountered with Kubernetes across a variety of deployments. In this first part of the series, we will focus on networking. We will list the issues we have encountered, include easy ways to troubleshoot/discover them, and offer some advice on how to avoid the failures and achieve more robust deployments. Finally, we will list some of the tools that we have found helpful when troubleshooting Kubernetes clusters.

## Network Issue: Traffic forwarding and bridging

Kubernetes supports a variety of networking plugins and each one can fail in its own way. At its core, Kubernetes relies on the Netfilter kernel module to set up low level cluster IP load balancing. This requires two critical modules, IP forwarding and bridging, to be on.

### Kernel IP forwarding

IP forwarding is a kernel setting that allows traffic arriving on one interface to be routed out through another interface. This setting is necessary for the Linux kernel to route traffic from containers to the outside world.

#### How the failure manifests itself

Sometimes this setting could be reset by a security team running periodic security scans/enforcements on the fleet, or it may not have been configured to survive a reboot. When this happens, networking starts failing. Pod to service connection times out:

```
* connect to 10.100.225.223 port 5000 failed: Connection timed out
* Failed to connect to 10.100.225.223 port 5000: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to 10.100.225.223 port 5000: Connection timed out
```

Tcpdump could show that lots of repeated SYN packets are sent, but no ACK is received.

#### How to diagnose

```
# check that ipv4 forwarding is enabled
sysctl net.ipv4.ip_forward
# 0 means that forwarding is disabled
net.ipv4.ip_forward = 0
```

#### How to fix

```
# this will turn things back on a live server
sysctl -w net.ipv4.ip_forward=1
# on Centos this will make the setting apply after reboot
echo net.ipv4.ip_forward=1 >> /etc/sysctl.d/10-ipv4-forwarding-on.conf
```

### Bridge-netfilter

The bridge-netfilter setting enables iptables rules to work on Linux bridges just like the ones set up by Docker and Kubernetes. This setting is necessary for the Linux kernel to be able to perform address translation in packets going to and from hosted containers.

#### How the failure manifests itself

Network requests to services outside the Pod network will start timing out with destination host unreachable or connection refused errors.

#### How to diagnose

```
# check that bridge netfilter is enabled
sysctl net.bridge.bridge-nf-call-iptables
# 0 means that bridging is disabled
net.bridge.bridge-nf-call-iptables = 0
```

#### How to fix

```
# Note some distributions may have this compiled with kernel,
# check with cat /lib/modules/$(uname -r)/modules.builtin | grep netfilter
modprobe br_netfilter
# turn the iptables setting on
sysctl -w net.bridge.bridge-nf-call-iptables=1
echo net.bridge.bridge-nf-call-iptables=1 >> /etc/sysctl.d/10-bridge-nf-call-iptables.conf
```

## Firewall rules block overlay network traffic

Kubernetes provides a variety of networking plugins that enable its clustering features while providing backwards compatible support for traditional IP and port based applications.
One of the most common on-premises Kubernetes networking setups leverages a VxLAN overlay network, where IP packets are encapsulated in UDP and sent over port 8472.

#### How the failure manifests itself

There is 100% packet loss between pod IPs, either with lost packets or destination host unreachable errors.

```
$ ping 10.244.1.4
PING 10.244.1.4 (10.244.1.4): 56 data bytes

--- 10.244.1.4 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
```

#### How to diagnose

It is better to use the same protocol to transfer the data, as firewall rules can be protocol specific, e.g. they could be blocking UDP traffic. `iperf` could be a good tool for that:

```
# on the server side
iperf -s -p 8472 -u
# on the client side
iperf -c 172.28.128.103 -u -p 8472 -b 1K
```

#### How to fix

Update the firewall rule to stop blocking the traffic. Here is some common iptables advice.

## AWS source/destination check is turned on

AWS performs source destination check by default. This means that AWS checks if the packets going to the instance have the target address as one of the instance IPs. Many Kubernetes networking backends use target and source IP addresses that are different from the instance IP addresses to create Pod overlay networks.

#### How the failure manifests itself

Sometimes this setting could be changed by Infosec applying account-wide policy enforcements on the entire AWS fleet, and networking starts failing. Pod to service connection times out:

```
* connect to 10.100.225.223 port 5000 failed: Connection timed out
* Failed to connect to 10.100.225.223 port 5000: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to 10.100.225.223 port 5000: Connection timed out
```

Tcpdump could show that lots of repeated SYN packets are sent, without a corresponding ACK anywhere in sight.

#### How to diagnose and fix

Turn off source destination check on cluster instances following this guide.

## Pod CIDR conflicts

Kubernetes sets up a special overlay network for container to container communication. With an isolated pod network, containers get unique IPs and avoid port conflicts on a cluster. You can read more about the Kubernetes networking model here.

The problems arise when Pod network subnets start conflicting with host networks.

#### How the failure manifests itself

Pod to pod communication is disrupted with routing problems.

```
$ curl http://172.28.128.132:5000
curl: (7) Failed to connect to 172.28.128.132 port 5000: No route to host
```

#### How to diagnose

Start with a quick look at the allocated pod IP addresses:

```
$ kubectl get pods -o wide
NAME                       READY     STATUS    RESTARTS   AGE       IP               NODE
netbox-2123814941-f7qfr    1/1       Running   4          21h       172.28.27.2      172.28.128.103
netbox-2123814941-ncp3q    1/1       Running   4          21h       172.28.21.3      172.28.128.102
testbox-2460950909-5wdr4   1/1       Running   3          21h       172.28.128.132   172.28.128.101
```

Compare the host IP range with the Kubernetes subnets specified in the apiserver:

```
$ ip addr list
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:2c:6c:50 brd ff:ff:ff:ff:ff:ff
    inet 172.28.128.103/24 brd 172.28.128.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe2c:6c50/64 scope link
       valid_lft forever preferred_lft forever
```

The IP address range could be specified in your CNI plugin or the kubenet pod-cidr parameter.

#### How to fix

Double-check what RFC1918 private network subnets are in use in your network, VLAN or VPC and make certain that there is no overlap.
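One way to make that comparison concrete is to set the pod CIDRs recorded on the nodes against the subnets configured on the hosts. The following is a minimal sketch, assuming `kubectl` access to the cluster; the commands only read state so you can eyeball the ranges side by side:

```
# Pod CIDR allocated to each node, as recorded in the node spec
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'

# IPv4 subnets actually configured on this host
ip -4 -o addr show | awk '{print $2, $4}'

# A pod CIDR that falls inside one of the host/VPC subnets
# (e.g. both sitting in 172.28.0.0/16) is the overlap described above
```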
Once you detect the overlap, update the Pod CIDR to use a range that avoids the conflict.

## Troubleshooting Tools

Here is a list of tools that we found helpful while troubleshooting the issues above.

### tcpdump

Tcpdump is a tool that captures network traffic and helps you troubleshoot some common networking problems. Here is a quick way to capture traffic on the host to the target container with IP 172.28.21.3.

We are going to exec into one container and try to reach another container:

```
kubectl exec -ti testbox-2460950909-5wdr4 -- /bin/bash
$ curl http://172.28.21.3:5000
curl: (7) Failed to connect to 172.28.21.3 port 5000: No route to host
```

On the host running the target container, we capture traffic related to the container's IP:

```
$ tcpdump -i any host 172.28.21.3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
20:15:59.903566 IP 172.28.128.132.60358 > 172.28.21.3.5000: Flags [S], seq 3042274422, win 28200, options [mss 1410,sackOK,TS val 10056152 ecr 0,nop,wscale 7], length 0
20:15:59.903566 IP 172.28.128.132.60358 > 172.28.21.3.5000: Flags [S], seq 3042274422, win 28200, options [mss 1410,sackOK,TS val 10056152 ecr 0,nop,wscale 7], length 0
20:15:59.905481 ARP, Request who-has 172.28.21.3 tell 10.244.27.0, length 28
20:16:00.907463 ARP, Request who-has 172.28.21.3 tell 10.244.27.0, length 28
20:16:01.909440 ARP, Request who-has 172.28.21.3 tell 10.244.27.0, length 28
20:16:02.911774 IP 172.28.128.132.60358 > 172.28.21.3.5000: Flags [S], seq 3042274422, win 28200, options [mss 1410,sackOK,TS val 10059160 ecr 0,nop,wscale 7], length 0
20:16:02.911774 IP 172.28.128.132.60358 > 172.28.21.3.5000: Flags [S], seq 3042274422, win 28200, options [mss 1410,sackOK,TS val 10059160 ecr 0,nop,wscale 7], length 0
```

As you can see, there is trouble on the wire: the kernel fails to route the packets to the target IP. Here is a helpful intro on tcpdump.

### netbox

Having a lightweight container with all the tools packaged inside can be helpful.

```
FROM library/python:3.3
RUN apt-get update && apt-get -y install iproute2 net-tools ethtool nano
CMD ["/usr/bin/python", "-m", "SimpleHTTPServer", "5000"]
```

Here is a sample deployment:

```
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    run: netbox
  name: netbox
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      run: netbox
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        run: netbox
        env: kube
    spec:
      nodeSelector:
        type: other
      containers:
      - image: quay.io/gravitational/netbox:latest
        imagePullPolicy: Always
        name: netbox
        securityContext:
          runAsUser: 0
      terminationGracePeriodSeconds: 30
```

### Satellite

Satellite is an agent collecting health information in a Kubernetes cluster. It is both a library and an application. As a library, satellite can be used as a basis for a custom monitoring solution. It’ll help troubleshoot common network connectivity issues, including DNS issues.

```
$ satellite help
usage: satellite [<flags>] <command> [<args> ...]

Cluster health monitoring agent

Flags:
  --help   Show help (also see --help-long and --help-man).
  --debug  Enable verbose mode

Commands:
  help [<command>...]
    Show help.
  agent [<flags>]
    Start monitoring agent
  status [<flags>]
    Query cluster status
  version
    Display version
```

Satellite includes basic health checks and more advanced networking and OS checks we have found useful.
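Before reaching for packet captures, a quick end-to-end check from a throwaway pod can confirm whether DNS and pod-to-pod traffic work at all. This is a rough sketch rather than part of the original toolkit: the target address is the sample pod IP from the tcpdump example above, and the busybox image tag is an assumption; substitute your own values.

```
# One-off pod that tests cluster DNS and pod-to-pod reachability, then deletes itself
kubectl run -it --rm nettest --image=busybox:1.36 --restart=Never -- sh -c '
  nslookup kubernetes.default.svc.cluster.local && \
  wget -q -O - -T 5 http://172.28.21.3:5000 || echo "target unreachable"
'
```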
## Conclusion

We have spent many hours troubleshooting kube endpoints and other issues on enterprise support calls, so hopefully this guide is helpful! While these are some of the more common issues we have come across, it is still far from complete. You can also check out our Kubernetes production patterns training guide on Github for similar information. Please feel free to suggest edits, add to them or reach out directly to us [email protected] - we’d love to compare notes! You can also follow us on Twitter @goteleport or sign up below for email updates to this series.

We have productized our experiences managing cloud-native Kubernetes applications with Gravity and Teleport. Feel free to reach out to schedule a demo.
true
true
true
This is part 1 of our series on Troubleshooting Kubernetes, in this part we focus on networking and list the issue we have encountered, including easy ways to troubleshoot.
2024-10-13 00:00:00
2022-04-26 00:00:00
https://goteleport.com/b…@2x.aefeff3f.png
website
goteleport.com
Goteleport
null
null
10,801,786
http://arstechnica.com/cars/2015/12/bmw-thinks-the-future-of-car-ui-is-gesture-control/
BMW thinks the future of car UI is gesture control
Jonathan M Gitlin
BMW has just given us a brief teaser ahead of CES next week. The German automaker will be bringing a new concept car that shows off the company's latest thinking when it comes to interior design and the future of the car user interface. The car, a convertible variant of the extremely clever i8 hybrid sports car, uses an evolution of the gesture control that we first saw this year in the new 7 Series.

Called AirTouch, it uses sensors embedded in the dash near the car's main information display that pick up three-dimensional hand movements, allowing the driver to interact with the infotainment system as if it were a touchscreen—without ever leaving their fingerprints on the LCD. Both driver and front passenger have buttons to activate AirTouch, and BMW says that the new system reduces the number of steps needed to select different functions or options within iDrive (BMW's infotainment system). In part, it does this by preselecting downstream steps as one works through a hierarchy tree, allowing drivers to devote more of their focus and concentration to the task of driving.
true
true
true
AirTouch will appear in an i8 convertible at CES next week.
2024-10-13 00:00:00
2015-12-28 00:00:00
https://cdn.arstechnica.…logo-512_480.png
article
arstechnica.com
Ars Technica
null
null
32,499,034
https://www.freesoft.org/CIE/
Connected: An Internet Encyclopedia
null
Welcome! The Internet Encyclopedia is my attempt to take the Internet tradition of open, free protocol specifications, merge it with a 1990s Web presentation, and produce a readable and useful reference to the technical operation of the Internet. Some of my favorite parts are the essays on Ping and Traceroute and the CIDR and DNS sections of the Course. I'd like to thank all those who have expressed interest and support for this project. Brent Baccala, Editor Connected: An Internet Encyclopedia [email protected] April, 1997
true
true
true
null
2024-10-13 00:00:00
1997-01-01 00:00:00
null
null
null
null
null
null
2,873,208
http://www.theequitykicker.com/2011/08/11/tech-companies-create-small-value-fast-but-take-the-same-time-as-biotech-to-get-to-ipo
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
14,021,879
https://dan.enigmabridge.com/unbreakable-encryption-with-secure-hardware-and-geopolitics/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
39,049,780
https://www.cnbc.com/2024/01/16/openai-quietly-removes-ban-on-military-use-of-its-ai-tools.html
OpenAI quietly removes ban on military use of its AI tools
Hayden Field
OpenAI has quietly walked back a ban on the military use of ChatGPT and its other artificial intelligence tools.

The shift comes as OpenAI begins to work with the U.S. Department of Defense on AI tools, including open-source cybersecurity tools, Anna Makanju, OpenAI's VP of global affairs, said Tuesday in a Bloomberg House interview at the World Economic Forum alongside CEO Sam Altman.

Up until at least Wednesday, OpenAI's policies page specified that the company did not allow the usage of its models for "activity that has high risk of physical harm" such as weapons development or military and warfare. OpenAI has removed the specific reference to the military, although its policy still states that users should not "use our service to harm yourself or others," including to "develop or use weapons."

"Because we previously had what was essentially a blanket prohibition on military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world," Makanju said.

An OpenAI spokesperson told CNBC that the goal regarding the policy change is to provide clarity and allow for military use cases the company does agree with.

"Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property," the spokesperson said. "There are, however, national security use cases that align with our mission."

The news comes after years of controversy about tech companies developing technology for military use, highlighted by the public concerns of tech workers — especially those working on AI.

Workers at virtually every tech giant involved with military contracts have voiced concerns after thousands of Google employees protested Project Maven, a Pentagon project that would use Google AI to analyze drone surveillance footage.

Microsoft employees protested a $480 million army contract that would provide soldiers with augmented-reality headsets, and more than 1,500 Amazon and Google workers signed a letter protesting a joint $1.2 billion, multiyear contract with the Israeli government and military, under which the tech giants would provide cloud computing services, AI tools and data centers.
true
true
true
OpenAI has quietly walked back a ban on the military use of ChatGPT and its other artificial intelligence tools.
2024-10-13 00:00:00
2024-01-16 00:00:00
https://image.cnbcfm.com…57&w=1920&h=1080
article
cnbc.com
CNBC
null
null
2,152,834
http://terrordome.ca/blog/hackers-and-painters-predictions
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,201,125
https://www.newstatesman.com/blogs/media/2012/04/times-nightjack-hack-leveson
The Times and NightJack: an anatomy of a failure
David Allen Green
(This post sets out what Lord Justice Leveson has since described as a “mastery analysis” at paragraph 1.33 of his Report.) The award-winning “NightJack” blogger was outed in 2009 by the *Times* of London. At the time the newspaper maintained that its controversial publication of a blogger’s real identity was based on brilliant detective work by a young staff journalist. However, it is now clear that the blogger’s identity was established by unethical and seemingly unlawful hacking of the blogger’s private email account. If the hack was not bad enough, the Leveson Inquiry has also heard how the newspaper in effect misled the High Court about it when the blogger sought an urgent injunction against his forced identification. The blogger lost that critical privacy case and it is possible that the case could have been decided differently if the *Times* had disclosed the hack to the court. The following is a narrative of what happened. It reveals a depressing sequence of failures at the “newspaper of record”. Most of the sources for this post are set out on the resource page at my Jack of Kent blog. ## Background: the police blogger who won the Orwell Prize NightJack was an outstanding blog and its author was one of the best the blogging medium had ever produced. The blog was an unflinchingly personal account of front-line police work set in the fictional — and generic — urban environments of “Smallville” and “Bigtown”. The world it described was very different from the glamorous police shows on television. Readers who otherwise would not know what police really did and what they had to put up with could now gain a proper understanding of the modern police officer’s lot. The blog’s narrator — “Jack Night” — could have been any police officer working under pressure in any town or city. NightJack was a perfect example of the value of blogging, providing a means — otherwise unavailable — by which an individual could inform and explain in the public interest. After he was outed, the author explained how the blog was started and how NightJack gained a good following: It all began around December 2007 when I began to read blogs for the first time. I read blogs by police officers from all over the UK. They were writing about the frustrations and the pleasures of what we all refer to as “The Job”. As I read, I began to leave comments until some of those comments were as long as the original posts. Reading and responding made me start to consider my personal feelings about “The Job”. So it was that in February 2008, I made a decision to start blogging for myself as NightJack. That decision has had consequences far beyond anything that I then imagined possible. My head-on accounts of investigating serious crime and posts on how I believed policing should work within society seemed to strike a chord and my readership slowly grew to around 1,500 a day. And then, a year after the blog started, something happened that made NightJack one of the best-known blogs in Britain. ## February to April 2009: NightJack and the Orwell Prize In February 2009, the blogger learned that his work had gained formal recognition: [U]nexpectedly, in February 2009 I was longlisted for the Orwell Prize. In March 2009 NightJack made it on to the shortlist. I realised that what had begun as a set of personal ruminations was achieving a life of its own. I cannot deny that I was happy with the recognition, but at the same time I had the feeling that the Orwell Prize was a big, serious, very public event. 
Win, lose or draw, my blog was about to move out of the relatively small world of the police blogosphere and get a dose of national attention.

On 22 April 2009 NightJack became the first winner in the new blog category of the Orwell Prize, regarded as the leading prize for political writing in the United Kingdom. The judges were clearly impressed; they said of NightJack:

Getting to grips with what makes an effective blog was intriguing — at their best, they offer a new place for politics and political conversation to happen. The insight into the everyday life of the police that Jack Night’s wonderful blog offered was — everybody felt — something which only a blog could deliver, and he delivered it brilliantly. It took you to the heart of what a policeman has to do — by the first blogpost you were hooked, and could not wait to click on to the next one.

However, the winning blogger was keen to maintain his carefully protected anonymity. He arranged for the prize to be collected by a friend and for the £3,000 to be donated to a police charity. He later wrote of the attendant media interest:

The morning after I won the award, there was a leader in the Guardian and a full page in the Sun. The readership went up to 60,000 a day (more people have read NightJack since I stopped writing it than ever read it whilst it was live). My email inbox had offers from newspapers, literary agents, publishers and people who wanted to talk about film rights and TV adaptations. There was a lot of attention heading towards my blog and I was nervous that somehow, despite my efforts to remain unknown, my identity would come out. As an anonymous blogger, I was just another policing Everyman but if it came out that I worked in Lancashire, I knew that some of my writing on government policy, partner agencies, the underclass and criminal justice would be embarrassing for the Constabulary. Also, as an anonymous police blogger I was shielded from any consequences of my actions, but without the protection of that anonymity there were clearly areas where I would have to answer for breaches in the expected standards of behaviour for police officers.

During the next month I began to relax a little. It felt like everything was going to work out and my identity would stay secret. I contacted one of the literary agents and said that the blog was not for sale at any price and that I wouldn’t be trading on the Orwell Prize. There was press and TV attention but nobody seemed to want to publicise who was behind my blog.

## 17 to 27 May 2009 – the hacking of an email account

Unfortunately, this happy situation would last for only a month.

A staff journalist at the *Times* called Patrick Foster had become interested in NightJack. Foster covered the media rather than crime, but he was intrigued by this anonymous police blog that had won the Orwell Prize. As Foster later said:

In the first instance, this was down to the natural journalistic instinct of trying to unmask someone who tries to keep their identity secret.

But Foster was not to use conventional journalistic methods to unmask the blogger.

On or about Sunday 17 May 2009, Foster decided to hack into the NightJack author’s Hotmail account. He did this, it would seem, by “forgetting” the password and guessing the answer to the subsequent security question. The *Times* did not sanction or commission the hack.
From the details available in the email account, Foster was apparently able to identify the author of the blog, as well as obtain the blogger’s private mobile phone number and see correspondence between the blogger and a literary agent.

This hacking exercise was undertaken on Foster’s own initiative and was similar to an exercise he had undertaken as a student journalist at Oxford. (The police originally treated this earlier hack as a potential breach of the Computer Misuse Act 1990 and referred it to the university authorities.) Thus, Foster was not a stranger to email hacking or to the applicable legislation, which does not have any public-interest defence.

On Tuesday 19 May 2009, Foster contacted his line manager, Martin Barrow, the *Times*’s home news editor, about his discovery. First, Foster emailed Barrow: “Martin, sorry to bother you. Do you have five minutes to have a quick chat about a story — away from the desk, down here in the glass box, perhaps?”

It appears Barrow then immediately referred Foster to Alastair Brett, the long-serving *Times* legal manager.

## 20 May 2009 – Foster and Brett have a meeting

Foster emailed Brett the next day: “Hi Alastair, sorry to bother you. Do you have five minutes today? I need to run something past you.”

They then had what proved to be a significant meeting. Two years later, Brett recalled the meeting for the Leveson Inquiry:

I remember Patrick Foster coming to see me on or about 20 May 2009 about a story he was working on. He came into my office with Martin Barrow, the home news editor, who was his immediate line manager. Mr Barrow indicated that Mr Foster had a problem about a story he was working on. From my best recollection, Mr Barrow left shortly after that and Mr Foster and I were left alone. Mr Foster then asked if we could talk “off the record”, ie, confidentially, as he wanted to pick my brains on something and needed legal advice. I agreed. He then told me that he had found out that the award-winning police blogger, known as NightJack, was in fact Richard Horton, a detective constable in the Lancashire Police, and that he had been using confidential police information on his blog. As his activities were prima facie a breach of police regulations, Mr Foster felt there was a strong public interest in exposing the police officer and publishing his identity. When I asked how he had identified DC Horton, Mr Foster told me that he had managed to gain access to NightJack’s email account and as a result, he had learnt that the account was registered to an officer in the Lancashire Police, a DC Richard Horton. This immediately raised serious alarm bells with me and I told him that what he had done was totally unacceptable.

At that first meeting, Mr Foster wanted to know if he had broken the law and if there was a public-interest defence on which he could rely. I had already done some work with Antony White, QC on the discrepancies between Section 32 and Section 55 of the Data Protection Act 1998 (DPA) and the government’s intention of bringing in prison sentences for breaches of S55 of the DPA. I knew there was a public-interest defence under Section 55 of the DPA. I told Mr Foster that he might have a public-interest defence under the section but I was unsure what other statutory provisions he might have breached by accessing someone’s computer as I did not think it was a Ripa (Regulation of Investigatory Powers Act) situation.
I said I would have to ring counsel to check there was a public-interest defence and what other statutory offences Mr Foster might have committed. I cannot now remember if I phoned One Brick Court, libel chambers, while Mr Foster was in my office or shortly thereafter but I do know I spoke to junior counsel around this time and he confirmed that S55 of the DPA had a public-interest defence and it might be available. He did not mention anything about Section 1 of the Computer Misuse Act 1990 during that conversation or point me in that direction.

I do remember being furious with Mr Foster. I told him he had put TNL and me into an incredibly difficult position. I said I would have to give careful consideration to whether or not I reported the matter to David Chappell, the managing editor of the newspaper and the person on the newspaper who was responsible for issuing formal warnings to journalists and could ultimately hire or fire them. As Mr Barrow, the home news editor, had brought Mr Foster up to see me, I assumed that he was also fully aware of Mr Foster having accessed NightJack’s email account and that he, as Mr Foster’s immediate line manager, would take whatever disciplinary action he thought appropriate about a journalist in his newsroom.

I also remember making it clear that the story was unpublishable from a legal perspective, if it was based on unlawfully obtained information. It was therefore “dead in the water” unless the same information — NightJack’s identity — could be obtained through information in the public domain. I told him he had been incredibly stupid. He apologised, promised not to do it again but did stress how he believed the story was in the public interest and how important it was to stop DC Horton using police information on his blog. He said he thought he could identify NightJack using publicly available sources of information. I told him that even if he could identify NightJack through totally legitimate means, he would still have to put the allegation to DC Horton before publication. This process is called “fronting up”, and is an essential element of the Reynolds qualified privilege defence in libel actions.

Foster then told Barrow: “Alastair [Brett] on side. Am trying to take it out of paper this Saturday for three reasons: (1) am away this Friday, (2) want a little more time to put ducks in a row and pix [photographs], (3) want little more space between the dirty deed and publishing.”

The “dirty deed” was presumably the unauthorised hacking of the victim’s email account, to which he had just admitted to the *Times* legal manager.

## 27 May 2009 – the blogger takes legal action

On the morning of Wednesday 27 May 2009, a week after the meeting between Foster and Brett, Detective Constable Richard Horton of the Lancashire Constabulary was told by colleagues that the *Times* picture desk had been in contact asking for photographs.

Then at around lunchtime, Horton received a call on his private mobile telephone number. The caller was Foster. Horton later wrote:

Then one morning I heard a rumour that the Times had sent a photographer to my home. Later in the afternoon came the inevitable phone calls from the Times, first to me and then to Lancashire Constabulary, asking for confirmation that I was the author of the NightJack blog. That was easily the worst afternoon of my life.

As Horton’s lawyer later told the High Court:

[Horton] was approached by a journalist, Mr Pat Foster, claiming to be from the Times newspaper.
Mr Foster told [Horton] that he had identified him as the author of the blog and was proposing to publish his identity as author of the blog together with a photograph of him in the next day’s edition of the newspaper. [Horton] has no idea how Mr Foster identified him as the author of the blog.

Foster later described the same call in a witness statement for the High Court:

On May 27 I contacted Richard Horton by phone and put it to him that he was the author of the blog. He seemed agitated and would not confirm or deny the allegation. In the course of the conversation he admitted that he had had contact with journalists about the blog. He said he was writing a book, but said it could be coincidence that the author of the blog had also written on the blog that they were writing a book. At the end of the conversation I was certain that he was the author of the blog.

Horton was indeed the author of the NightJack blog, as Foster knew before he made the call. However, Horton was not going to simply accept his imminent “outing”. He contacted the Orwell Prize administrators about Foster’s call and they referred him to Dan Tench, an experienced media litigator at a City law firm.

Tench promptly faxed the *Times* to warn in general terms that the publication of Horton as the blogger would be a breach of confidence and a wrongful disclosure of personal information.

Horton’s legal challenge took Brett by surprise and it placed the *Times* in a difficult position. Brett had not thought the outing of Horton would lead to litigation. However, Tench was now demanding an undertaking that the *Times* would not publish the identity of Horton without giving 12 hours’ notice. The *Times* agreed. This meant Tench and Horton would now have to be told well before any publication, allowing them an opportunity to obtain an injunction to prevent publication. Accordingly, the newspaper did not out the blogger the next day as it had intended.

So what public domain information did the *Times* have on 28 May, the intended publication date, which connected Horton with NightJack? It is difficult to be certain, as there is little direct evidence of any investigation taking place before 27 May 2009 though there had clearly been analysis of some of the posts. And, in his call to Horton, Foster seemed to mention the literary agent only as supporting evidence. This detail was presumably taken from the email account, as was the number he dialled. The newspaper appears to have had little more than the information Foster had been able to elicit from the Hotmail account or deduce from comparing some news reports with statements on the blog.

## 28 May 2009 – Horton applies for an injunction

The morning after Foster’s call to Horton, Brett emailed Tench, giving the required “notice that the *Times* would be publishing a piece in tomorrow’s paper about your client being the Night Jack”. Tench replied at lunchtime to confirm that Horton would be seeking a temporary injunction at the High Court. An injunction hearing was hurriedly arranged for 4pm the same day before Mr Justice Teare. This was to be the first of two High Court hearings for this case.

At the initial hearing, Horton’s legal team set out for the High Court the many detailed steps taken by Horton to protect his anonymity. Because of these steps, Horton’s lawyers contended that any identification by Foster could only have been in breach of confidentiality or an invasion of privacy.
At the hearing of the application for the injunction, the barrister for the *Times* (who had not been made aware of the hack) was instructed to say that the identity had been worked out “largely” by detective work:
My instructions, having discussed [the confidentiality] argument in particular with my instructing solicitors and the journalist, who is here, are that the proposed coverage that will be given, which will involve the disclosure of this individual’s identity, is derived … from a self-starting journalistic endeavour upon the granting of the Orwell Prize. It is a largely deductive exercise, in the sense that the blogs have been examined and contemporary newspaper reports have been examined.
This first hearing was a relative success for Horton and his lawyers. It was adjourned to allow the *Times* to put in a skeleton argument and witness evidence at a resumed hearing the following week. In the meantime, the *Times* undertook not to publish its story.
## 29 to 31 May 2009 – the *Times* finds a “golden bullet”
After the first hearing, there was frantic activity at the *Times* to establish that Horton’s identity could somehow be established by entirely public means. Unless this was possible, it was likely that the *Times* would lose at the resumed hearing.
It was at this point, it seems, that Brett realised the *Times* did not actually have a copy of NightJack’s entire blog. Horton had taken the blog down after the call from Foster, and it appeared neither Foster nor Brett had thought ahead to retain a copy before that call was made. So, on Friday 29 May 2009, the day after the initial hearing, Brett asked Tench for a full copy of the NightJack blog:
It is important we see a full copy of the blog in order to make a detailed analysis before the hearing next week.
Why was Brett requesting the blog at this stage? The implication is that the *Times* had yet to make a detailed analysis of the blog’s content. The *Times* was looking for any information which would allow it to show that Horton could be identified by information in the public domain.
On Saturday 30 May 2009 there was a breakthrough. An excited Foster emailed Brett:
Alastair, I cracked it. I can do the whole lot from purely publicly accessible information.
Brett was delighted, and he replied the same day:
Brilliant — that may be the golden bullet. Can you set it out on paper?
This “golden bullet” — discovered some ten days after Foster had first raised the case with Brett, and two days after the first High Court hearing and the original intended publication date — consisted of comments left by Horton on his US-based brother’s Facebook page. To obtain this crucial information, Foster had had to sign into Facebook as a member of the Houston Texas network, but he now had the final detail for the “fronting-up” exercise.
This fortuitous discovery was made on 30 May 2009. But, of course, the *Times* had originally intended to run the story on 28 May, two days before it obtained its “golden bullet”.
## Monday 1 June 2009 – Dan Tench writes an important letter
By that same weekend, Tench was highly suspicious about the real source of the original identification by Foster. So, on Monday 1 June 2009, he wrote a detailed and substantial letter to Brett expressing concern that there had been unlawful interference with Horton’s email account. Tench set out a number of circumstances that gave rise to the strong likelihood that Foster had identified Horton by means of an email hack.
He referred to the comment of the *Times*’s barrister at the first hearing the previous week, that the blogger had been identified only “largely” by a process of deduction. The password incident of 17 May 2009 was now mentioned, and cuttings were included of Foster’s previous hacking activity as a student at Oxford University. (Those cuttings happened to mention the offence under the Computer Misuse Act 1990, the existence of which was news to Brett.)
Tench asked Brett directly to what extent Horton had been identified through a process of deduction. He also asked how Foster could have gained details of Horton’s home address, private phone number and literary agent. Tench even requested express confirmation from Brett that Foster did not at any time make any unauthorised access to any email owned by Horton. As he stated bluntly to Brett:
It would be an extremely serious matter if Mr Foster had made an unauthorised access into any email account.
## Tuesday 2 June 2009 – the misleading letter and witness statement
But Brett now had his “golden bullet”. When he replied to Tench on Tuesday 2 June 2009, Brett started by complaining about Tench not providing full disclosure of the requested NightJack blog:
I think it would be fair to describe your client’s refusal to produce the full copy of the blog in this case as incompatible with that duty to reveal all material that is reasonably necessary and likely to assist the Times’s case and defence at the forthcoming hearing.
Brett then proceeded to deal with Tench’s contentions about the likelihood of email-hacking:
I am still working on Patrick Foster’s witness statement but apart from inserting all page numbers which is being done while I write this letter his witness statement is now almost ready to be served. I therefore attach a copy of it as it sets out through a process of elimination and intelligent deduction your client’s identity can be worked out [sic]. It is important that you read his witness statement as there cannot be any reason for your continuing to withhold the full blog from us when you have seen that process of deduction set out in the witness statement. As regards the suggestion that Mr Foster might have accessed your client’s email address because he has a “history of making unauthorised access into email accounts”, I regard this as a baseless allegation with the sole purpose of prejudicing Times Newspapers’s defence of this action . . . [. . .] As regards his deductive abilities, please see [Foster’s] witness statement.
Brett’s explanation of how identification was obtained and his apparent assurance that the allegation of hacking was “baseless” were at best misleading. Brett himself has put it since that he was being “oblique to an extent which [was] embarrassing”. Foster’s attached witness statement, meanwhile, presented the identification as a purely deductive exercise:
I resolved to try to uncover the identity of its author . . . I began to systematically run the details of the articles through Factiva, a database of newspaper articles . . . Because of the startling similarities between the blog post and the case detailed in the newspaper report, I began to work under the assumption that if the author was, as claimed, a detective . . . I tried to link personal details about the author that are revealed on the blog with real-life events . . . I began to examine the posts on the blog in chronological order to try and find personal information about the author . . . Having undertaken this process, it was clear that the author of the blog was DC Richard Horton . . .
Towards the end of the witness statement, almost as if it was an afterthought, Foster then set out the comments on Horton’s brother’s Facebook page as mere “further confirmation” of the identification, rather than the “golden bullet” of his email exchange with Brett. Foster then signed his witness statement and everything was set for the resumed High Court hearing.
## 4 June 2009 – How the High Court was misled
Neither of the barristers appearing at the resumed hearing had been told by the *Times* about the email hack. One (perhaps unintended) consequence of this was that the barristers could not help but effectively mislead the court through no fault of their own. The hearing thereby proceeded on the incorrect basis that Horton had been identified entirely by the detective work set out in the witness statement. As Mr Justice Eady later recorded in his judgment:
On 4 June 2009 I heard an application in private whereby the claimant, who is the author of a blog known as “Night Jack”, sought an interim injunction to restrain Times Newspapers Ltd from publishing any information that would or might lead to his identification as the person responsible for that blog. An undertaking had been given on 28 May 2009 that such information would not be published pending the outcome. I indicated at the conclusion that I would refuse the injunction but, in the meantime, I granted temporary cover to restrain publication until the handing down of the judgment, when the matter could be considered afresh if need be.
It was asserted in the claimant’s skeleton [argument] for the hearing of 28 May that his identity had been disclosed to the Times in breach of confidence. By the time the matter came before me, on the other hand, Mr Tomlinson was prepared to proceed on the basis that the evidence relied upon from Mr Patrick Foster, the relevant journalist, was correct; that is to say, that he had been able to arrive at the identification by a process of deduction and detective work, mainly using information available on the internet.
[Horton’s barrister] needs to demonstrate that there would be a legally enforceable right to maintain anonymity, in the absence of a genuine breach of confidence, by suppressing the fruits of detective work such as that carried out by Mr Foster.
## 5-17 June 2009 – What the editor of the *Times* knew, and then what he does and does not do
So what did James Harding, the editor of the *Times*, know about any of this, and when? According to his later witness statement to the Leveson Inquiry, Harding came to know of the potential identification of the NightJack blogger on 27 May 2009, the day before the originally intended publication date. He also knew of the possible injunction application the same day, though he was not told of the hack.
Brett subsequently set out his account of events in a memo to David Chappell, the managing editor:
David, you asked me to do you a memo on NightJack and events to date. I first saw Patrick Foster on or about 19 May when he told me he’d been able to identify real live cases that an anonymous police blogger had been writing about. Patrick felt this was seriously off side and probably a breach of the officer’s duty of confidence to the force. He therefore wanted to identify the guy and publish his name in the public interest. He then said he had gained access to the blogger’s email account and got his name. This raised immediate alarm bells with me but I was unaware of the most recent law governing email accounts. After this conversation, I told Patrick: “Never ever think of doing what you have done again.” I said he might just have a public interest defence if anyone ever found out how stupid he’d been. He apologised and promised not to do it again.
Further, he said he would set about establishing Horton’s identity without reference to the email account. I did though say he would have to put it to Richard Horton that he was NightJack.
Last Thursday afternoon, our barrister told the court that through a process of deduction and elimination, Patrick could identify Horton as NightJack, but it looked as though we would lose the application because Horton’s silk was convincing the judge that he was entitled to have the information protected by the law of privacy and confidence.
On Monday of this week, Olswang wrote to us saying: (a) that Patrick had a history of accessing email accounts and pointing us to an incident at Oxford where he’d been temporarily rusticated for accessing someone else’s email account without authority, and (b) that their client’s email had been hacked into. Looking at the old Oxford cuttings about Patrick’s brush with the proctors, I became aware of the possibility that Patrick’s access to Horton’s email account could constitute a breach of Section 1 of the Computer Misuse Act.
Patrick has always believed that his investigation of NightJack was in the public interest. When he came to me to say that he had found out that NightJack was Richard Horton and he had also obtained access to his email account, I made it very clear that this was disastrous, as he should not have done it. Given my own failure to spot what could be a breach of Section 1 of the Computer Misuse Act, I am not in a position to advise sensibly in this case, but I would suggest that Patrick is given a formal warning that if he ever accesses anyone’s computer ever again without authority, whether it’s in the public interest or not, he will be sacked. You might add that the only reason he has not been sacked now is because he was told he might have a public interest defence if he was pursued under the [Data Protection Act].
At that time, it was not clear to Mr Chappell or to me exactly what Mr Foster had done, but the suggestion that he had accessed someone’s email account was a matter of great concern to both of us.
[Foster] had said he had gained access to the blogger’s email account and got his name . . .
. . . failure to spot what could be a breach of section 1 of the Computer Misuse Act . . .
There are three things to consider: (1) What is the editorial value of this story? (2) Given there is a significant legal precedent in this, we’ll want to run something. Given the trouble it’s caused, are we now cutting off our own nose to spite our faces if we decide the story isn’t that interesting? Are we now stuck in a position of having to run something because of the legal processes? (3) What do we do about Patrick? If we publish a piece by Patrick saying how he pieced together the identity (for which Eady praises him!) what happens if subsequently it is shown that he had accessed the files? What are the ramifications for him, you and the editor — does our decision to publish, knowing that there had been a misdemeanour, indicate complicity and therefore real embarrassment or does Eady’s judgment get us off the hook?
Discussion at that meeting focussed on whether publishing a story identifying Night Jack was in the public interest. We debated the arguments for and against. We also discussed whether in effect we had little option but to publish because the Times had pursued High Court action and the injunction had been lifted. In these circumstances, I decided to publish.
Harding amplified this in oral evidence to the Leveson Inquiry: We had a meeting, as I remember, to discuss this issue. The first and biggest one was: what was the public interest argument? And of course, what was very frustrating was that’s exactly the conversation we should have had in advance of going to the High Court. We had it after the fact and after the fact that Mr Eady’s judgment was being handed down, but it was an important argument that we had to address, because on the one hand, some people said, “Why are we trying to identify someone who is essentially a citizen journalist who is an anonymous blogger? Surely, if you like, he’s one of us?” And on the other side there was a question which was: here is a police officer who appears to be in breach of his police duties and also there is a real question about this kind of commentary made anonymously on the internet — the whole issue of anonymity on the web. And, having listened to that debate, I took the view that this was — and [I] still believe that this was — firmly in the public interest. This was what dominated that conversation. The second issue was: what do we do about the fact that this case has been taken without our knowledge to the High Court? What do we do if we’ve taken up the time of the High Court? Mr Justice Eady has ruled that this is in the public interest; we are thereby enabling everyone to publish the identity of NightJack. But more importantly, will the Times not then get known for bringing vexatious lawsuits to the High Court if we don’t honour that judgment? Third, there was a question which was: the reporting had already led to Mr Horton’s identification within the Lancashire Constabulary, and fourth, we believed we had a behavioural problem with one of our reporters. We were going to have to address that. The way it had been presented to me — and that’s obviously different with hindsight — but the way it had been presented to me was there was a concern about Mr Foster’s behaviour but that he had identified him through entirely legitimate means. On that basis, and in the light of all of those four things, I took the decision to publish. However, as Harding later admitted: I can now see that we gave insufficient consideration to the fact of the unauthorised email access in deciding whether or not to publish. This “insufficient consideration” was notwithstanding the separate emails of Brett and Chappell, both emphasising the significance of the hack. Interestingly, at the same meeting on 15 June 2009, Harding instructed that disciplinary proceedings be launched against Foster for a “highly intrusive act”. So it would appear that Harding somehow regarded the hack as being very serious as an employment issue, but somehow not of particular weight as an editorial issue. Nonetheless, Harding later insisted at the Leveson Inquiry: If — if it had been the case that Mr Foster had brought this to me and said, “I’d like to get access to Mr Horton’s email account for the purposes of this story,” I would have said no. If Mr Brett had come to me and said, “Mr Foster has done this; can he continue to pursue the story?”, I would have said no. If Mr Brett had come to me and said, “Do you think we should go to the High Court, given the circumstances of this story?”, I would have said no. However, in my opinion, there was no good reason why Harding could have not said “no” at the editorial meeting of 15 June 2009 in light of the emails of Brett and Chappell, both emphasising the significance of the hack. 
Eady’s judgment was formally handed down the following day and the *Times* website exposed Richard Horton to the world as the author of NightJack. The story was also published in the print edition of 17 June 2009. It was one month to the day from when Horton’s email account had probably been hacked.
## 19 June 2009 to October 2011 — the immediate aftermath
The outing of Richard Horton was controversial. To many observers, it seemed a needless and spiteful exercise by a mainstream media publication. The public interest arguments appeared hollow: no one else had been able to match information in the generic posts with any real-life cases. The supposed “advice” of the blog to those arrested was playfully ironic rather than subversive of policing. There just seemed no good purpose for the outing, and the public benefit of an outstanding and informative police blog had been pointlessly thrown away.
Even other journalists were unimpressed. As Paul Waugh of the *London Evening Standard* wrote at the time:
In NightJack’s case, I still cannot believe that the Times decided to embark on a disgraceful and pointless campaign to out him. Having found some clues about him, the paper inexplicably decided that this was some great issue of media freedom. The Times’s legal team then refused to back down rather than lose face. The damage that the Times inflicted was far worse than just threatening one honest copper with the loss of his career. It undermined any policeman who wanted to speak off the record, the lifeblood of decent crime reporting. It also undermined any whistleblowing blogger, any public servant who wanted to tell it as it is from the front line, without the filter of a dreaded “media and communications office”. Maybe one day the Times will apologise, but knowing newspaper office politics as I do, I suspect it never will.
To which the *Times* columnist and leader writer Oliver Kamm replied, unaware of the true circumstances of what had happened:
I’m stupefied at the way Waugh has depicted this. Be aware that when he says, “The Times’s legal team refused to back down,” what he means is that the Times decided to defend itself against a legal attempt to muzzle it. Its reporter had discovered the identity of the police blogger (Richard Horton), through public sources and not by subterfuge or any invasion of privacy. Horton sought to protect his anonymity, and in my opinion he had no plausible grounds for doing so other than his own convenience. If the Times had pried into Horton’s family life (of which I have no knowledge whatever), then that would have been wrong. But it didn’t. Horton wrote his blog, expressing partial political opinions, using information gained from his employment as a public servant. I once worked in public service (at the Bank of England), and I consider there is an ethos of confidentiality and political neutrality that you do not breach. Of course it was in the public interest to disclose Horton’s identity when he left clues to it. I’m surprised that Waugh retails uncritically the complaint of the freemasonry of bloggers, who assume that the constraints that we journalists observe ought not to apply to them.
Kamm added in another post:
A Great Historical Question to Which the Answer is No (“Was NightJack hacked into too?”) [. . .] [A]s Mr Justice Eady remarked in court, Foster uncovered Horton’s identity “by a process of deduction and detective work, mainly using information on the internet”. We’re journalists: we do this sort of thing.
The *Times* had not only hoodwinked Mr Justice Eady; it had now hoodwinked one of its own leader writers.
The Waugh/Kamm exchange illustrates essentially the state in which the story of NightJack’s outing remained for over two years: lingering concerns and confident counter-assurances, depending on whether one thought the *Times* had done a good thing or not.
In the immediate aftermath, Horton underwent disciplinary proceedings and received a written warning from Lancashire Police. He did not return to blogging. Foster also received a written warning for the hack. Brett left the *Times* in July 2010 and Foster left in May 2011, both in circumstances unrelated to the NightJack incident.
## October to December 2011 – the Leveson questionnaire
The team at the Leveson Inquiry sent out questionnaires to various senior figures in the mainstream media. Three of those asked to provide witness statements in response to these questionnaires were Harding, Simon Toms (recently appointed interim director of legal affairs at News International) and Tom Mockridge (Rebekah Wade’s replacement as chief executive officer of News International). Neither Toms nor Mockridge was in post in 2009, and so neither could know any more about the hack than what he was told for the purposes of replying to the Inquiry’s questionnaire.
One question asked related to computer hacking. Because of the disciplinary proceedings against Foster, the NightJack hack could not be denied or ignored, and so somehow it had to be mentioned. Yet the witness statements — all signed on 14 October 2011 — seemed to play down the incident.
Toms:
Question: Explain whether you, or the Times, the Sunday Times, the Sun or the News of the World (to the best of your knowledge) ever used or commissioned anyone who used “computer hacking” in order to source stories, or for any reason.
Answer: I am not aware that any NI title has ever used or commissioned anyone who used “computer hacking” in order to source stories. I have been made aware of one instance on the Times in 2009 which I understand may have involved a journalist attempting to access information in this way. However, I also understand that this was an act of the journalist and was not authorised by TNL. As such, I understand it resulted in the journalist concerned being disciplined.
Harding:
The Times has never used or commissioned anyone who used computer hacking to source stories. There was an incident where the newsroom was concerned that a reporter had gained unauthorised access to an email account. When it was brought to my attention, the journalist faced disciplinary action. The reporter believed he was seeking to gain information in the public interest but we took the view he had fallen short of what was expected of a Times journalist. He was issued with a formal written warning for professional misconduct.
Mockridge:
Neither I nor, to the best of my knowledge, the Sunday Times or the Sun has ever used or commissioned anyone who used “computer hacking” in order to source stories or for any other reason. In relation to the Times, I am aware of an incident in 2009 where there was a suspicion that a reporter on the Times might have gained unauthorised access to a computer, although the reporter in question denied it. I understand that that person was given a formal written warning as a result and that they were subsequently dismissed following an unrelated incident.
Mockridge had initially been given incorrect information about the hack and this was corrected by his second witness statement of December 2011:
At paragraph 20.2 of my first witness statement I referred to a reporter at the Times who might have gained unauthorised access to a computer in 2009. At the date of my first witness statement, it was my understanding that the reporter in question had denied gaining such access. Following further enquiries, I now understand that the reporter in fact admitted the conduct during disciplinary proceedings, although he claimed that he was acting in the public interest. The journalist was disciplined as a result; he was later dismissed from the business for an unrelated matter.
These four statements were not immediately revealing. For example, from these statements alone, one would not know that the incident even related to a published story, let alone one where there had been related privacy litigation. Perhaps the hope was that no one would notice or investigate further.
## January 2012 – How the story began to emerge
The four Leveson witness statements were published on the Leveson website on or after 10 January 2012 — first the one by Toms, and then the others. The only mention in the media seemed to be a short report in the *Press Gazette* of 10 January 2012 that a *Times* journalist had been disciplined for computer hacking.
I happened to see the *Press Gazette* story and because of the 2009 date of the incident, I immediately suspected it was about NightJack. I had blogged about the outing at the time and had long been concerned that the “dark arts” had somehow been engaged. When the other three witness statements were published, I pieced together what they did say over 16-17 January 2012 on the Jack of Kent blog. In essence, one could deduce from the witness statements the following apparent facts:
- the incident was in 2009;
- the reporter was male (“he”);
- the computer-hacking was in the form of unauthorised access to an email account;
- a disciplinary process was commenced after concerns from the newsroom (not entirely correct, as it turned out);
- the reporter admitted the unauthorised access during the disciplinary process (also not correct, as it was admitted before publication, let alone the disciplinary process);
- the incident was held to be “professional misconduct” and the reporter was disciplined; and
- the reporter was no longer with the business, having been dismissed on an unrelated matter.
On 17 January 2012, Harding gave evidence to the Leveson Inquiry, but he was not asked about the computer-hacking incident referred to in his witness statement. Meanwhile both Paul Waugh and I connected the incident with NightJack, and late on 17 January 2012 David Leigh at the *Guardian* confirmed that a *Times* journalist had indeed hacked into the NightJack account. The next day at the *New Statesman* I drew attention to the worrying possibility that the *Times* may have therefore misled the High Court. It was the first time the possibility had been raised that the High Court had been misled.
Then, on 19 January 2012, the *Times* itself admitted the computer-hacking incident was in respect of NightJack. Harding sent a letter about NightJack to the Leveson Inquiry (which was not revealed until 25 January 2012):
As you will be aware, in my witness statement to the Leveson Inquiry I raised concerns that I had about an incident of computer-hacking at the Times.
I was not asked about it when questioned on Tuesday but I felt it was important to address the issue raised by the publication of my statement with our readers. So I draw your attention to an article on page 11 of this morning’s paper which seeks to give a more detailed account of what happened.
In June 2009 we published a story in what we strongly believed was the public interest. When the reporter informed his managers that in the course of his investigation he had, on his own initiative, sought unauthorised access to an email account, he was told that if he wanted to pursue the story, he had to use legitimate means to do so, identifying the person at the heart of the story, using his own sources and information publicly available on the internet. On that basis, we made the case in the High Court that the newspaper should be allowed to publish in the public interest. After the judge ruled that we could publish in the public interest, we did.
We also addressed the concern that had emerged about the reporter’s conduct, namely that he had used a highly intrusive method to seek information without prior approval. He was formally disciplined and the incident has also informed our thinking in putting in place an effective audit trail to ensure that, in the future, we have a system to keep account of how we make sensitive decisions in the newsgathering process. This was an isolated incident and I have no knowledge of anything else like it. If the inquiry has any further questions about it, I would, of course, be happy to answer them.
In the meantime both Tom Watson MP and I called for Harding to be recalled to the Leveson Inquiry to answer questions about how the High Court seemed to have been misled. I also blogged that the *Times* owed Horton an apology.
## February to March 2012 — the Leveson Inquiry questions Harding and Brett
What had really happened about the NightJack hack now began to come out. Harding was recalled to the Leveson Inquiry and provided his account of what happened, which I have drawn on for the narrative above. He also apologised to Horton and this apology was mentioned on the front page of the newspaper. The same day, Horton was reported as launching legal action.
The main thrust of Harding’s evidence at Leveson was to shift the blame on to Brett. But this did not seem entirely fair. In my opinion, once it became clear that what seemed to be a breach of the Computer Misuse Act had occurred, the editor of the *Times* could and should have found out more about what the court had been told. And, of course, it was Harding’s own decision to publish, even though he was aware that there had been a hack and had had an email from Brett explaining the hack’s legal significance.
The Leveson Inquiry also summoned Brett. In an extraordinary and brutal examination, in which Lord Justice Leveson took a leading role, Brett’s conduct in the matter was placed under intense scrutiny:
BRETT: [Foster] had to demonstrate to me and to certainly Horton and everybody else that he could do it legitimately from outside in, and that’s what he did.
LORD JUSTICE LEVESON: But he couldn’t. How do you know he could? Because he’s choosing what facts he’s chasing up on. Of course it all looks beautiful in his statement, and I understand that, but because he knows what facts he’s looking for, he knows what bits he has to join together, he knows the attributes and characteristics of the person he has to search out, so he can search out for somebody with those corresponding characteristics. [. . .]
BRETT: Mr Foster had by this stage done the exercise totally legitimately. LORD JUSTICE LEVESON: No, he hadn’t, with great respect. He’d used what he knew and found a way through to achieve the same result. Because he couldn’t put out of his mind that which he already knew. Lord Justice Leveson turned to Foster’s reference to “confidential sources” in his witness statement. LORD JUSTICE LEVESON: With great respect, it’s smoke, isn’t? There wasn’t a confidential source here at all. There was a hacking into an email. He may very well have talked to all sorts of people, but to say “I won’t reveal information about confidential sources” suggests he has confidential information from a source which he’s not going to talk about, for understandable reasons, but in fact it’s just not true. Brett was asked about his assurance that the allegations about Foster were “baseless”: BRETT: I don’t think I should have used the word “baseless”, with hindsight. And Lord Justice Leveson delivered the final blows: LORD JUSTICE LEVESON: Let’s just cease to be subjective, shall we. Let’s look at Mr Foster’s statement . . . To put the context of the statement in, he’s talking about the blog and he says that he decided that one or two things had to be true and that it was in the public interest to reveal it, so there he is wanting to find out who is responsible for NightJack . . . Would you agree that the inference from this statement is that this is how he went about doing it? BRETT: Yes, it certainly does suggest — LORD JUSTICE LEVESON: And then he starts at paragraph 12: “I began to systematically run the details of the articles in the series through Factiva, a database of newspaper articles collated from around the country. I could not find any real-life examples of the events featured in part one of the series.” That suggests that’s how he started and that’s how he’s gone about it, doesn’t it? BRETT: It certainly suggests he has done precisely that, yes. LORD JUSTICE LEVESON: And that’s how he’s gone about it? BRETT: Yes. LORD JUSTICE LEVESON: That’s not accurate, is it? [Pause] BRETT: It is not entirely accurate, no. LORD JUSTICE LEVESON: Paragraph 15. I’m sorry, Mr Jay, I’ve started now. Paragraph 15: “Because of the startling similarities between the blog post and the case detailed in the newspaper report, I began to work under the assumption” — “I began to work under the assumption” — “that if the author was, as claimed, a detective, they probably worked . . .” et cetera. Same question: that simply isn’t accurate, is it? BRETT: My Lord, we’re being fantastically precise. LORD JUSTICE LEVESON: Oh, I am being precise because this is a statement being submitted to a court, Mr Brett. BRETT: Yes. LORD JUSTICE LEVESON: Would you not want me to be precise? BRETT: No, of course I’d want you to be precise. It’s not the full story. LORD JUSTICE LEVESON: Paragraph 20. I repeat — I’m not enjoying this: “At this stage I felt sure that the blog was written by a real police officer.” That is actually misleading, isn’t it? BRETT: It certainly doesn’t give the full story. LORD JUSTICE LEVESON: Well, there are two or three other examples, but I’ve had enough. That was it; there was little more that needed to be said. 
It was, as lawyers would say, as plain as a pikestaff that the High Court had, in effect, been misled by the *Times*, just as it was now clear that the *Times* had outed the NightJack blogger though senior managers were aware at the time that his identity had been established using an unlawful email hack and that this was a seeming breach of the Computer Misuse Act 1990. A person’s privacy had been invaded; a criminal offence appeared to have been committed; the High Court had been effectively misled; senior managers had pointed out the legal significance of all this in contemporaneous emails; and the person’s anonymity was to be irretrievably destroyed. But the editor of the *Times* published the story anyway. *David Allen Green is legal correspondent of the New Statesman and author of Jack of Kent.* *Research assistance from Natalie Peck.* *This post is dedicated, with permission, to Richard Horton.*
true
true
true
The story of how, in a string of managerial and legal lapses, the Times hacked NightJack and effectively misled the High Court
2024-10-13 00:00:00
2012-04-12 00:00:00
https://secure.gravatar.com/avatar/?d=https://www.newstatesman.com/wp-content/uploads/sites/2/2023/02/Author-177x177.png&s=177?1728788617
article
newstatesman.com
New Statesman
null
null
17,827,917
http://www.foxnews.com/politics/2018/08/23/reality-winner-sentenced-to-more-than-5-years-over-classified-report-leak.html
Reality Winner sentenced to more than 5 years over classified report leak
Brooke Singman; Terace Garnier
Former National Security Agency contractor Reality Winner on Thursday was sentenced to more than five years in prison after pleading guilty to leaking a classified report with information on Russia’s involvement in the 2016 presidential election. Winner, 26, was sentenced to 63 months, with no fine in a Georgia courtroom. She received an additional three years of supervised release. The prisoner's mom had tears streaming down her face as the sentence was read. Winner appeared in court wearing an orange jumpsuit. Winner's defense team said they felt the sentence, reportedly the longest ever imposed for a federal media leak crime, was "fair." Winner, an Air Force veteran, pleaded guilty in June after being held in prison at the Lincoln County Jail near Augusta, Georgia. Winner was arrested in June 2017, and charged under the Espionage Act for removing classified material from a government facility and mailing it to a news outlet, according to the Justice Department. Winner’s 2017 arrest was announced shortly after the Intercept website published a story detailing how Russian hackers attacked at least one U.S. voting software supplier and sent so-called “spear-phishing” emails to more than 100 local election officials at the end of October or beginning of November 2016. The Justice Department did not specify that Winner was being charged in connection with the Intercept’s report. However, the site noted that the NSA report cited in its story was dated May 5, 2017. An affidavit supporting Winner’s arrest also said the report was dated “on or about” May 5, 2017. Winner worked as a contractor with a Top Secret security clearance with Pluribus International Corporation at a federal facility in Georgia when she printed out a sheet of paper with classified information and mailed it to a news outlet, according to the Justice Department. Winner had a colorful history on social media that laid bare her political leanings, and wanted to “resist” President Trump. At the time of her arrest in 2017, Winner’s social media pages indicate she was a passionate environmentalist who shared Bernie Sanders material online and held some anti-Trump views. She shared numerous articles and comments against the Dakota Access and Keystone XL pipelines (which Trump has moved to revive) on her Facebook page, even posting a letter she sent to the office of Sen. David Perdue, R-Ga. “Repeat after me: In the United States of America, in the year 2017, access to clean, fresh water is not a right, but a privilege based off of one’s socio-economic status,” Winner wrote in a Facebook posting last year. Winner also posted using the hashtag #F---ingWall, in an entry about Trump “silencing” the Environment Protection Agency. Winner also posted last February, before Trump revived construction on the Dakota Access Pipeline: “You have got to be s---ting me right now. No one has called? The White House shut down their phone lines. There have been protests for months, at both the drilling site and outside the White House. I’m losing my mind. If you voted for this piece of s---, explain this. He’s lying. He’s blatantly lying and the second largest supply of freshwater in the country is now at risk. #NoDAPL #NeverMyPresident #Resist.” And in one telling post before the 2016 general election, she wrote, "On a positive note, this Tuesday when we become the United States of the Russian Federation, Olympic lifting will be the national sport." Air Force officials confirmed that Winner served active duty from December of 2010 to December 2016. 
Winner was a cryptologic language analyst, a role requiring fluency in at least one foreign language, which was not divulged. Winner attained the rank of senior airman (E-4) and was last stationed at Fort Meade in Maryland. *Fox News' Samuel Chamberlain and Nicole Darrah contributed to this report.*
true
true
true
Former National Security Agency contractor Reality Winner on Thursday was sentenced to more than five years in prison after pleading guilty to leaking a classified report with information on Russia’s involvement in the 2016 presidential election.
2024-10-13 00:00:00
2018-08-23 00:00:00
http://media2.foxnews.com/BrightCove/694940094001/2018/08/23/694940094001_5826090962001_5826091747001-vs.jpg
article
foxnews.com
Fox News
null
null
9,308,890
http://unimersiv.com/blog_post.php?id=13&1
Virtual & Augmented Reality News - Unimersiv Blog
null
The Unimersiv app is available for free on the Rift, Gear VR, Daydream and Cardboard on Android. Click on your headset to be re-directed to the download page. We write about the use of Virtual Reality for non-gaming applications. One email/week. Apply Now To apply, email us [email protected] with: 1. Your resume/portfolio 2. Explain why you are interested in VR and Education 3. Use the title of the job post in the subject line.
true
true
true
We write about the latest in Virtual & Augmented Reality News. Feel free to browse our blog and sign-up for our newsletter.
2024-10-13 00:00:00
2019-01-01 00:00:00
null
article
unimersiv.com
Unimersiv
null
null
8,049,767
http://arstechnica.com/tech-policy/2014/07/new-york-state-proposes-sweeping-bitcoin-regulations-and-theyre-strict/
New York state proposes sweeping Bitcoin regulations—and they’re strict
Cyrus Farivar
The New York Department of Financial Services (NYDFS) has issued proposed regulations for Bitcoin and other related cryptocurrency businesses that operate in the Empire State. The most significant change is that anyone doing business with a firm operating under these rules won’t be pseudonymous, much less anonymous—in direct contradiction to one of the defining characteristics of Bitcoin. The new so-called BitLicense framework was published (PDF) for the first time on Thursday, and it includes numerous provisions for consumer protection, anti-money laundering, and other new rules to prevent fraud, abuse, and loss. The rules require that company founders and employees submit to fingerprint and background checks and that the companies retain 10 years of transaction records. “We have sought to strike an appropriate balance that helps protect consumers and root out illegal activity—without stifling beneficial innovation,” Benjamin M. Lawsky, superintendent of financial services, said in a statement. The NYDFS did not respond to further requests for comment. ## Setting the tone In an unusual step, Lawsky also posted the regulations to reddit for discussion, where redditors largely slammed the new rules. “We recognize that not everyone in the virtual currency community will be pleased about the prospect of a new regulatory framework,” he wrote. “Ultimately, though, we believe that setting up common sense rules of the road is vital to the long-term future of the virtual currency industry, as well as the safety and soundness of customer assets. (We think the situation at Mt. Gox, for example, made that very clear.) Moreover, given that states have specific regulatory responsibilities in this area, we also have a legal obligation to move forward on this framework.”
true
true
true
To avoid Mt. Gox situation, anti-money laundering and security measures abound.
2024-10-13 00:00:00
2014-07-17 00:00:00
https://cdn.arstechnica.…logo-512_480.png
article
arstechnica.com
Ars Technica
null
null
695,506
http://www.ddj.com/hpc-high-performance-computing/217701907
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
38,207,921
https://6826.csail.mit.edu/2020/papers/noproof.pdf
null
null
null
true
true
true
null
2024-10-13 00:00:00
null
null
null
null
null
null
null
6,176,466
http://techcrunch.com/2013/08/07/google-cloud-platform-adds-new-load-balancing-to-provide-more-scale-out-capability-and-control-to-developers/
Google Cloud Platform Adds Load Balancing To Provide More Scale Out Capability And Control To Developers | TechCrunch
Anthony Ha
Google has added new load balancing to Google Compute Engine, giving Google App Engine further scale-out capabilities. Google has also added new Ruby support for Datastore and improved PHP runtime. The new load balancing feature allows developers to route traffic across a collection of servers, do health checks, automatically handle spikes in data loads and configure the load balancer via command line interface (CLI) and a programmatic RESTful API. The new features are significant as they show the greater control that Google is giving to developers on the Google Cloud Platform, which is known for its high degree of management. Contrast that with Amazon Web Services (AWS), which gives users an open slate to build and manage their own stacks. The “Layer 3 Support” will be extended on a regular basis. The service is free through the end of the year. Google has also added Ruby support for Google Datastore. Developers can now spin out applications on the NoSQL datastore. It is similar to the initial release of Cloud Datastore that included code snippets and samples for getting up and running with Java, Python and Node. Google is also offering support for GQL, its SQL-like syntax for querying Google Datastore. The new updates additionally include more support for the PHP runtime, as well as: - Improved support for working with directories in Google Cloud Storage - The ability to write metadata to Cloud Storage files - Performance improvements through “memcache-backed optimistic read caching” — improving the performance of applications that need to read frequently from the same Cloud Storage file. Google App Engine demonstrates how data-driven companies now rule the day. Google is built on an infrastructure that makes it possible to code almost anything and rapidly create new digital products that are entirely service-related or even hardware and software integrations. Smartphones, netbooks, Internet TV — all are connected to the code and Google’s massive infrastructure.
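To give a flavour of the GQL syntax mentioned above, here is a rough, illustrative sketch. It uses the first-generation App Engine Python db API rather than the new Cloud Datastore client libraries, so treat the environment as an assumption, and the Task model and its fields are invented for the example:

```python
# Illustrative sketch only: GQL as exposed by the first-generation App Engine
# Python runtime; the Task model and its fields are made up for the example.
from google.appengine.ext import db

class Task(db.Model):
    title = db.StringProperty()
    done = db.BooleanProperty()

# GQL is deliberately SQL-like: SELECT-only, one kind per query,
# with positional bind parameters.
open_tasks = db.GqlQuery("SELECT * FROM Task WHERE done = :1", False)
for task in open_tasks.fetch(10):
    print(task.title)
```

The appeal of GQL is that it keeps the familiar SELECT shape while only ever querying a single kind, with no joins, which is what lets it map cleanly onto Datastore's index-backed queries.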
true
true
true
Google Cloud Platform has added new load balancing, giving Google App Engine further scale-out capabilities. Google has also added new Ruby support for Datastore and improved PHP runtime.
2024-10-13 00:00:00
2013-08-07 00:00:00
https://techcrunch.com/w…engine.png?w=250
article
techcrunch.com
TechCrunch
null
null
1,909,680
http://www.alistapart.com/articles/understanding-css3-transitions/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
37,518,742
https://futurism.com/msn-ai-brandon-hunter-useless
Microsoft Publishes Garbled AI Article Calling Tragically Deceased NBA Player "Useless"
Victor Tangermann
Former NBA player Brandon Hunter passed away unexpectedly at the young age of 42 this week, a tragedy that rattled fans of his 2000s career with the Boston Celtics and Orlando Magic. But in an unhinged twist on what was otherwise a somber news story, Microsoft's *MSN* news portal published a garbled, seemingly AI-generated article that derided Hunter as "useless" in its headline. "Brandon Hunter useless at 42," read the article, which was quickly called out on social media. The rest of the brief report is even more incomprehensible, informing readers that Hunter "handed away" after achieving "vital success as a ahead [sic] for the Bobcats" and "performed in 67 video games." Condemnation for the disrespectful article was swift and forceful. "AI should not be writing obituaries," posted one reader. "Pay your damn writers *MSN*." "The most dystopian part of this is that AI which replaces us will be as obtuse and stupid as this translation," wrote a redditor, "but for the money men, it's enough." It's not the first time Microsoft — a major backer of ChatGPT maker OpenAI — has embarrassed itself with AI-generated content on *MSN*. It made headlines last month, for instance, after publishing a similarly incoherent AI-generated travel guide for Ottawa, Canada that bizarrely recommended that tourists visit a local food bank. It deleted the bizarre article after criticism. "The article was not published by an unsupervised AI," Jeff Jones, a senior director at Microsoft, claimed to *The Verge* at the time. "In this case, the content was generated through a combination of algorithmic techniques with human review, not a large language model or AI system." The full story is that back in 2020, *MSN* fired the team of human journalists responsible for vetting content published on its platform. As a result, as we reported last year, the platform ended up syndicating large numbers of sloppy articles about topics as dubious as Bigfoot and mermaids, which it deleted after we pointed them out. You might expect that these repeated self-inflicted embarrassments would lead *MSN* to increase its scrutiny of content shared with its vast audience. "We are working to ensure this type of content isn't posted in future," Jones told *The Verge* last month. They don't seem to be succeeding, though. *MSN* promises on its "About Us" page that it ensures the "content we show aligns with our values" through "human oversight." But looking at some of the material being published on its site, that claim strains credibility. Take the original publisher of the piece on Hunter's death, a publication going by the name of *Race Track.* Red flags abound, starting with the fact that its articles are bylined simply by an anonymous "Editor." The publication claims to distill the "essence of sports excellence" by being "your premier destination for all major sports news" — and though it links to a Portuguese-language automotive magazine called *Autogear* in its *MSN* profile, that site's "About Us" page is entirely filled with Lorem ipsum text, placeholder verbiage commonly used by web designers. Over the last 12 hours, the website has seemingly been taken down and presents visitors with a login page. And despite having almost 100,000 followers on Facebook, the site's content gets almost zero engagement there. Most obviously, a quick perusal of *Race Track*'s profile shows that it has been using *MSN* to publish an uninterrupted stream of incoherent gobbledygook. 
One particularly ridiculous article profiles a "Corridor of Fame" football player called "Pleasure Taylor," which appears to be a mangled reference to NFL Hall of Famer Joy Taylor. Another unintelligible recent piece slapped together by *Race Track* and republished by *MSN* bungled the story of Kevin Porter Jr's arrest for domestic violence, misstating facts as basic as the name of NYU Langone Medical Center, which it referred to as "Langone Medical Heart." Upon closer examination, the articles aren't just of abysmally low quality. As it turns out, they're also plagiarized. Take the article about Hunter's death, which follows the same structure as a *TMZ Sports* story about his death, albeit with altered punctuation and a use of synonyms so liberal that the result is essentially incomprehensible. Here's the first line of *TMZ*'s write-up: *Former Boston Celtics and Orlando Magic player Brandon Hunter has died, Ohio men's basketball coach Jeff Boals said Tuesday. He was just 42 years old.* *MSN*'s version, which clearly performed a series of clunky rephrasings like changing "player" to "participant" to disguise the pilfering: *Former NBA participant Brandon Hunter, who beforehand performed for the Boston Celtics and Orlando Magic, has handed away on the age of 42, as introduced by Ohio males’s basketball coach Jeff Boals on Tuesday.* *TMZ*'s story: *Hunter -- a standout high school hoops player in Cincinnati -- was a star forward for the Bobcats, earning three first-team All-MAC conference selections and leading the NCAA in rebounding his senior season ... before being taken with the 56th overall pick in the 2003 NBA Draft.* *He played 67 games over two seasons in the Association ... scoring a career-high 17 points against the Milwaukee Bucks in 2004.* On review, the version published by *MSN *is obviously a chopped up remix: *Hunter, initially a extremely regarded highschool basketball participant in Cincinnati, achieved vital success as a ahead for the Bobcats.* *He earned three first-team All-MAC convention alternatives and led the NCAA in rebounding throughout his senior season.** Hunter’s expertise led to his choice because the 56th general decide within the 2003 NBA Draft.* Everywhere we looked, other *Race Track *articles on *MSN *are clearly ripped off from other publishers. The "Pleasure Taylor" item is evidently a mangled version of a blog by *The Cold Wire*. A story about potholes in the United Kingdom is a butchered version of a piece in *Autocar*. And a post about tennis star Novak Djokovic is lifted from *Tennis World*. After this story ran, MSN deleted the articles in question. Initially it continued publishing new articles by *Race Track*, but later all posts on the publication's *MSN* page disappeared as well. Needless to say, none of this bodes well for the information ecosystem. With publications eagerly looking to replace human editors and writers, AI has unleashed a barrage of dubiously sourced content — sometimes by mainstream news sites ranging from *CNET* to *The AV Club *— that threatens to further erode public trust in the media. Accusing an NBA legend of being "useless" the week he died isn't just an offensive slip-up by a seemingly unsupervised algorithm, in other words. It's also a threat looming over the future of journalism. *Updated with comment from Microsoft.* **More on AI journalism: ***Google Unveils Plan to Demolish the Journalism Industry Using AI* Share This Article
true
true
true
Microsoft's MSN news portal published a garbled, seemingly AI-generated article that derided Hunter as "useless" in its headline.
2024-10-13 00:00:00
2023-09-14 00:00:00
https://wordpress-assets…ter-useless2.jpg
article
futurism.com
Futurism
null
null
36,983,704
https://bkhome.org/news/202112/why-iso-was-retired.html
The ISO file is a "wrong fit" for a USB-stick
null
### Why ISO was retired
Some time ago I stopped releasing EasyOS as an ISO file; from then onward it has been a drive image file only. This has been contentious, and I receive emails from people lamenting the demise of the ISO. So, I should post some thoughts on why I made this decision. Not an exhaustive rationale, just some thoughts while I think of them right now...
The ISO9660 file format is very old, going right back to 1988, and has since then had enhancements bolted on, see the Wikipedia ISO9660 page. In addition, there is the "hybrid ISO", enabling booting from a USB-stick, and on top of that enhancements to enable booting from either or both legacy-BIOS and UEFI firmware computers, see here.
What all of the above means is that ISO files are a "dog's breakfast", a hodge podge of changes bolted on over the years. A Linux distribution provided on a drive image file, in comparison, is very simple. And, very simple to set up to boot on either or both legacy-BIOS and UEFI computers.
Given that optical drives are rapidly receding into history, and these days we boot from either a USB-stick or install direct to a hard drive partition, what are the differences in doing that, ISO versus image file? In the case of booting from a USB-stick, the answer is: **none**
You need a tool for writing the file, ISO or image-file, to the USB-stick, and plenty of such tools are available. For example, 'Etcher' available on Windows, Mac and Linux, 'easydd' on Linux, or on Linux you can even use the 'dd' utility. As you are no longer writing to an optical media, you don't use a CD/DVD burner tool. That is the only change that you have to make. Having written the file to the USB-stick, ISO or drive-image, you need to configure the PC to boot from it, and you are in business.
So, why have I received so many requests, via email and forum, to bring back an ISO file, and expressing opposition to the drive-image-file? That is a very interesting question, one that puzzled me for a long time. OK, some more thoughts...
## The ISO file is a "wrong fit" for a USB-stick
The ISO file is a complete self-contained package. Let's say that you write a 550MB hybrid-ISO file to a 16GB USB-stick. This is what will be on the USB-stick:
ISO | --unusable-- |
Why is creating a USB-stick that is mostly unusable considered to be acceptable by most mainline Linux distributions? Simply because they only want to be able to boot the distribution for the purpose of installing it to an internal hard drive. So, they don't care that the rest of the USB-stick is unusable.
On the other hand, for a live-CD type of distribution, like Puppy Linux, it is an issue, because the "save file" can only be created on some other drive, usually an internal drive. Wouldn't it be nice if the "save file" or "save folder" could be on the USB-stick, or rather it would be nice if you had that choice.
With a drive-image, the entire USB-stick is available. The EasyOS drive image has two partitions, a vfat "boot partition" and an ext4 "working partition". Initially, the working-partition is only 640MiB or 816MiB, but at first bootup it automatically expands to fill the drive. So, this is what you have on the 16GB USB-stick:
boot | --working-partition, using entire drive-- |
...think about the implication of that for a moment. Unlike traditional Puppy, you don't have to try and decide where to create a "save file". It already exists, as a "save folder" in the working-partition. **First bootup and you are in business, nothing else to do**. 
And, you never again have to worry about the size of your "save file". At first bootup you have automatically got the entire drive to play in. This is momentous, and thanks to Dima (forum member 'dimkr'), our most prolific woof-CE developer, the next-generation Puppy is being offered as a drive-image.

## Why so much opposition to dropping the ISO file?

I receive regular messages expressing opposition to dropping ISO files, mostly by email. What I have observed is that the messages are from Puppy old-timers. They have a collection of vintage PCs, all with optical drives. Optical media, CD/DVD, ISO files, that's what they know. Yes, I can understand: if you know something very well, and have done it that way for years, there is resistance to change, even if there are compelling arguments to do so.

About a year ago, I received an email from a representative of a German Linux magazine. I think that magazine still ships printed magazines with a CD stuck on the front. Anyway, he wanted to review EasyOS and had some questions, and I replied, yeah, go for it. But I became increasingly puzzled by his questions. They didn't make any sense. The penny finally dropped when I realised that he didn't have a clue what a Linux distribution on a drive-image file is. His knowledge was ISOs, ISOs and ISOs.

...that taught me that even Linux experts, people who review Linux distributions, may have severe misunderstandings outside of their beloved ISO format. Which also taught me that resistance to the drive-image format may be due to not understanding it.

## In conclusion

As mentioned above, the mainline distributions may stick with the ISO format just because they have no compelling reason to change. They just want to boot the CD/DVD and then install their distribution to an internal drive. Other than that use-case, ISO has had its day, and should be retired. Oh yes, there are some old computers that won't boot from USB; well, they are ancient and approaching relegation to "Silicon Heaven" (you need to be a fan of the Red Dwarf TV series to know what Silicon Heaven is). There are some multi-boot tools that enable putting many ISO files on the one USB-stick; however, the ISO format does not have any intrinsic advantage, as these boot managers could also be made to boot image files. I cannot think of a single other use-case where you would want to stay with ISO files.

For Linux developers, if you are interested, I have a script for creating a skeleton drive-image file, with a boot-partition and a working-partition, that will boot on either a legacy-BIOS or a modern UEFI PC. There are three scripts, '2createpcskeletonimage', '2createpcskeletonimage-encrypt' and '2createpcskeletonimage-gpt' -- for EasyOS I currently use the middle one, which creates an MSDOS partition table and enables ext4 fscrypt in the working-partition. Syslinux is used for legacy-BIOS booting, rEFInd for UEFI booting. These scripts are in the woofQ tarball, available here. Dima has a similar script. He is using Syslinux and Efilinux.
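To make that concrete, here is a rough sketch of the kind of steps such a skeleton-image script goes through. This is not the actual woofQ script -- the image size, partition sizes and filenames are made up for illustration, and boot-loader installation is only indicated by a comment:

```
# create a sparse image file (size chosen arbitrarily for illustration)
truncate -s 1G skeleton.img

# MSDOS partition table: small vfat boot-partition, ext4 working-partition
parted -s skeleton.img mklabel msdos
parted -s skeleton.img mkpart primary fat32 1MiB 128MiB
parted -s skeleton.img set 1 boot on
parted -s skeleton.img mkpart primary ext4 128MiB 100%

# attach the image as a loop device, with partition scanning (-P)
LOOP=$(losetup --find --show -P skeleton.img)
mkfs.vfat -F 32 "${LOOP}p1"   # boot-partition
mkfs.ext4 "${LOOP}p2"         # working-partition

# ...here the real script would install the boot-loader files
#    (Syslinux/rEFInd, or Limine in later EasyOS) and populate the partitions...

losetup -d "$LOOP"
```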
What motivated me to create this blog post is the discussion starting here on the Puppy Forum: https://forum.puppylinux.com/viewtopic.php?p=43658#p43658

**EDIT 2021-12-11:** The above link is to Dima's "Vanilla Dpup" next-generation Puppy thread, and ongoing discussion about the merits or demerits of the ISO format is hijacking the thread, so I have started another thread: https://forum.puppylinux.com/viewtopic.php?t=4690

There have been a few posts about using a multi-boot tool such as Ventoy, and how convenient it is to just drop an ISO file onto the Ventoy USB-stick and add it to the boot-list. However, what needs to be pointed out is that the ISO file has no intrinsic advantage over an image-file. The boot manager can be made to treat an image file as a package, just like an ISO file, and boot it. Quoting from the Ventoy front page: "Directly boot from ISO/WIM/IMG/VHD(x)/EFI files, no extraction needed". And I see that EasyOS is ticked: ...snapshot is from here. Numbering on the left column is Distrowatch ranking.

I received one email that EasyOS is more difficult to boot in a virtual machine, such as qemu. Again, there is no intrinsic reason why the drive image file should be any more difficult than an ISO. I have received messages from people who have run EasyOS in a virtual machine, but as it hasn't interested me personally, I never got into documenting how to do it. Good news: I see that Dima has posted how to run his next-generation Puppy image-file with qemu: https://forum.puppylinux.com/viewtopic.php?p=43900#p43900

So, I am repeating: there are no use-cases where the ISO format has an advantage over a Linux distribution as a drive image file. The perceived advantages are only due to ignorance.

One more thing, while I think of it. A couple of people have commented that it is more difficult to extract the contents of an image file, vmlinuz, etc., if you want to do a direct frugal install to an internal hard drive. WRONG, WRONG, WRONG! In EasyOS, you just click on the image file and it opens up and you can copy out the files. You do not have to write it to a USB-stick or try to figure out how to mount the partition inside the image file. In the pups, you have single-click opening of ISO and SFS files. EasyOS has added that for image files. It is just an implementation detail, easy enough to add to the pups.

**EDIT 2021-12-15:** There is now a "Why ISO was retired part-2": https://bkhome.org/news/202112/why-iso-was-retired-part-2.html

**EDIT 2022-11-14:** With reference to the above statement "Syslinux is used for legacy-BIOS booting, rEFInd for UEFI booting", EasyOS has moved to the Limine boot-loader, for both legacy-BIOS and UEFI booting. If you want to know more about how to "open up" a drive-image file and extract the contents, as stated above it is achieved just by clicking on the file. However, for a different Linux distribution, it can be done manually. Scan down near the bottom of this page to see how: https://easyos.org/user/how-to-update-easyos.html

Tags: easy
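As a concrete illustration of the manual route mentioned in the 2022-11-14 edit above, here is a hedged sketch. The device name, image filename and file paths are placeholders only, not actual EasyOS paths, and the exact layout inside the partitions differs between distributions:

```
# write the image to a USB-stick (replace /dev/sdX with the actual stick;
# everything on it will be erased)
dd if=easy.img of=/dev/sdX bs=4M status=progress conv=fsync

# or, open the image without writing it anywhere: attach it as a loop
# device with partition scanning, mount a partition, copy files out
LOOP=$(losetup --find --show -P easy.img)
mkdir -p /mnt/img
mount "${LOOP}p1" /mnt/img     # first partition; paths below are placeholders
cp /mnt/img/vmlinuz /tmp/
umount /mnt/img
losetup -d "$LOOP"

# booting the same image file in a virtual machine is also one command
qemu-system-x86_64 -m 2G -drive file=easy.img,format=raw
```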
true
true
true
null
2024-10-13 00:00:00
2021-12-08 00:00:00
null
null
null
null
null
null
7,767,580
http://blog.zach.st/2014/05/puppet-on-solaris-112.html?utm_content=news123&utm_medium=social&utm_source=news.ycombinator.com&utm_campaign=news
Redirecting...
null
null
true
true
false
null
2024-10-13 00:00:00
2014-05-18 00:00:00
null
null
null
null
null
null
14,922,327
https://blog.apimatic.io/why-your-api-needs-machine-readable-description-832e805f6855
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
40,566,671
https://en.wikipedia.org/wiki/SpaceX_Starship_integrated_flight_test_4
Starship flight test 4 - Wikipedia
null
# Starship flight test 4

| Mission type | Suborbital flight test[1] |
| --- | --- |
| Operator | SpaceX |
| Mission duration | 1 hour, 6 minutes, 10 seconds |
| Apogee | 213 km (132 mi)[2] |
| Spacecraft properties | |
| Spacecraft | Starship Ship 29 |
| Spacecraft type | Starship |
| Manufacturer | SpaceX |
| Start of mission | |
| Launch date | June 6, 2024, 12:50:00 UTC (7:50 am CDT)[3] |
| Rocket | Super Heavy (B11) |
| Launch site | Starbase, OLM-A |
| End of mission | |
| Landing date | |
| Mission patch | |

**Starship flight test 4** was the fourth flight test of the SpaceX Starship launch vehicle. The prototype vehicles flown were the Starship Ship 29 upper-stage and Super Heavy Booster 11.[4][5] SpaceX performed the flight test on June 6, 2024. The main test objectives of this flight, both of which were accomplished, were for the Super Heavy booster to simulate a landing at a "virtual tower" just above the surface of the Gulf of Mexico, and for Starship to survive at least peak heating during atmospheric re-entry.[6] This marked the first integrated test flight where both Starship and Super Heavy successfully reentered and performed a powered vertical landing over the ocean surface.

## Background

### Investigation prior to launch

Starship flight test 3 in March 2024 attained full duration burns of both stages and reached orbital velocity. However, both stages were destroyed during atmospheric return, prompting a SpaceX-led mishap investigation overseen by the FAA. The FAA stated that a completed license modification, incorporating corrective actions and meeting other requirements, was required for a launch license to be granted for this flight, the fourth flight test.[7][8] SpaceX stated in early April that it would intend to attempt a booster landing with the tower arms on the fifth flight test if the booster virtual landing is successful during the fourth flight test.[9] In late April, a NASA official confirmed SpaceX remained on track for the fourth test flight to occur in May 2024.[10] The communications license necessary for Flight 4 was granted by the FCC on April 18.[11] On May 17, SpaceX asked that the FAA make a public safety determination regarding the third flight test, which would allow SpaceX to launch the test flight while the mishap investigation is in progress if determined there was no public safety danger.[12] The FAA concluded the investigation on May 28 and determined that the third flight test had not threatened public safety.[13][14] SpaceX received regulatory approval to launch from the FAA on June 4.[15] Starship flight test 4 was initially scheduled to launch on June 5, but was pushed back a day to June 6.[16] For this fourth flight test, the FAA listed three specific outcomes that would not trigger a mishap-investigation: the ship burning up during reentry, the flaps not having sufficient control of the ship, or the Raptor 2 engines failing to relight for landing.[17]

### Vehicle ground testing

Booster 11 and Ship 29 were first spotted around August 2022. Both stages underwent multiple cryogenic proof tests in late 2023, with Ship 29 performing a spin prime test in March 2024.[18] Following Starship's third test flight, Ship 29 was lifted onto Suborbital Pad B for two static fire tests in late March, and was later returned to the High Bay for pre-flight modifications. A 33-engine static-fire was conducted on Booster 11 on Orbital Launch Mount A on April 5.
Booster 11's hot-staging ring was installed in early May.[19] Ship 29 was lifted onto Booster 11 on May 15,[20] followed by a partial propellant load test on May 16.[21] A wet dress rehearsal (WDR) was conducted on May 20.[22] On May 28, SpaceX performed a second wet dress rehearsal of S29 and B11,[23] and on May 30, SpaceX installed the flight termination system (FTS or AFSS) on B11 and S29.[24] On June 5, S29 was stacked on top of B11 for the fourth and final time.[18] SpaceX intentionally omitted two TPS (Thermal Protection System) tiles and replaced one with a thinner tile to test how the loss of tiles would affect the ship.[25]

### Changes from the previous flight

During Starship's third test flight, the booster was destroyed just before splashdown due to engine failures caused by filter blockage of liquid oxygen to the engines. The ship was destroyed during reentry, due to excessive roll rates caused by clogged roll control valves. As a result, modifications were made to Booster 11's oxygen tanks to improve propellant filtration capability, while hardware and software changes were implemented to improve Raptor startup reliability. Additional roll control thrusters were added to the ship to improve attitude control redundancy.[26][27] Several changes were spotted on Ship 29, including updates to the TPS tile adhesive and layout. B11 received upgrades such as reinforcements of tanks and additions to improve rigidity and durability.[28] The largest horizontal tanks in the orbital tank farm were made operational, supplementing the older vertical tanks that were being retired. Suborbital Pad B was decommissioned in May 2024, and vehicle testing operations were moved to Massey's Test Site to make room for the construction of Orbital Launch Mount B.[29]

## Flight

The mission profile for Starship flight test 4 was very similar to that of the third flight test, with the propellant transfer demonstration, the payload bay door demonstration, and the Raptor engine relight demonstration being omitted. There was also the addition of the jettisoning of the Super Heavy's hot staging ring two seconds after the shutdown of the boostback burn, and Starship was to attempt a landing flip and landing burn.[30][31] One of the 33 Raptor engines on Booster 11 failed to stay lit during the initial burn, and one of the thirteen used for the landing burn failed to light. Neither engine failure affected the outcome of the flight because of redundancy in the multiple-engine design. To reduce mass during descent, a temporary design change on this test flight was used to jettison the booster hot-staging ring.[32][non-primary source needed] Longer term, the hot-staging ring is intended to be redesigned for lighter weight and tight integration with the booster and will not be jettisoned. B11 successfully conducted a powered vertical landing over the Gulf of Mexico, splashing down into the ocean.[33] The booster was destroyed after tipping over, and part of the engine section was recovered in September 2024.[34] Bill Gerstenmaier stated that the booster landed "with half a centimeter accuracy."[35] After completing the engine burn to an orbital energy trajectory, Ship 29 successfully re-entered the atmosphere, maintaining attitude control despite significant visible damage to the structure and loss of some number of heat shield tiles.
Following the hypersonic velocity descent through the atmosphere, S29 performed a powered vertical landing above the ocean before splashing into the Indian Ocean.[36] Elon Musk said that the ship maintained subsonic control but landed approximately 6 kilometers (3.7 mi) away from the target splashdown location.[37]

| Time | Event | June 6, 2024 |
| --- | --- | --- |
| −01:15:00 | SpaceX Flight Director conducts a poll and verifies go for propellant loading | Success |
| −00:49:00 | Starship fuel loading (liquid methane) underway | Success |
| −00:47:00 | Starship oxidizer loading (liquid oxygen) underway | Success |
| −00:40:00 | Super Heavy fuel loading (liquid methane) underway | Success |
| −00:37:00 | Super Heavy oxidizer loading (liquid oxygen) underway | Success |
| −00:19:40 | Booster engine chill | Success |
| −00:03:30 | Booster propellant load complete | Success |
| −00:02:50 | Ship propellant load complete | Success |
| −00:00:30 | SpaceX flight director verifies GO for launch | Success |
| −00:00:10 | Flame deflector activation | Success |
| −00:00:03 | Booster engine ignition | All 33 engines ignited with 1 shutting down at T+00:00:03 |
| 00:00:02 | Liftoff | Success |
| 00:01:02 | Max q during ascent (moment of peak mechanical stress on the rocket) | Success |
| 00:02:46 | Booster most engines cutoff (MECO) | Success |
| 00:02:51 | Starship engine ignition and stage separation (hot-staging) | Success |
| 00:02:57 | Booster boostback burn startup | Success |
| 00:03:47 | Booster boostback burn shutdown | Success |
| 00:04:04 | Hot-stage jettison | Success |
| 00:07:04 | Booster is transonic | Success |
| 00:07:09 | Booster landing burn startup | 12 of 13 engines ignited[38] |
| 00:07:30 | Booster landing burn shutdown and splashdown | Success |
| 00:08:37 | Starship engine cutoff (SECO) | Success |
| 00:44:54 | Starship entry | Vehicle damaged on re-entry |
| 01:00:50 | Estimated time of max q during Starship's descent | Success |
| 01:03:17 | Starship is transonic | Success |
| 01:03:38 | Starship is subsonic | Success |
| 01:05:36 | Starship landing flip | Success |
| 01:05:39 | Starship landing burn | Success |
| 01:05:56 | Starship splashdown | Within the target area, but 6 km (3.7 mi) off center |

## Reactions

The flight was hailed as a success and marked the first time the Super Heavy booster and Ship achieved controlled splashdowns. An FAA clause for Flight 4, which would allow SpaceX to continue with additional flights of the same profile without a mishap investigation as long as no public safety issues occurred, was upheld as the flight did not encounter a mishap outside of the three exceptions.[39][40] On June 12, the FAA announced that they would not be requiring a mishap investigation for Flight 4 because all flight events occurred within the scope of planned and authorized activities.[41] This was the first Starship flight test to not require an investigation.

## See also

## References

- ^ **a**"STARSHIP'S FOURTH FLIGHT TEST".**b***SpaceX*. May 24, 2024. Archived from the original on June 1, 2024. Retrieved May 24, 2024. **^**Scott Manley (June 6, 2024).*SpaceX's Starship Literally Melted! But It Kept Flying To A Miraculous Landing!*. Archived from the original on June 7, 2024. Retrieved June 6, 2024 – via YouTube.- ^ **a****b**"STARSHIP'S FOURTH FLIGHT TEST".**c***SpaceX*. June 6, 2024. Archived from the original on June 1, 2024. Retrieved June 6, 2024. **^**"SpaceX Revving Up for Starship Flight 3: | Starbase Update". NASASpaceFlight. January 29, 2024. Archived from the original on January 29, 2024.
Retrieved February 13, 2024.**^**Bergin, Chris [@NASASpaceflight] (March 7, 2024). "We are live with testing of Ship 29, which is the upper stage of the fourth Starship Flight Test" (Tweet). Retrieved May 11, 2024 – via Twitter.**^**Davenport, Justin (April 19, 2024). "As IFT-4 prepares for launch, Starship's future is coming into focus".*NASASpaceFlight.com*. Archived from the original on May 6, 2024. Retrieved May 3, 2024.**^**"FAA Statements on Aviation Accidents and Incidents".*FAA*. March 14, 2024. March 14, 2024, Commercial Space / Boca Chica, Texas. Archived from the original on May 3, 2024. Retrieved May 5, 2024.**^**"Marcia Smith on X: "At media bfg at Space Symp now, FAA/AST's..."".*X*. April 10, 2024. Archived from the original on May 7, 2024. Retrieved May 5, 2024.**^**Bergin, Chris (April 6, 2024). "Some interesting notes".*X (formerly Twitter)*. Archived from the original on April 6, 2024. Retrieved April 6, 2024.**^**Beil, Adrian (April 28, 2024). "NASA Updates on Starship Refueling, as SpaceX Prepares Flight 4 of Starship".*NASASpaceFlight.com*. Archived from the original on April 30, 2024. Retrieved May 3, 2024.**^**"License granted: Space Exploration Technologies Corp. (SpaceX) Dates: 04/25/2024-10/25/2024 Purpose: Launch vehicle communications for test flight mission launching from Starbase, TX". Archived from the original on June 7, 2024. Retrieved May 5, 2024.**^**Beil, Adrian (May 17, 2024). "Statement of FAA provided to @NASASpaceflight about SpaceX led investigation".*X (formerly Twitter)*. Archived from the original on June 7, 2024. Retrieved May 17, 2024.**^**Bell, Adrian (May 30, 2024). "As SpaceX Completes Second Starship WDR, FAA Closes Safety Investigation Into Flight 3".*NASASpaceflight*. Archived from the original on May 30, 2024. Retrieved May 30, 2024.**^**Beil, Adrian (May 28, 2024). "Statement by the FAA provided to @NASASpaceflight".*X (formerly Twitter)*. Archived from the original on May 29, 2024. Retrieved May 28, 2024.**^**"VOL 23_129 SpaceX Starship Super Heavy rev 3.pdf".*drs.faa.gov*. Archived from the original on June 4, 2024. Retrieved June 4, 2024.**^**Wall, Mike (June 3, 2024). "SpaceX targeting June 6 for next launch of Starship megarocket".*Space.com*. Archived from the original on June 3, 2024. Retrieved June 4, 2024.**^**Clark, Stephen (June 5, 2024). "We know Starship can fly—now it's time to see if it can come back to Earth".*Ars Technica*. Archived from the original on June 5, 2024. Retrieved June 5, 2024.- ^ **a**"Speeding on to Flight 4: The Chronology of S29 & B11".**b***Ringwatchers*. June 9, 2024. Retrieved June 16, 2024. **^**Weber, Ryan (May 5, 2024). "Ship 30 set to Static Fire next week as Flight 4 Preparations Continue".*NASASpaceFlight.com*. Archived from the original on May 7, 2024. Retrieved May 7, 2024.**^**NASASpaceflight (May 15, 2024).*Fullstack: SpaceX Stacks Ship 29 on Booster 11*. Archived from the original on May 15, 2024. Retrieved May 15, 2024 – via YouTube.**^***SpaceX Tests the Full Stack of the Fourth Starship Flight Test*. Archived from the original on May 20, 2024. Retrieved May 16, 2024 – via www.youtube.com.**^**NASASpaceflight (May 20, 2024).*SpaceX Performs Wet Dress Rehearsal of Fourth Starship Flight Stack*. Archived from the original on May 20, 2024. Retrieved May 20, 2024 – via YouTube.**^**"x.com".*X (formerly Twitter)*. Archived from the original on May 29, 2024. Retrieved May 30, 2024.**^**Starship Gazer (May 30, 2024). 
"FTS (Flight Termination System) explosives are being installed on both Ship 29 and Booster 11 this morning for the upcoming Starship test flight 4. Very exciting pre-launch milestone!".*X (formerly Twitter)*. Archived from the original on May 30, 2024. Retrieved May 31, 2024.**^**"x.com".*X (formerly Twitter)*. Archived from the original on June 6, 2024. Retrieved June 6, 2024.**^**"SpaceX - Updates".*SpaceX*. May 24, 2024. Archived from the original on June 13, 2024. Retrieved June 16, 2024.**^**Robinson-Smith, Will (June 6, 2024). "SpaceX accomplishes first soft splashdown of Starship, Super Heavy Booster on Flight 4 mission".*Spaceflight Now*. Retrieved June 16, 2024.**^**"Building Upon Accomplishments: What's New on Starship 29 & Booster 11?".*Ringwatchers*. June 7, 2024. Retrieved June 16, 2024.**^**Morales, Mia (June 16, 2024). "SpaceX begins building second Starbase launch tower, week after fourth launch".*ValleyCentral.com*. Retrieved June 17, 2024.**^**"Starship finds success on fourth flight test". June 5, 2024.**^**"Following IFT-3 milestones, SpaceX prepares for fourth Starship flight". March 22, 2024.**^**"STARSHIP'S FOURTH FLIGHT TEST".*SpaceX.com*. May 24, 2024. Archived from the original on June 1, 2024. Retrieved May 24, 2024.**^**SPACE.com, Mike Wall. "SpaceX Starship Blasts through Plasma on Return from Ambitious Test Flight".*Scientific American*. Archived from the original on June 6, 2024. Retrieved June 6, 2024.**^**Mike Wall (September 24, 2024). "SpaceX fishes Starship Super Heavy booster out of the sea (photo)".*Space.com*.**^**Foust, Jeff (October 9, 2024). "NASA "really looking forward" to next Starship test flight".*SpaceNews*. Retrieved October 13, 2024.**^**Harwood, William (June 6, 2024). "SpaceX's Super Heavy-Starship rocket launches on "epic" test flight".*CBS News*. Archived from the original on June 6, 2024. Retrieved June 6, 2024.**^**Youtube.com, Ellie in Space (June 7, 2024). "Elon Musk discusses Starship's 4th Flight".*YouTube*. Archived from the original on June 7, 2024. Retrieved June 7, 2024.**^**"Starship Flight Four".*SpaceX*. Archived from the original on June 1, 2024. Retrieved June 7, 2024.**^**Daleo, Jack (June 6, 2024). "SpaceX Starship's Fourth Test Flight Is Rocket's Most Successful Yet".*FLYING Magazine*. Archived from the original on June 6, 2024. Retrieved June 6, 2024.**^**Foust, Jeff (June 6, 2024). "Starship survives reentry during fourth test flight".*SpaceNews*. Archived from the original on June 6, 2024. Retrieved June 6, 2024.**^**Masso, Steven (June 12, 2024). "FAA not requiring investigation into fourth Starship launch".*ValleyCentral*. Retrieved June 17, 2024.
true
true
true
null
2024-10-13 00:00:00
2024-03-14 00:00:00
https://upload.wikimedia…test_4_patch.jpg
website
wikipedia.org
Wikimedia Foundation, Inc.
null
null
11,413,530
http://www.nytimes.com/2016/04/03/business/after-wikileaks-revelation-greece-asks-imf-to-clarify-bailout-plan.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,838,546
http://deanzchen.com/computer-science-education-and-math
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,709,061
http://golang.org/doc/go1.2
Go 1.2 Release Notes
null
# Go 1.2 Release Notes ## Introduction to Go 1.2 Since the release of Go version 1.1 in April, 2013, the release schedule has been shortened to make the release process more efficient. This release, Go version 1.2 or Go 1.2 for short, arrives roughly six months after 1.1, while 1.1 took over a year to appear after 1.0. Because of the shorter time scale, 1.2 is a smaller delta than the step from 1.0 to 1.1, but it still has some significant developments, including a better scheduler and one new language feature. Of course, Go 1.2 keeps the promise of compatibility. The overwhelming majority of programs built with Go 1.1 (or 1.0 for that matter) will run without any changes whatsoever when moved to 1.2, although the introduction of one restriction to a corner of the language may expose already-incorrect code (see the discussion of the use of nil). ## Changes to the language In the interest of firming up the specification, one corner case has been clarified, with consequences for programs. There is also one new language feature. ### Use of nil The language now specifies that, for safety reasons, certain uses of nil pointers are guaranteed to trigger a run-time panic. For instance, in Go 1.0, given code like ``` type T struct { X [1<<24]byte Field int32 } func main() { var x *T ... } ``` the `nil` pointer `x` could be used to access memory incorrectly: the expression `x.Field` could access memory at address `1<<24` . To prevent such unsafe behavior, in Go 1.2 the compilers now guarantee that any indirection through a nil pointer, such as illustrated here but also in nil pointers to arrays, nil interface values, nil slices, and so on, will either panic or return a correct, safe non-nil value. In short, any expression that explicitly or implicitly requires evaluation of a nil address is an error. The implementation may inject extra tests into the compiled program to enforce this behavior. Further details are in the design document. *Updating*: Most code that depended on the old behavior is erroneous and will fail when run. Such programs will need to be updated by hand. ### Three-index slices Go 1.2 adds the ability to specify the capacity as well as the length when using a slicing operation on an existing array or slice. A slicing operation creates a new slice by describing a contiguous section of an already-created array or slice: ``` var array [10]int slice := array[2:4] ``` The capacity of the slice is the maximum number of elements that the slice may hold, even after reslicing; it reflects the size of the underlying array. In this example, the capacity of the `slice` variable is 8. Go 1.2 adds new syntax to allow a slicing operation to specify the capacity as well as the length. A second colon introduces the capacity value, which must be less than or equal to the capacity of the source slice or array, adjusted for the origin. For instance, ``` slice = array[2:4:7] ``` sets the slice to have the same length as in the earlier example but its capacity is now only 5 elements (7-2). It is impossible to use this new slice value to access the last three elements of the original array. In this three-index notation, a missing first index (`[:i:j]` ) defaults to zero but the other two indices must always be specified explicitly. It is possible that future releases of Go may introduce default values for these indices. Further details are in the design document. *Updating*: This is a backwards-compatible change that affects no existing programs. 
## Changes to the implementations and tools ### Pre-emption in the scheduler In prior releases, a goroutine that was looping forever could starve out other goroutines on the same thread, a serious problem when GOMAXPROCS provided only one user thread. In Go 1.2, this is partially addressed: The scheduler is invoked occasionally upon entry to a function. This means that any loop that includes a (non-inlined) function call can be pre-empted, allowing other goroutines to run on the same thread. ### Limit on the number of threads Go 1.2 introduces a configurable limit (default 10,000) to the total number of threads a single program may have in its address space, to avoid resource starvation issues in some environments. Note that goroutines are multiplexed onto threads so this limit does not directly limit the number of goroutines, only the number that may be simultaneously blocked in a system call. In practice, the limit is hard to reach. The new `SetMaxThreads` function in the `runtime/debug` package controls the thread count limit. *Updating*: Few functions will be affected by the limit, but if a program dies because it hits the limit, it could be modified to call `SetMaxThreads` to set a higher count. Even better would be to refactor the program to need fewer threads, reducing consumption of kernel resources. ### Stack size In Go 1.2, the minimum size of the stack when a goroutine is created has been lifted from 4KB to 8KB. Many programs were suffering performance problems with the old size, which had a tendency to introduce expensive stack-segment switching in performance-critical sections. The new number was determined by empirical testing. At the other end, the new function `SetMaxStack` in the `runtime/debug` package controls the *maximum* size of a single goroutine’s stack. The default is 1GB on 64-bit systems and 250MB on 32-bit systems. Before Go 1.2, it was too easy for a runaway recursion to consume all the memory on a machine. *Updating*: The increased minimum stack size may cause programs with many goroutines to use more memory. There is no workaround, but plans for future releases include new stack management technology that should address the problem better. ### Cgo and C++ The `cgo` command will now invoke the C++ compiler to build any pieces of the linked-to library that are written in C++; the documentation has more detail. ### Godoc and vet moved to the go.tools subrepository Both binaries are still included with the distribution, but the source code for the godoc and vet commands has moved to the go.tools subrepository. Also, the core of the godoc program has been split into a library, while the command itself is in a separate directory. The move allows the code to be updated easily and the separation into a library and command makes it easier to construct custom binaries for local sites and different deployment methods. *Updating*: Since godoc and vet are not part of the library, no client Go code depends on their source and no updating is required. The binary distributions available from golang.org include these binaries, so users of these distributions are unaffected. When building from source, users must use “go get” to install godoc and vet. (The binaries will continue to be installed in their usual locations, not `$GOPATH/bin` .) ``` $ go get code.google.com/p/go.tools/cmd/godoc $ go get code.google.com/p/go.tools/cmd/vet ``` ### Status of gccgo We expect the future GCC 4.9 release to include gccgo with full support for Go 1.2. 
In the current (4.8.2) release of GCC, gccgo implements Go 1.1.2. ### Changes to the gc compiler and linker Go 1.2 has several semantic changes to the workings of the gc compiler suite. Most users will be unaffected by them. The `cgo` command now works when C++ is included in the library being linked against. See the `cgo` documentation for details. The gc compiler displayed a vestigial detail of its origins when a program had no `package` clause: it assumed the file was in package `main` . The past has been erased, and a missing `package` clause is now an error. On the ARM, the toolchain supports “external linking”, which is a step towards being able to build shared libraries with the gc toolchain and to provide dynamic linking support for environments in which that is necessary. In the runtime for the ARM, with `5a` , it used to be possible to refer to the runtime-internal `m` (machine) and `g` (goroutine) variables using `R9` and `R10` directly. It is now necessary to refer to them by their proper names. Also on the ARM, the `5l` linker (sic) now defines the `MOVBS` and `MOVHS` instructions as synonyms of `MOVB` and `MOVH` , to make clearer the separation between signed and unsigned sub-word moves; the unsigned versions already existed with a `U` suffix. ### Test coverage One major new feature of `go test` is that it can now compute and, with help from a new, separately installed “go tool cover” program, display test coverage results. The cover tool is part of the `go.tools` subrepository. It can be installed by running ``` $ go get code.google.com/p/go.tools/cmd/cover ``` The cover tool does two things. First, when “go test” is given the `-cover` flag, it is run automatically to rewrite the source for the package and insert instrumentation statements. The test is then compiled and run as usual, and basic coverage statistics are reported: ``` $ go test -cover fmt ok fmt 0.060s coverage: 91.4% of statements $ ``` Second, for more detailed reports, different flags to “go test” can create a coverage profile file, which the cover program, invoked with “go tool cover”, can then analyze. Details on how to generate and analyze coverage statistics can be found by running the commands ``` $ go help testflag $ go tool cover -help ``` ### The go doc command is deleted The “go doc” command is deleted. Note that the `godoc` tool itself is not deleted, just the wrapping of it by the `go` command. All it did was show the documents for a package by package path, which godoc itself already does with more flexibility. It has therefore been deleted to reduce the number of documentation tools and, as part of the restructuring of godoc, encourage better options in future. *Updating*: For those who still need the precise functionality of running ``` $ go doc ``` in a directory, the behavior is identical to running ``` $ godoc . ``` ### Changes to the go command The `go get` command now has a `-t` flag that causes it to download the dependencies of the tests run by the package, not just those of the package itself. By default, as before, dependencies of the tests are not downloaded. ## Performance There are a number of significant performance improvements in the standard library; here are a few of them. - The `compress/bzip2` decompresses about 30% faster. - The `crypto/des` package is about five times faster. - The `encoding/json` package encodes about 30% faster. 
- Networking performance on Windows and BSD systems is about 30% faster through the use of an integrated network poller in the runtime, similar to what was done for Linux and OS X in Go 1.1. ## Changes to the standard library ### The archive/tar and archive/zip packages The `archive/tar` and `archive/zip` packages have had a change to their semantics that may break existing programs. The issue is that they both provided an implementation of the `os.FileInfo` interface that was not compliant with the specification for that interface. In particular, their `Name` method returned the full path name of the entry, but the interface specification requires that the method return only the base name (final path element). *Updating*: Since this behavior was newly implemented and a bit obscure, it is possible that no code depends on the broken behavior. If there are programs that do depend on it, they will need to be identified and fixed manually. ### The new encoding package There is a new package, `encoding` , that defines a set of standard encoding interfaces that may be used to build custom marshalers and unmarshalers for packages such as `encoding/xml` , `encoding/json` , and `encoding/binary` . These new interfaces have been used to tidy up some implementations in the standard library. The new interfaces are called `BinaryMarshaler` , `BinaryUnmarshaler` , `TextMarshaler` , and `TextUnmarshaler` . Full details are in the documentation for the package and a separate design document. ### The fmt package The `fmt` package’s formatted print routines such as `Printf` now allow the data items to be printed to be accessed in arbitrary order by using an indexing operation in the formatting specifications. Wherever an argument is to be fetched from the argument list for formatting, either as the value to be formatted or as a width or specification integer, a new optional indexing notation `[` *n*`]` fetches argument *n* instead. The value of *n* is 1-indexed. After such an indexing operating, the next argument to be fetched by normal processing will be *n*+1. For example, the normal `Printf` call ``` fmt.Sprintf("%c %c %c\n", 'a', 'b', 'c') ``` would create the string `"a b c"` , but with indexing operations like this, ``` fmt.Sprintf("%[3]c %[1]c %c\n", 'a', 'b', 'c') ``` the result is “`"c a b"` . The `[3]` index accesses the third formatting argument, which is `'c'` , `[1]` accesses the first, `'a'` , and then the next fetch accesses the argument following that one, `'b'` . The motivation for this feature is programmable format statements to access the arguments in different order for localization, but it has other uses: ``` log.Printf("trace: value %v of type %[1]T\n", expensiveFunction(a.b[c])) ``` *Updating*: The change to the syntax of format specifications is strictly backwards compatible, so it affects no working programs. ### The text/template and html/template packages The `text/template` package has a couple of changes in Go 1.2, both of which are also mirrored in the `html/template` package. First, there are new default functions for comparing basic types. The functions are listed in this table, which shows their names and the associated familiar comparison operator. Name | Operator | | ---|---|---| `eq` | `==` | | `ne` | `!=` | | `lt` | `<` | | `le` | `<=` | | `gt` | `>` | | `ge` | `>=` | These functions behave slightly differently from the corresponding Go operators. First, they operate only on basic types (`bool` , `int` , `float64` , `string` , etc.). 
(Go allows comparison of arrays and structs as well, under some circumstances.) Second, values can be compared as long as they are the same sort of value: any signed integer value can be compared to any other signed integer value for example. (Go does not permit comparing an `int8` and an `int16` ). Finally, the `eq` function (only) allows comparison of the first argument with one or more following arguments. The template in this example, ``` {{if eq .A 1 2 3}} equal {{else}} not equal {{end}} ``` reports “equal” if `.A` is equal to *any* of 1, 2, or 3. The second change is that a small addition to the grammar makes “if else if” chains easier to write. Instead of writing, ``` {{if eq .A 1}} X {{else}} {{if eq .A 2}} Y {{end}} {{end}} ``` one can fold the second “if” into the “else” and have only one “end”, like this: ``` {{if eq .A 1}} X {{else if eq .A 2}} Y {{end}} ``` The two forms are identical in effect; the difference is just in the syntax. *Updating*: Neither the “else if” change nor the comparison functions affect existing programs. Those that already define functions called `eq` and so on through a function map are unaffected because the associated function map will override the new default function definitions. ### New packages There are two new packages. - The `encoding` package is described above. - The `image/color/palette` package provides standard color palettes. ### Minor changes to the library The following list summarizes a number of minor changes to the library, mostly additions. See the relevant package documentation for more information about each change. - The `archive/zip` package adds the`DataOffset` accessor to return the offset of a file’s (possibly compressed) data within the archive. - The `bufio` package adds`Reset` methods to`Reader` and`Writer` . These methods allow the`Readers` and`Writers` to be re-used on new input and output readers and writers, saving allocation overhead. - The `compress/bzip2` can now decompress concatenated archives. - The `compress/flate` package adds a`Reset` method on the`Writer` , to make it possible to reduce allocation when, for instance, constructing an archive to hold multiple compressed files. - The `compress/gzip` package’s`Writer` type adds a`Reset` so it may be reused. - The `compress/zlib` package’s`Writer` type adds a`Reset` so it may be reused. - The `container/heap` package adds a`Fix` method to provide a more efficient way to update an item’s position in the heap. - The `container/list` package adds the`MoveBefore` and`MoveAfter` methods, which implement the obvious rearrangement. - The `crypto/cipher` package adds the new GCM mode (Galois Counter Mode), which is almost always used with AES encryption. - The `crypto/md5` package adds a new`Sum` function to simplify hashing without sacrificing performance. - Similarly, the `crypto/sha1` package adds a new`Sum` function. - Also, the `crypto/sha256` package adds`Sum256` and`Sum224` functions. - Finally, the `crypto/sha512` package adds`Sum512` and`Sum384` functions. - The `crypto/x509` package adds support for reading and writing arbitrary extensions. - The `crypto/tls` package adds support for TLS 1.1, 1.2 and AES-GCM. - The `database/sql` package adds a`SetMaxOpenConns` method on`DB` to limit the number of open connections to the database. - The `encoding/csv` package now always allows trailing commas on fields. - The `encoding/gob` package now treats channel and function fields of structures as if they were unexported, even if they are not. 
That is, it ignores them completely. Previously they would trigger an error, which could cause unexpected compatibility problems if an embedded structure added such a field. The package also now supports the generic`BinaryMarshaler` and`BinaryUnmarshaler` interfaces of the`encoding` package described above. - The `encoding/json` package now will always escape ampersands as “\u0026” when printing strings. It will now accept but correct invalid UTF-8 in`Marshal` (such input was previously rejected). Finally, it now supports the generic encoding interfaces of the`encoding` package described above. - The `encoding/xml` package now allows attributes stored in pointers to be marshaled. It also supports the generic encoding interfaces of the`encoding` package described above through the new`Marshaler` ,`Unmarshaler` , and related`MarshalerAttr` and`UnmarshalerAttr` interfaces. The package also adds a`Flush` method to the`Encoder` type for use by custom encoders. See the documentation for`EncodeToken` to see how to use it. - The `flag` package now has a`Getter` interface to allow the value of a flag to be retrieved. Due to the Go 1 compatibility guidelines, this method cannot be added to the existing`Value` interface, but all the existing standard flag types implement it. The package also now exports the`CommandLine` flag set, which holds the flags from the command line. - The `go/ast` package’s`SliceExpr` struct has a new boolean field,`Slice3` , which is set to true when representing a slice expression with three indices (two colons). The default is false, representing the usual two-index form. - The `go/build` package adds the`AllTags` field to the`Package` type, to make it easier to process build tags. - The `image/draw` package now exports an interface,`Drawer` , that wraps the standard`Draw` method. The Porter-Duff operators now implement this interface, in effect binding an operation to the draw operator rather than providing it explicitly. Given a paletted image as its destination, the new`FloydSteinberg` implementation of the`Drawer` interface will use the Floyd-Steinberg error diffusion algorithm to draw the image. To create palettes suitable for such processing, the new`Quantizer` interface represents implementations of quantization algorithms that choose a palette given a full-color image. There are no implementations of this interface in the library. - The `image/gif` package can now create GIF files using the new`Encode` and`EncodeAll` functions. Their options argument allows specification of an image`Quantizer` to use; if it is`nil` , the generated GIF will use the`Plan9` color map (palette) defined in the new`image/color/palette` package. The options also specify a`Drawer` to use to create the output image; if it is`nil` , Floyd-Steinberg error diffusion is used. - The `Copy` method of the`io` package now prioritizes its arguments differently. If one argument implements`WriterTo` and the other implements`ReaderFrom` ,`Copy` will now invoke`WriterTo` to do the work, so that less intermediate buffering is required in general. - The `net` package requires cgo by default because the host operating system must in general mediate network call setup. On some systems, though, it is possible to use the network without cgo, and useful to do so, for instance to avoid dynamic linking. The new build tag`netgo` (off by default) allows the construction of a`net` package in pure Go on those systems where it is possible. 
- The `net` package adds a new field`DualStack` to the`Dialer` struct for TCP connection setup using a dual IP stack as described in RFC 6555. - The `net/http` package will no longer transmit cookies that are incorrect according to RFC 6265. It just logs an error and sends nothing. Also, the`net/http` package’s`ReadResponse` function now permits the`*Request` parameter to be`nil` , whereupon it assumes a GET request. Finally, an HTTP server will now serve HEAD requests transparently, without the need for special casing in handler code. While serving a HEAD request, writes to a`Handler` ’s`ResponseWriter` are absorbed by the`Server` and the client receives an empty body as required by the HTTP specification. - The `os/exec` package’s`Cmd.StdinPipe` method returns an`io.WriteCloser` , but has changed its concrete implementation from`*os.File` to an unexported type that embeds`*os.File` , and it is now safe to close the returned value. Before Go 1.2, there was an unavoidable race that this change fixes. Code that needs access to the methods of`*os.File` can use an interface type assertion, such as`wc.(interface{ Sync() error })` . - The `runtime` package relaxes the constraints on finalizer functions in`SetFinalizer` : the actual argument can now be any type that is assignable to the formal type of the function, as is the case for any normal function call in Go. - The `sort` package has a new`Stable` function that implements stable sorting. It is less efficient than the normal sort algorithm, however. - The `strings` package adds an`IndexByte` function for consistency with the`bytes` package. - The `sync/atomic` package adds a new set of swap functions that atomically exchange the argument with the value stored in the pointer, returning the old value. The functions are`SwapInt32` ,`SwapInt64` ,`SwapUint32` ,`SwapUint64` ,`SwapUintptr` , and`SwapPointer` , which swaps an`unsafe.Pointer` . - The `syscall` package now implements`Sendfile` for Darwin. - The `testing` package now exports the`TB` interface. It records the methods in common with the`T` and`B` types, to make it easier to share code between tests and benchmarks. Also, the`AllocsPerRun` function now quantizes the return value to an integer (although it still has type`float64` ), to round off any error caused by initialization and make the result more repeatable. - The `text/template` package now automatically dereferences pointer values when evaluating the arguments to “escape” functions such as “html”, to bring the behavior of such functions in agreement with that of other printing functions such as “printf”. - In the `time` package, the`Parse` function and`Format` method now handle time zone offsets with seconds, such as in the historical date “1871-01-01T05:33:02+00:34:08”. Also, pattern matching in the formats for those routines is stricter: a non-lowercase letter must now follow the standard words such as “Jan” and “Mon”. - The `unicode` package adds`In` , a nicer-to-use but equivalent version of the original`IsOneOf` , to see whether a character is a member of a Unicode category.
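To make the new `runtime/debug` knobs described above a little more concrete, here is a minimal sketch; the particular limits are arbitrary, chosen only for illustration:

```
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// Raise the process-wide limit on OS threads from the default of
	// 10,000 described above; the previous setting is returned.
	prevThreads := debug.SetMaxThreads(20000)

	// Cap any single goroutine's stack at 64 MB, instead of the default
	// 1GB (64-bit) or 250MB (32-bit); again the previous value is returned.
	prevStack := debug.SetMaxStack(64 << 20)

	fmt.Printf("previous limits: %d threads, %d-byte stacks\n", prevThreads, prevStack)
}
```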
true
true
true
null
2024-10-13 00:00:00
2013-01-01 00:00:00
null
null
null
Golang
null
null
31,144,103
https://threadreaderapp.com/thread/1517846294873653249.html
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
11,166,303
http://www.theguardian.com/environment/commentisfree/2016/feb/23/the-guardian-view-on-air-pollution-breathe-uneasy
Boris Johnson leaves London breathing uneasy | Editorial
Editorial
Every so often a statistic emerges to send shockwaves through the most innumerate skulls. One such figure, highlighted by the Royal College of Physicians in a report on Tuesday, is the annual toll of 40,000 premature deaths attributable to outdoor air pollution. It implies that the finger can be pointed at unclean air for about 8% of all of the half million or so deaths recorded in the UK every year, a far higher proportion than is usually blamed on alcohol or obesity, two public health problems that grab more attention. Factoring indoor pollution into the mix – familiar fiends such as secondhand tobacco smoke, and overlooked enemies like spray deodorants – only strengthens the link between the air we breathe and our last gasp. To acknowledge the importance of pollution should not amount to a counsel of despair. Britain led the world in dispelling the coal-caused smogs of the 1950s with the clean air acts, and a generation later called time on leaded petrol. Such progressive past steps have contributed to far longer average lives. With determination, the great culprits of our own time, nitrogen dioxide and diesel particulates, which between them contribute to wheezing, heart disease and cancer, might be tackled the same way. In many other European metropolises, and not least in German-speaking centres, which this week dominate the top flight in a global league table of good cities to live, all sorts of serious action is under way, ranging from pedestrianisation to outright bans on the dirtiest diesel cars. But in the UK in general and London in particular, whose place in the city rankings was dragged down by its air, all urgency is lacking. Where Berlin banned the most polluting old diesel cars at the very start of the current decade, London will not do so until its very end. And – even then – Boris Johnson has announced exemptions for 300 of the Routemaster buses that make up a costly part of his personal brand, and indeed for any other old smoke-box whose driver is willing and able to come up with £12.50 a day. Across the UK, 38 out of 43 zones are in breach of EU standards on nitrogen dioxide, and the government has been hauled up over the lack of plans to comply in 16 of these. But it is London, which is not going to get there till at least 2025, that stands out in a singular shame, and Mr Johnson’s record has not helped at all. A King’s College London note suggests that many roads in central London will tend to have the highest nitrogen dioxide concentrations in the world. At one spot, in Putney, the annual quota of very high pollution hours, meant to last for all of 2016, was exhausted on 8 January. Private hire vehicles are mushrooming, not least due to Uber, but whereas one might still hope that the number of diesels among them would have started to fall, it too has risen. Running through it all is a lack of political will. If you can’t measure it you can’t manage it, the consultants say, and the VW scandal revealed how the mismeasurement of pollution can make management impossible. A few years ago, London’s government was spraying a de-icer around to “glue” pollution to the road, in tiny areas sometimes right by EU air monitoring stations, a process likened by one Labour MP to strapping an oxygen mask on to the canary down the mine. Last month a monitor in polluted Oxford Street went offline, and another – covered – device seized up from water damage shortly before the Olympics. 
No doubt accidents can explain these things, but does the prospect of European fines – which, as the mayor himself once acknowledged, could eventually total £300m, for nitrogen dioxide and for particulates too – encourage an overly relaxed approach to getting them fixed? And have such fines fuelled the current heightening of Mr Johnson’s hostility towards the EU? Every Briton, in the capital and beyond, should take a deep breath – and then ask themselves whether they would rather all control over its quality was passed from Brussels to London.
true
true
true
Editorial: Obesity and alcohol command more attention, but floating poisons such as diesel fumes take just as heavy a toll. London under Boris Johnson illustrates how to fail this public health challenge
2024-10-13 00:00:00
2016-02-23 00:00:00
https://i.guim.co.uk/img…f9dc1212a259dc97
article
theguardian.com
The Guardian
null
null
19,953,026
https://sandymaguire.me/blog/brilliance/
You Don't Need to Be Brilliant to Do Brilliant Work
null
Greatness is something you do, not something you are. -Sebastian Marshall # I My friend Csongor recently published a computer science paper that’s super interesting if you’re as big a nerd as I am. The details are too much to go into here, but suffice to say, it lets us do a lot of things I’d always been told were impossible. Being told you’re now allowed to do the impossible is a staggering experience. What’s more staggering is knowing the guy responsible for letting you do it. I was curious about how he’d gone about solving this problem. Part of it was a question of ego — I’d run into this problem around the same time Csongor had. But where he’d stuck with it and eventually broken through, I’d bounced off it assuming there was a *reason* for the limitation. Csongor is obviously brilliant, but I don’t think his brain works *significantly better* than mine. Rather than assuming he’d solved the problem because he was smart and I wasn’t, I took the position that he had some *skill* that I was missing. Skills, after all, can be learned. And so I asked him. How did he start working on it? What was the process like? How long did it take? His answer to the first question1 was: I wanted to understand why this limitation was there. Then learning the answer revealed that it’s actually something that can be fixed — as is always the case with these things if you think about them enough. He made sure to reiterate the point: Though I must say that the solution is really quite obvious once you know the underlying reasons, so there was not much brilliance involved. # II Sometime recently, without realizing it, I’ve become a big wig in my nerdy programming circle. All of a sudden people were throwing my name around in the company of the people I looked up to, whose work I’d always felt was far beyond my grasp. This was puzzling to me for an embarrassingly long time. What had changed? I was still the same guy as always, doing lots of experiments and having 95% of them fail on me. I was still as outspoken as ever. What changed? I think it’s that I wrote a book. All of a sudden my status jumped up a few rungs because my ideas were worthy of *a book.* I mean, it’s a great book and you should go buy a copy, but it’s nothing novel. It’s just a consolidation of lots of existing techniques, that I painstakingly put in the time and sweat to understand for myself. All of a sudden, people had a good educational resource, and it had my name attached to it. The book doesn’t pull any punches — it really and truly is a book of difficult things — but it tries to introduce the ideas as gently and usefully as possible. I think what happened is that people started thinking “man, this book is full of really hard concepts. The guy who wrote it must be really smart.” And they’re not *wrong*, but that’s not the point. Really, most of it I learned from long conversations with exceptionally kind and patient people like Renzo Carbonara and Sukant Hajra. The point is that all people see are my successes. They see this book in its finished form, but are shielded away from the tortuous months I spent writing it. They aren’t aware of just how many hours I spent fighting with LaTeX. Or of cajoling my designer-then-girlfriend to help me pick fonts. Or from the countless sleepless nights I spent spinning the ideas around in my head, trying as best I could to find something, *anything,* to grab on to. None of it was exceptionally difficult. Mostly it was just tedious. 
The book itself took four months to write, but the material took five years to *learn*. And that seemed like a waste of time if I wasn't able to amortize it by helping other people learn the same things. Any idiot could have done what I did — read blog posts, think hard about them, write some code that used the idea, and then write one chapter at a time. That's it. There was no magic.

# III

The point I'm trying to make here is that, on the inside, it doesn't feel remarkable to do "great" work. Csongor says "there was not much brilliance involved" in his work. I'm convinced that any idiot could have put together the same book that I did. The hardest part is putting in the time, and even that's not very hard if you find the process enjoyable and meaningful. To quote Gwern:

None of these seems special to me. Anyone could've compiled the DNB FAQ; anyone could've kept a list of online pharmacies where one could buy modafinil; someone tried something similar to my Google shutdown analysis before me (and the fancier statistics were all standard tools). If I have done anything meritorious with them, it was perhaps simply putting more work into them than someone else would have.

Or as Joe Kachmar says:

It's really nice to realize that most/all of the work on these big projects is just folks who have relentlessly kept tugging on some thread until it unravels neatly for them.

There likely are problems out there that are *brilliance-constrained,* but I'd argue that there are 100x more problems which are merely *effort-constrained.* This is good news, because while it's not clear how to become smarter, it's very doable to just throw more energy at something. Maybe the problems you consider to be exceptionally hard are just ones that merely require some dedication — and a doggedness to fix, come-what-may.

Though the answer to "how long did it take?" did help cement Csongor in my mind as *actually* being brilliant.↩︎

## Related Posts

If you liked this post, you might also enjoy:
true
true
true
null
2024-10-13 00:00:00
2019-04-02 00:00:00
null
null
null
null
null
null
22,834,278
https://roboton.io/tutorial/camera-simple
Virtual Robot Competitions
null
null
true
true
false
Roboton.io is a virtual robot-competition platform. Design, program and simulate robots to fulfill critical missions. All from the comfort of your browser. Build a Sumo robot or a line following robot.
2024-10-13 00:00:00
null
null
null
null
Roboton.io
null
null
14,740,136
https://dev.to/rusrushal13/publish-your-first-image-to-docker-hub
Publish your first image to Docker Hub
Rushal Verma
You are familiar with Docker from my previous post, so let's dive in and explore some more. You know how to run a container and pull an image; now we should publish our own image for others too. Why should you have all the fun ;)

So what do we need to publish our Docker image?

- A Dockerfile
- Your app

Yeah, that's it.

Why package the app the Docker way? Historically, to run our app (maybe a Python app) we needed a Python runtime, along with all of its dependencies, installed on our machine. That creates a situation where the environment on your machine has to be configured just so in order for your app to run as expected, and the same goes for the server where you deploy it. With Docker, you don't need any pre-installed environment. You can just grab a portable Python runtime as an image, no installation necessary. Then, your build can include the base Python image right alongside your app code, ensuring that your app, its dependencies, and the runtime all travel together.

These portable images are defined by something called a Dockerfile. The Dockerfile defines the environment inside the container: it creates an isolated environment for your container, declares which ports will be exposed to the outside world, and specifies which files you want to "copy in" to that environment. After doing that, you can expect the build of your app defined in this Dockerfile to behave exactly the same wherever it runs.

So let's create a directory and make a Dockerfile.

```
FROM python:3.6
WORKDIR /app
ADD . /app
RUN pip install -r requirements.txt
EXPOSE 80
ENV NAME world
CMD ["python", "app.py"]
```

So you have your Dockerfile. You can see the syntax is pretty easy and self-explanatory. Now we need our app. Let's create one, a Python app ;)

`app.py`

```
from flask import Flask
import os
import socket

app = Flask(__name__)

@app.route("/")
def hello():
    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>"
    return html.format(name=os.getenv("NAME", "world"),
                       hostname=socket.gethostname())

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
```

`requirements.txt`

```
Flask
```

Now you have everything you need to proceed. Let's build it. `ls` will now show you this:

```
$ ls
app.py  requirements.txt  Dockerfile
```

Now create the image.

```
docker build -t imagebuildinginprocess .
```

Where is your image? It's in your local image registry.

```
$ docker images
REPOSITORY               TAG     IMAGE ID      CREATED         SIZE
imagebuildinginprocess   latest  4728a04a9d39  14 minutes ago  694MB
```

Let's run it too:

```
docker run -p 4000:80 imagebuildinginprocess
```

What we did here is map port 4000 on the host to the container's exposed port 80. You should see a notice that Python is serving your app at http://0.0.0.0:80. But that message is coming from inside the container, which doesn't know you mapped port 80 of that container to 4000, making the URL http://localhost:4000. Go to that URL in a web browser to see the content served up on a web page, including the "Hello world" text and the container ID.

Let's share it :D We will push our built image to the registry so that we can use it anywhere. The Docker CLI uses Docker's public registry by default.

- Log into the Docker public registry from your local machine. (If you don't have an account, create one at cloud.docker.com.)

```
docker login
```

- Tag the image: it is more like naming the version of the image.
It's optional, but recommended, since it helps with maintaining versions (much like ubuntu:16.04 and ubuntu:17.04).

```
docker tag imagebuildinginprocess rusrushal13/get-started:part1
```

- Publish the image: upload your tagged image to the repository. Once complete, the results of this upload are publicly available. If you log into Docker Hub, you will see the new image there, with its pull command.

```
docker push rusrushal13/get-started:part1
```

Yeah, that's it, you are done. Now you can go to Docker Hub and check it out there too ;). You published your first image.

I found this GitHub repository really awesome; have a look at it: https://github.com/jessfraz/dockerfiles

Do give me feedback for improvement ;)

## Top comments (4)

It is better to explicitly set the Docker base image tag; otherwise it is unpredictable which image version will be used as the base.

Thank you for correcting me. I updated the post :)

Every time Docker uses the latest base image. Here it is the Python 2.7 base image, as it is official. You can run the container and check it ;) BTW thanks for your feedback

No, `python:latest` stands for `python:3.6`, you can see it on Docker Hub: library/python
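To make the comment thread's advice concrete: pinning the base image tag (as this post's `FROM python:3.6` already does, rather than a bare `FROM python`) keeps rebuilds reproducible. The commands below are a minimal sketch for sanity-checking the result, reusing the image name from this tutorial; the `-d` flag and the `curl` check are my additions, not part of the original steps.

```
# Rebuild with the pinned base image, run it detached,
# and check that the app responds on the mapped port.
docker build -t imagebuildinginprocess .
docker run -d -p 4000:80 imagebuildinginprocess
curl http://localhost:4000   # should return the "Hello world!" HTML with the container ID as hostname
```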
true
true
true
. Tagged with digitalproductschool, devops, docker, dockerimage.
2024-10-13 00:00:00
2017-07-09 00:00:00
https://dev-to-uploads.s…46ytpt1hl2rv.jpg
article
dev.to
DEV Community
null
null
22,765,192
https://imply.io/post/hadoop-indexing-apache-druid-configuration-best-practices
Hadoop Indexing for Apache Druid at Scale - Configuration Best Practices - Imply
Rommel Garcia
Batch loads into analytic platforms are still the norm and the trend is moving towards more data being processed and served for ad hoc querying, which requires low latency performance. When Hadoop is involved in pushing data into Druid, performance of the Hadoop indexer is key. The challenge is that as the size of the dataset grows, the previously running Hadoop indexer job is no longer applicable. It has to be tuned to meet the ingest SLA, especially when the size of the dataset is in the 10s or 100s of terabytes. The good news is that once the Hadoop indexer job is tuned for this scale, it will work for larger datasets, with the only variable being the available resources in Hadoop. There are several things to consider when running a large scale Hadoop indexing job. When working on a shared Hadoop cluster, the YARN queue is subdivided accordingly and there's strict enforcement of the maximum amount of memory in the queue that can be used by any given user or group. Each queue also has its own priority. I recently experienced the following: a 60TB raw dataset was ingested in around 6 hours (10TB/hr) using a 40TB queue size. There were several test runs using smaller queues, but these runs took a very long time to finish the ingest. It was clear that the performance of the ingest was directly proportional to the size of the queue. Job priority and preemption go hand-in-hand. Any job that comes in with VERY_HIGH priority will preempt other jobs with lower priority, meaning YARN will kill mappers and reducers from other running jobs to guarantee minimum memory requirements for the higher priority indexing job. For a very large Hadoop indexing job to finish on time, set the priority level to VERY_HIGH. You will know if other jobs are taking away resources from your indexing job when you see an error exit code of either 137 or 143 from mappers and reducers. Compressing the output of the mappers will reduce the latency to write it to disk and hand it to reducers. Consider compressing the input files for the mappers as well. Both Snappy and LZO are good options, but I prefer LZO when the files are larger than the block size because it's splittable, and this promotes more parallelism. It also prevents the data from taking up too much space on the network. The network between Hadoop and Druid will define how fast segments can be pushed to deep storage, and how fast they can be published and balanced to historicals. Scheduling indexing jobs when the Hadoop cluster is not that busy will help, but often there's only a small window available to take advantage of. What further complicates the publishing of segments is when the Hadoop cluster is on-prem, Druid is in the cloud, and HDFS is used as deep storage. Fig. 1. Segment Publishing In Fig. 1 above, the amount of time taken to publish the segments to Druid will depend on the speed of the network link between the two data centers. Network usage needs to be monitored during indexing to ascertain peak throughput. Of course, co-locating Hadoop and Druid is ideal. Fig. 2. Saturated Network During Segment Publishing As shown in Fig. 2, the total segment size being published to Druid in the cloud was around 12 TB over 9 Gbps of network bandwidth. It maxed out at 8.76 Gbps since there is other non-Druid data running through the network.
Increasing bandwidth above 9 Gbps, for example to 40 Gbps, will improve the segment sync time by 4x. The following properties below are the knobs that can be turned to improve the performance of loading/dropping of segments and distributing the segments uniformly across historicals. **druid.coordinator.loadqueuepeon.type** This helps with balancing the segments, segment loading or drops across historicals. The default value is curator which is single threaded. It is best to use http since this is more performant and is multi-threaded. **druid.coordinator.loadqueuepeon.http.batchSize** This defines how many segments to load/drop in one HTTP request. Increase this value until the network is saturated. This must be smaller than druid.segmentCache.numLoadingThreads. The default value is 1 but can be increased. In one of our very large indexing jobs, setting this value to 30 was optimal. **druid.segmentCache.numLoadingThreads** Concurrently loading/dropping segments from deep storage is the goal of this property. The default value is 10. Be careful not to set this too high as the I/O will saturate and queries are significantly impacted especially if indexing and querying are happening at the same time. As you increase the value, monitor the I/O and network throughput. You can use iostat to monitor how much data is written/read via the metrics MB_wrtn/s and MB_read/s. If you keep increasing the value and there is no more improvement on MB_wrtn/s and MB_read/s, then there is no more bandwidth left to consume. **balancerComputeThreads** Sets how many threads will be used to balance the segments across the historicals. The default value is 1. If you see in the coordinator log that the balancer cycle is taking more than a minute, increase the value until there are no more segments stuck in the pipe. You should see the occurrence “Load Queues:” which get logged once per run. **maxSegmentsToMove** Specifies the ceiling for how many segments can be moved at any given time. If the input data to be indexed is greater than 10 TB, using at least 600 will make the segment balancing much faster. This also depends on the network bandwidth Druid is running on. Cloud providers typically have a very good ways of increasing the network pipe such as # of cpu cores, # of vNics that can be attached to vms, etc. **maxSegmentsInNodeLoadingQueue** The default value to this is 0 which is unbounded. It is always a good idea to cap this at a certain number so segment publishing is controlled at a rate the network can support. You can start with 100 and increase from there. **druid.coordinator.balancer.strategy** There are 3 types of balancers – cachingCost, diskNormalized and random. It is recommended to use cachingCost as this is more efficient in distributing the segments across historicals evenly. **druid.segmentCache.numBootstrapThreads** For very large clusters, it is recommended to use higher than the default value of 10. For example, if you have an off-heap value of at least 100 GB on each historical and the average segment size is 500 MB, you have about 200 segments that you can fit into that memory. Applying this value will speed up the loading time of segments 20x upon startup. Given a very large dataset to index, the default 10,000,000 maximum split size in the Hadoop cluster might not be enough. Set mapreduce.job.split.metainfo.maxsize = -1 which means unlimited splits. 
This raises the question: if I have this many mappers, how many reducers should I use, and what memory settings should I apply to the containers, given that there are over a hundred million blocks to be processed? If the number of mappers and reducers is not set right and the allocated memory is insufficient, you will get the following error. For a 10 TB/hr ingest rate, the parameters used were the following. Anything lower than these settings will either make the indexing very slow or will lead to a lot of failed containers. The number of reducers is determined by the numShards or targetPartitionSize property in the ingest spec. The formula below is a starting point for determining the correct number of reducers. `# of reducers = # of partitions by segmentGranularity x numShards` If your ingest spec uses a segmentGranularity of MONTH, there are three months of data to be indexed, and the numShards specified is 5,000, then the total number of reducers is 15,000. This might require an iterative approach to ensure that the segment size is between 300 MB and 700 MB. This is critical since the foundation of query performance lies in the size of the segments. If the last reducer takes a very long time to finish, it means that the number of reducers is too high or the memory settings are too low. Enabling the reducers to start reading map output before all mappers have finished also speeds up the job. Not by a big factor, but the impact is noticeable for a very large job. You can control this with the mapreduce.job.reduce.slowstart.completedmaps property, which defines the fraction of mappers that must complete before reducers are scheduled. A value of 0.8 for this property gave a good balance between mappers and reducers. Also, always reuse your containers. Setting up and tearing down containers takes time, and by reusing them, reducers can run faster. Use the mapred.job.reuse.jvm.num.tasks parameter to specify how many tasks may run in each reused JVM; a rule of thumb is to set it to the total number of reducers. Block size matters in large scale Hadoop indexing for Druid. Using 512 MB blocks instead of 128 MB reduces the mapper and reducer time by up to a factor of 3. A larger block size reduces shuffling time and block loading time, uses memory more effectively, cuts the CPU time spent, and fewer failures mean faster container job execution. The factor with the biggest impact on job completion is how you manage I/O operations in Hadoop. All of the memory management recommendations above greatly reduce the trips to disk. Ideally, mappers and reducers should only spill to disk once. More than that will exponentially slow the job. As shown in Fig. 3 below, if you take the ratio of spilled records for mapper and reducer, it should be less than or equal to 1. Listed below are the formulas to measure the number of spills to disk. `number of spills (map) = spilled records / map output records <= 1` `number of spills (reduce) = spilled records / reduce input records <= 1` Fig. 3. Job Counters for Map/Reduce Based on the formulas given above, in Fig. 3 the spill ratio for map is 1 while for reduce it is 0.6. This is a very efficient Hadoop indexing job. With a shared Hadoop cluster, it is very common to have a lot of failures, especially for very large jobs. But most of the failures can be controlled based on all the recommended settings above. Once you reach this kind of efficiency, all you have to consider is adding more machines to process larger datasets.
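To gather the knobs from this post in one place, here is an illustrative sketch. The `druid.*` entries belong in the coordinator/historical `runtime.properties`, while `maxSegmentsToMove`, `maxSegmentsInNodeLoadingQueue`, and `balancerComputeThreads` are part of the coordinator dynamic configuration (typically POSTed to `/druid/coordinator/v1/config`), and the MapReduce settings can be passed through `jobProperties` in the Hadoop ingestion spec's `tuningConfig`. The values come from this post where it gives them (batchSize 30, maxSegmentsToMove 600, queue cap 100, slowstart 0.8, the 15,000-reducer example); the loading/bootstrap thread counts and balancer thread count are placeholders to tune against your own cluster, and the post's exact container memory settings are not reproduced here.

```
# runtime.properties (coordinator / historical) -- example values
druid.coordinator.loadqueuepeon.type=http
druid.coordinator.loadqueuepeon.http.batchSize=30
druid.coordinator.balancer.strategy=cachingCost
druid.segmentCache.numLoadingThreads=20
druid.segmentCache.numBootstrapThreads=50
```

```
{
  "maxSegmentsToMove": 600,
  "maxSegmentsInNodeLoadingQueue": 100,
  "balancerComputeThreads": 4
}
```

```
"tuningConfig": {
  "type": "hadoop",
  "jobProperties": {
    "mapreduce.job.split.metainfo.maxsize": "-1",
    "mapreduce.job.reduce.slowstart.completedmaps": "0.8",
    "mapreduce.map.output.compress": "true",
    "mapred.job.reuse.jvm.num.tasks": "15000"
  }
}
```

As with any tuning guide, verify the property names against the Druid and Hadoop versions you run; some `mapred.*` names have newer `mapreduce.*` equivalents.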
true
true
true
When Hadoop is pushing data into Druid, Hadoop indexer performance is key and becomes challenging at scale. There are a quite a few things to consider when running large scale Hadoop indexing.
2024-10-13 00:00:00
2023-03-12 00:00:00
https://imply.io/wp-cont…or-mapreduce.png
article
imply.io
Imply
null
null
2,459,128
http://www.jimonlight.com/2010/05/24/cirque-du-soleils-ka-melts-jimonlight-coms-mind/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,919,983
http://deswal.org/saas/to-reduce-churn-your-saas-needs-to-be-adopted-widely-and-deeply/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,873,656
https://vimeo.com/89936101
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
15,365,074
https://blogs.msdn.microsoft.com/uk_faculty_connection/2017/09/28/visual-studio-code-tools-for-ai-extension/
Visual Studio Code Tools for AI Extension
null
# Visual Studio Code Tools for AI Extension Visual Studio Code Tools for AI is an extension to build, test, and deploy Deep Learning / AI solutions in Microsoft Visual Studio Code. It allows you to develop deep learning and AI solutions across Windows and macOS. This extension seamlessly integrates with Azure Machine Learning for robust experimentation capabilities, including but not limited to submitting data preparation and model training jobs transparently to different compute targets. Additionally, it provides support for custom metrics and run history tracking, enabling data science reproducibility and auditing. It also offers enterprise-ready collaboration, allowing you to work securely on projects with other people. VS Code Tools for AI is a cross-platform extension that supports deep learning frameworks including Microsoft Cognitive Toolkit (CNTK), Google TensorFlow and more. Because it's an IDE we've enabled familiar code editor features like syntax highlighting, IntelliSense (auto-completion) and text auto formatting. You can interactively test your deep learning application in your local environment using step-through debugging on local variables and models. This extension makes it easy to train models on your local computer, or you can submit jobs to the cloud by using our integration with Azure Machine Learning. You can submit jobs to different compute targets like Spark clusters, Azure GPU virtual machines and more. ### Installing the Extension - First, install Visual Studio Code, then install the **Tools for AI** extension by pressing **F1** or **Ctrl+Shift+P** to open the command palette, select **Install Extension** and type **tools for AI**. ### Commands The extension provides several commands in the Command Palette for working with deep learning and machine learning: **AI: List Jobs**: View a list of recent jobs you've submitted and their details **AI: Open Azure ML Sample Explorer**: Quickly get started with machine learning and deep learning experimentation by downloading sample projects you can run and modify to meet your needs **AI: Azure ML - Set Subscription**: Set your Azure subscription to use for Azure Machine Learning experimentation **AI: Azure ML - Open Terminal**: Open an Azure CLI terminal to access the full Azure feature set **AI: Add Platform Configuration**: Configure an Azure Machine Learning compute target ### Try this extension using the available sample project To open the explorer, do as follows: - Open the command palette (View > **Command Palette** or **Ctrl+Shift+P**). - Enter "ML Sample". - You get a recommendation for "Machine Learning: Open Azure Machine Learning Samples Explorer", select it and press enter. #### Creating a new project from the sample explorer You can browse different samples and get more information about them. Let's browse until we find the "Classifying Iris" sample. To create a new project based on this sample, do the following: - Click the install button on the project sample and notice the commands being prompted, walking you through the steps of creating a new project. **Enter a name** for the project, for example "Iris". **Enter a folder** to create your project and press enter. **Select an existing workspace** and press enter. The project will then be created. You will need to be logged in to access your Azure resources. From the embedded terminal enter "az login" and follow the instructions.
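If you have not opened the sample yet, the training script in this kind of project looks roughly like the sketch below. This is not the actual contents of the sample's iris_sklearn.py, just a minimal stand-in using scikit-learn so you can picture what gets submitted as a job in the next step.

```python
# Minimal stand-in for an Iris classification script (not the sample's actual code).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=200)
model.fit(X_train, y_train)
print("Test accuracy: {:.3f}".format(model.score(X_test, y_test)))
```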
#### Submitting a job to train your model locally or in the cloud Now that the new project is open in Visual Studio Code, you can submit a model training job to your different compute targets (local or VM with docker such as the https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoft-ads.linux-data-science-vm-ubuntu). Visual Studio Code Tools for AI provides multiple ways to submit a model training job. - Context Menu (right click) - **Machine Learning: Submit Job**. - From the command palette: "Machine Learning: Submit Job". - Alternatively, you can run the command directly using Azure CLI, Machine Learning Commands, using the embedded terminal. Open iris_sklearn.py, right click and select **Machine Learning: Submit Job**. - Select your platform: "Azure Machine Learning". - Select your run-configuration: "Docker-Python." If it is the first time your submit a job, you receive a message "No Machine Learning configuration found, creating...". A JSON file is opened, save it ( Ctrl+S). Once the job is submitted, the embedded-terminal displays the progress of the runs. #### View recent job performance and details Once the jobs are submitted, you can list the jobs from the run history. - Open the command palette (View > **Command Palette**or**Ctrl+Shift+P**). - Enter " **AI List**." - You get a recommendation for "AI: List Jobs", select and press enter. - Select the platform "Azure Machine Learning." The Job List View opens and displays all the runs and some related information. To view the results of a job, click on the **job ID** link to see detailed information. ### Additional Resource See the key announcements from Ignite 2017 https://myignite.microsoft.com Joseph Sirosh, Corporate Vice President of the Cloud AI Platform, as he dives deep into the latest additions to the Microsoft AI platform and capabilities. Innovations in AI let any developer and data scientist infuse intelligence into their applications and target entirely new scenarios. https://myignite.microsoft.com/videos/56555 ## Comments **Anonymous** September 29, 2017 Hi I have written a blog on getting started with the VS Code Extension: https://neelbhatt40.wordpress.com/2017/09/28/visual-studio-code-tools-for-artificial-intelligenceai-first-look/**Anonymous** September 29, 2017 Hi Neel, how did you find the experience? **Anonymous** November 15, 2017 so what is the difference between VS code for AI VS Azure ML workbench?**Anonymous** November 17, 2017 Hi Rajesh see this walk through https://blogs.msdn.microsoft.com/uk_faculty_connection/2017/11/02/visual-studio-code-and-the-ai-extension-with-azure-machine-learning-work-bench/
true
true
true
null
2024-10-13 00:00:00
2017-09-28 00:00:00
https://learn.microsoft.…-graph-image.png
website
microsoft.com
MicrosoftLearn
null
null
38,631,409
https://timetravel-pearl.vercel.app/viewer?url=https://github.com/gediminastub/timetravel-files&branch=test1
Create T3 App
null
null
true
true
false
Generated by create-t3-app
2024-10-13 00:00:00
null
null
null
null
null
null
null
2,979,212
https://github.com/cbmi/django-forkit
GitHub - chop-dbhi/django-forkit: **INACTIVE** Adds support for shallow and deep forking (copying) Django model instances.
Chop-Dbhi
Django-Forkit is composed of a set of utility functions for *forking*, *resetting*, and *diffing* model objects. Below is a list of the current utility functions: Creates and returns a new object that is identical to `reference`. `fields` - A list of fields to fork. If a falsy value, the fields will be inferred depending on the value of `deep`. `exclude` - A list of fields to not fork (not applicable if `fields` is defined) `deep` - If `True`, traverses all related objects and creates forks of them as well, effectively creating a new *tree* of objects. `commit` - If `True`, all forks (including related objects) will be saved in the order of dependency. If `False`, all commits are stashed away until the root fork is committed. `**kwargs` - Any additional keyword arguments are passed along to all signal receivers. Useful for altering runtime behavior in signal receivers. `fork(reference, [fields=None], [exclude=('pk',)], [deep=False], [commit=True], [**kwargs])` Same parameters as above, except that an explicit `instance` is required and will result in an in-place update of `instance`. For shallow resets, only the local non-relational fields will be updated. For deep resets, *direct* foreign keys will be traversed and reset. *Many-to-many and reverse foreign keys are not attempted to be reset because the comparison between the related objects for reference and the related objects for instance becomes ambiguous.* `reset(reference, instance, [fields=None], [exclude=('pk',)], [deep=False], [commit=True], [**kwargs])` Commits any unsaved changes to a forked or reset object. `commit(reference, [**kwargs])` Performs a *diff* between two model objects of the same type. The output is a `dict` of differing values relative to `reference`. Thus, if `reference.foo` is `bar` and `instance.foo` is `baz`, the output will be `{'foo': 'baz'}`. *Note: deep diffs only work for simple non-circular relationships. Improved functionality is scheduled for a future release.* `diff(reference, instance, [fields=None], [exclude=('pk',)], [deep=False], [**kwargs])` Also included is a `Model` subclass which implements the above functions as methods.

```
from django.db import models
from forkit.models import ForkableModel

class Author(ForkableModel):
    first_name = models.CharField(max_length=30)
    last_name = models.CharField(max_length=30)
```

Let's create a starting object:

```
author = Author(first_name='Byron', last_name='Ruth')
author.save()
```

To create a copy, simply call the `fork` method. `author_fork = author.fork()` When an object is forked, it immediately inherits its data including related objects.

```
author_fork.first_name # Byron
author_fork.last_name # Ruth
```

Let us change something on the fork and use the `diff` method to compare it against the original `author`. It returns a dictionary of the differences between itself and the passed-in object.

```
author_fork.first_name = 'Edward'
author_fork.diff(author) # {'first_name': 'Edward'}
```

Once satisfied with the changes, simply call `commit`. `author_fork.commit()` For each of the utility functions above, `pre_FOO` and `post_FOO` signals are sent, allowing for a decoupled approach to customizing behavior, especially when performing deep operations.
`sender` - the model class of the instance`reference` - the reference object the fork is being created from`instance` - the forked object itself`config` - a`dict` of the keyword arguments passed into`forkit.tools.fork` `sender` - the model class of the instance`reference` - the reference object the fork is being created from`instance` - the forked object itself `sender` - the model class of the instance`reference` - the reference object the instance is being reset relative to`instance` - the object being reset`config` - a`dict` of the keyword arguments passed into`forkit.tools.reset` `sender` - the model class of the instance`reference` - the reference object the instance is being reset relative to`instance` - the object being reset `sender` - the model class of the instance`reference` - the reference object the instance has been derived`instance` - the object to be committed `sender` - the model class of the instance`reference` - the reference object the instance has been derived`instance` - the object that has been committed `sender` - the model class of the instance`reference` - the reference object the instance is being diffed against`instance` - the object being diffed with`config` - a`dict` of the keyword arguments passed into`forkit.tools.diff` `sender` - the model class of the instance`reference` - the reference object the instance is being diffed against`instance` - the object being diffed with`diff` - the diff between the`reference` and`instance`
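For illustration, a receiver for one of these signals can be connected with Django's standard signal machinery. The sketch below assumes the signal objects are importable from `forkit.signals` (check the package source for the exact module path); the `audit_fork` receiver and its logging are hypothetical.

```python
from django.dispatch import receiver

# Assumption: the signal objects live in forkit.signals; verify the real import path.
from forkit import signals


@receiver(signals.post_fork)
def audit_fork(sender, reference, instance, **kwargs):
    # Called after a fork is created; `reference` is the original object and
    # `instance` is the new, possibly unsaved, fork.
    print("Forked %s pk=%s" % (sender.__name__, reference.pk))
```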
true
true
true
**INACTIVE** Adds support for shallow and deep forking (copying) Django model instances. - chop-dbhi/django-forkit
2024-10-13 00:00:00
2011-09-06 00:00:00
https://opengraph.githubassets.com/efd8b433fc539775eecbc60bff336d8024d23f20f6ba1fd9919d1d0b8fad1a4a/chop-dbhi/django-forkit
object
github.com
GitHub
null
null
318,264
http://sciencenow.sciencemag.org/cgi/content/full/2008/925/2
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,994,395
https://stackshare.io/algolia/how-algolia-reduces-latency-for-21b-searches-per-month
How Algolia Reduces Latency For 21B Searches Per Month - Algolia Tech Stack
Algolia
*By Josh Dzielak, Developer Advocate at Algolia.* Algolia helps developers build search. At the core of Algolia is a built-from-scratch search engine exposed via a JSON API. In February 2017, we processed 21 billion queries and 27 billion indexing operations for 8,000+ live integrations. Some more numbers: - Query volume: 1B/day peak, 750M/day average (13K/s during peak hours) - Indexing operations: 10B/day peak, 1B/day average (spikes can be over 1M/s) - Number of API servers: 800+ - Total memory in production: 64TB - Total I/O per day: 3.9PB - Total SSD storage capacity: 566TB We’ve written about our stack before and are big fans of StackShare and the community here. In this post we‘ll look at how our stack is designed from the ground up to reduce latency and the tools we use to monitor latency in production. I’m Josh and I’m a Developer Advocate at Algolia, formerly the VP Engineering at Keen IO. Being a developer advocate is pretty cool. I get to code, write and speak. I also get to converse daily with developers using Algolia. Frequently, I get asked what Algolia’s API tech stack looks like. Many people are surprised when I tell them: **The Algolia search engine is written in C++ and runs inside of nginx.**All searches start and finish inside of our nginx module.**API clients connect directly to the nginx host where the search happens.**There are no load balancers or network hops.**Algolia runs on hand-picked bare metal.**We use high-frequency CPUs like the 3.9Ghz Intel Xeon E5–1650v4 and load machines with 256GB of RAM.**Algolia uses a hybrid-tenancy model.**Some clusters are shared between customers and some are dedicated, so we can use hardware efficiently while providing full isolation to customers who need it.**Algolia doesn’t use AWS or any cloud-based hosting for the API.**We have our own servers spanning 47 datacenters in 15 global regions. #### Why this infrastructure? The primary design goal for our stack is to **aggressively reduce latency**. For the kinds of searches that Algolia powers—suited to demanding consumers who are used to Google, Amazon and Facebook—latency is a UX killer. Search-as-you-type experiences, which have become the norm since Google announced instant search in 2011, have demanding requirements. Any more than 100ms from end-to-end can be perceived as sluggish, glitchy and distracting. But at 50ms or less the experience feels magical. We prefer magic. ## Monitoring Our monitoring stack helps us keep an eye on latency across all of our clusters. We use Wavefront to collect metrics from every machine. We like Wavefront because it’s simple to integrate (we have it plugged in to StatsD and collectd), provides good dashboards, and has integrated alerting. We use PagerDuty to fire alerts for abnormalities like CPU depletion, resource exhaustion and long-running indexing jobs. For non-urgent alerts, like single process crashes, we dump and collect the core for further investigation. If the same non-urgent alert repeats more than a set number of times, we do trigger a PagerDuty alert. We keep only the last 5 core dumps to avoid filling up the disk. When a query takes more than 1 second we send an alert into Slack. From there, someone on our Core Engineering Squad will investigate. On a typical day, we might see as few as 1 or even 0 of these, so Slack has been a good fit. #### Probes We have probes in 45 locations around the world to measure the latency and the availability of our production clusters. 
We host the probes with 12 different providers, not necessarily the same as where our API servers are. The results from these probes are publicly visible at status.algolia.com. We use a custom internal API to aggregate the large amount of data that probes fetch from each cluster and turn it into a single value per region. #### Downed Machines Downed machines are detected within 30 seconds by a custom Ruby application. Once a machine is detected to be down, we push a DNS change to take it out of the cluster. The upper bound of propagation for that change is 2 minutes (DNS TTL). During this time, API clients implement their internal retry strategy to connect to healthy machines in the cluster, so there is no customer impact. ## Debugging Slow Queries When a query takes abnormally long - more than 1 second - we dump everything about it to a file. We keep everything we need to rerun it including the application ID, index name and all query parameters. High-level profiling information is also stored - with it, we can figure out where time is spent in the heaviest 10% of query processing. A syscall called getrusage analyzes resource utilization of the calling process and its children. For the kernel, we record the number of major page faults (ru_majflt), number of block inputs, number of context switches, elapsed wall clock time (using gettimeofday, so that we don’t skip counting time on a blocking I/O like a major page fault since we’re using memory mapped files) and a variety of other statistics that help us determine the root cause. With data in hand, the investigation proceeds in this order: - The hardware - The software - Operating system and production environment **Hardware** The easiest problem to detect is a hardware issue. We see burned SSDs, broken memory modules and overheated CPUs. We automate the reporting of the most common failures like SSDs by alerting on S.M.A.R.T. data. For infrequent errors, we might need to run a suite of specific tools to narrow down the root cause, like mbw for uncovering memory bandwidth issues. And of course, there is always syslog which logs most hardware failures. Individual machine failures will not have a customer impact because each cluster has 3 machines. Where it’s possible in a given geographical region, each machine is located in a different datacenter and attached to a different network provider. This provides further insulation from network or datacenter loss. **Software** We have some close-to-zero cost profiling information obtained from the getrusage syscall. Sometimes that’s enough to diagnose an issue with the engine code. If not, we need to look to profiling. We can’t run a profiler in production for performance reasons, but we can do this after the fact. An external binary is attached to a profiler, containing exactly the same code as the module running inside of nginx. The profiler uses information obtained by google-perftools, a very accurate stack-sampling profiler, to simulate the exact conditions of the production machine. **OS / Environment** If we can rule out hardware and software failure, the problem might have been with the operating environment at that point in time. That means analyzing system-wide data in the hope of discovering an anomaly. Once we discovered that defragmentation of huge pages in the kernel could block our process for several hundred milliseconds. This defragmentation isn’t necessary because we keep large memory pools like nginx. 
Now we make sure it doesn’t happen, to the benefit of more consistent latency for all of our customers. ## Deployment Every Algolia application runs on a cluster of 3 machines for redundancy and increased throughput. Each indexing operation is replicated across the machines using a durable queue. Clusters can be mirrored to other global regions across Algolia’s Distributed Search Network (DSN). Global coverage is critical for delivering low latency to users coming from different continents. You can think of DSN like a CDN without caching - every query is running against a live, up-to-date copy of the index. #### Early Detection When we release a new version of the code that powers the API, we do it in an incremental, cluster-aware way so we can rollback immediately if something goes wrong. Automated by a set of custom deployment scripts, the order of the rolling deploy looks like this: - Testing machines - Staging machines - ⅓ of production machines - Another ⅓ of production machines - The final ⅓ of production machines First, we test the new code with unit tests and functional tests on a host that with an exact production configuration. During the API deployment process we use a custom set of scripts to run the tests, but in other areas of our stack we’re using Travis CI. One thing we guard against is a network issue that produces a split-brain partition during a rolling deployment. Our deployment strategy considers every new version as unstable until it has consensus from every server, and it will continue to retry the deploy until the network partition heals. Before deployment begins, another process has encrypted our binaries and uploaded them to an S3 bucket. The S3 bucket sits behind CloudFlare to make downloading the binaries fast from anywhere. We use a custom shell script to do deployments. The script launches the new binaries and then checks to make sure that the new process is running. If it’s not, the script assumes that something has gone wrong and automatically rolls back to the previous version. Even if the previous version also can’t come up, we still won’t have a customer impact while we troubleshoot because the other machines in the cluster can still service requests. ## Scaling For a search engine, there are two basic dimensions of scaling: - Search capacity - how many searches can be performed? - Storage capacity - how many records can the index hold? To increase your search capacity with Algolia, you can replicate your data to additional clusters using the point-and-click DSN feature. Once a new DSN cluster is provisioned and brought up-to-date with data, it will automatically begin to process queries. Scaling storage capacity is a bit more complicated. #### Multiple Clusters Today, Algolia customers who cannot fit on one cluster need to provision a separate cluster and create logic at the application layer to balance between them. This is often needed by SaaS companies who have customers growing at different rates, and sometimes one customer can be 10x or 100x compared to the others, so you need to move that customer to somewhere they can fit. Soon we’ll be releasing a feature that takes this complexity behind the API. Algolia will automatically balance data a customer’s available clusters based on a few key pieces of information. The way it works is similar to sharding but without the limitation of shards being pinned to a specific node. Shards can be moved between clusters dynamically. 
This avoids a very serious problem encountered by many search engines - if the original shard key guess was wrong, the entire cluster will have to be rebuilt down the road. ## Collaboration Our humans and our bots congregate on Slack. Last year we had some growing pains, but now we have a prefix-based naming convention that works pretty well. Our channels are named `#team-engineering` , `#help-engineering` , `#notif-github` , etc.. The `#team-` channels are for members of a team, `#help-` channels are for getting help from a team, and `#notif-` channels are for collecting automatic notifications. It would be hard to count the number of Zoom meetings we have on a given day. Our two main offices are in Paris and San Francisco, making 7am-10am PST the busiest time of day for video calls. We now have dedicated "Zoom Rooms" with iPads, high-resolution cameras and big TVs that make the experience really smooth. With new offices in New York and Atlanta, Zoom will become an even more important part of our collaboration stack which also includes Github, Trello and Asana. ## Team When you're an API, performance and scalability are customer-facing features. The work that our engineers do directly affects the 15,000+ developers that rely on our API. Being developers ourselves, we’re very passionate about open source and staying active with our community. **We’re hiring!** Come help us make building search a rewarding experience. Algolia teammates come from a diverse range of backgrounds and 15 different countries. Our values are Care, Humility, Trust, Candor and Grit. Employees are encouraged to travel to different offices - Paris, San Francisco, or now Atlanta - at least once a year, to build strong personal connections inside of the company. See our open positions on StackShare. Questions about our stack? We love to talk tech. Comment below or ask us on our Discourse forum. *Thanks to Julien Lemoine, Adam Surak, Rémy-Christophe Schermesser, Jason Harris and Raphael Terrier for their much-appreciated help on this post.*
true
true
true
GitHub, Slack, NGINX, CloudFlare, and Amazon S3 are some of the popular tools that How Algolia Reduces Latency For 21B Searches Per Month uses. Learn more about the Language, Utilities, DevOps, and Business Tools in Algolia's Tech Stack.
2024-10-13 00:00:00
2022-06-29 00:00:00
https://img.stackshare.i…6a40675aba26.png
article
stackshare.io
StackShare
null
null
33,052,392
https://twitter.com/atroyn/status/1576349058725052418
x.com
null
null
true
true
false
null
2024-10-13 00:00:00
null
null
null
null
X (formerly Twitter)
null
null
33,955,013
https://theconvivialsociety.substack.com/p/lonely-surfaces-on-ai-generated-images
Lonely Surfaces: On AI-generated Images
L M Sacasas
*Welcome to the *Convivial Society*, a newsletter about technology and culture. Many of you are receiving your first installment after finding your way here from my conversation with Sean Illing about attention. Welcome aboard. I was grateful for the invitation, and I thoroughly enjoyed the conversation. You can listen to it here or here. While you’re there, check out the recent interviews with Dr. Gabor Maté and Ian Bogost.* *In this installment, I offer some thoughts on AI generated images … finally. I think it was about a month ago that I first mentioned I was working on this post. It took awhile to come together. As per usual, no hot takes contained within. This will be me thinking about what we’re looking at when we’re looking at AI-generated images and how this looking trains our imagination. Or something like that. * This past summer, the image above, titled “Théâtre D’opéra Spatial,” took first prize at the Colorado State Fair. It was created by Jason Allen with Midjourney, an impressive AI tool used to generate images from text prompts. The image won in the division for “digital art/digitally manipulated photography.” It also prompted a round of online debate about the nature of art and its future. Since then you’ve almost certainly seen a myriad of similar AI generated images come across your feed as more and more people gain access to Midjourney and other similar tools such as DALL-E or Stable Diffusion.1 About about month or two ago, on my little corner of the internet, the proliferation of these images seemed to plateau as their novelty wore off. But this does not mean that such tools are merely a passing fad, only that they may already be settling into more mundane roles and functions: as generators of images for marketing campaigns, for example. The debate about the nature and future of art might have happened anyway, but it was undoubtedly encouraged by Allen’s own provocative claims in interviews about his win at the State Fair. They are perhaps best summed up in this line: “Art is dead, dude. It’s over. AI won. Humans lost.” I’m not sure we need to necessarily defend art from such claims. And if we were so inclined, I don’t think it would be of much use to perform the tired litany of rehearsing similar claims about earlier technologies, such as photography or film. Such litanies tend to imply, whether intentionally or not, that nothing changes. Or, better, that all change is merely additive. In other words, that we have simply added something to the complex assemblage of skills, practices, artifacts, tools, communities, techniques, values, and economic structures that constitute what we tend to call art. They fail to understand, as Neil Postman once put it, that technological change is ecological rather than additive. Powerful new tools can restructure the complex techno-social ecosystem we call art in sometimes striking and often unpredictable ways. Even if we don’t think a new tool “kills” art, we should be curious about how it might transform art, or at least some of the skills and practices we have called art. Others might argue in reply to Allen’s rash declaration that this new form *is* art, or maybe that there is an art to the construction and refinement of prompts that yield the desired images. Alternatively, they may argue that this present form of the technology is only one possible application of the underlying capacities, which might be harnessed more cooperatively by human artists. 
For example, Ethan Zuckerman wrote, “Jason Allen is trolling us by declaring art is dead. Instead, a new way of making art, at the intersection of AI and human skill, is being born.” Some others might even insist, less convincingly in my view, that, in fact, humans win because there is more stuff to go around. If some images are good, then more images are better. If only certain people could develop the skills to draw, paint, or design with digital tools, better to empower everyone with the machine-aided capacity to produce similar work. I’m not sure about any of that. Maybe the proliferation of images will prove alienating. Maybe the alien or hybrid quality of this work will fail to yield the same subjective experience for those who encounter it. Maybe doodling anonymously in notebooks no one will ever see turns out to be more satisfying for some people. Back in September, John Herrman noted that the “flood of machine-generated media” had at least raised the caliber of the discourse around AI: In contrast with the glib intra-VC debate about avoiding human enslavement by a future superintelligence, discussions about image-generation technology have been driven by users and artists and focus on labor, intellectual property, AI bias, and the ethics of artistic borrowing and reproduction. As Herrman approvingly observed, most of the debates about the ethics of AI-generated art have thus far focused on justice for the artists, both living and dead, on whose work these models are trained and those whose labor might be displaced because of their success. These are legitimate and significant areas of concern. You can follow some of the links in the Herrman block quote above to read more about such matters. I find that my own questions, as they have gradually come to me, are a bit different. I’ve been thinking about matters of depth and also about how these images might train our imagination. Along these lines, I appreciated the reflections of another digital artist, Annie Dorsen.2 “When tinkerers and hobbyists, doodlers and scribblers—not to mention kids just starting to perceive and explore the world—have this kind of instant gratification at their disposal,” Dorsen argues, “their curiosity is hijacked and extracted.” “For all the surrealism of these tools’ outputs,” she adds, “there’s a banal uniformity to the results.” She went on to write that “when people’s imaginative energy is replaced by the drop-down menu ‘creativity’ of big tech platforms, on a mass scale, we are facing a particularly dire form of immiseration.” What exactly does such immiseration entail? Allow me to quote Dorsen at length: By immiseration, I’m thinking of the late philosopher Bernard Stiegler’s coinage, “symbolic misery”—the disaffection produced by a life that has been packaged for, and sold to, us by commercial superpowers. When industrial technology is applied to aesthetics, “conditioning,” as Stiegler writes, “substitutes for experience.” That’s bad not just because of the dulling sameness of a world of infinite but meaningless variety (in shades of teal and orange). It’s bad because a person who lives in the malaise of symbolic misery is, like political philosopher Hannah Arendt’s lonely subject who has forgotten how to think, incapable of forming an inner life. 
Loneliness, Arendt writes, feels like “not belonging to the world at all, which is among the most radical and desperate experiences of man.” Art should be a bulwark against that loneliness, nourishing and cultivating our connections to each other and to ourselves—both for those who experience it and those who make it. Not surprisingly, I was struck by the reference to Arendt. The *world* for Arendt is not simply coterminous with the earth. It is rather the relatively stable realm of human things that welcome and outlive each generation. It mediates human relationships and anchors our sense of self. Through our participation in the plurality of the common world of things, we enjoy the consolations community. To be alienated from the world is to find ourselves lonely and isolated—and it is to lose ourselves, too. I’m not sure if this is exactly what Dorsen had in mind, but here’s how I would apply this strand of Arendt’s thinking. (Stay with me, it will seem as if I forgot about Arendt, but we’ll get back to her!) I’ll begin by noting that when I first glanced at Allen’s “Théâtre D’opéra Spatial,” I was taken in by the image, which struck me as evocative and intriguing. But as I came back to the image and sat with it for a while, I found that my efforts to engage it at depth were thwarted. This happened when I began to inspect the image more closely. As I did so, my experience of the image began to devolve rather than deepen. When taken whole and at a glance, the image invited closer consideration, but it did not ultimately sustain or reward such attention. This is not only because the image appeared to fail in some technical sense—hands, for example, seem to give these models trouble—it is that these errors, aberrations, or incongruities are, in a literal sense, insignificant—they signify nothing. They may startle or surprise, which is something, but they do not then go on to capitalize on that initial surprise to lead me on to some deeper insight or aesthetic experience. Rob Horning has made a similar observation in his recent comments about generative AI focused on ChatGPT. “AI models,” Horning observes, presume that thought is entirely a matter of pattern recognition, and these patterns, already inscribed in the corpus of the internet, can [be] mapped once and for all, with human ‘thinkers’ always already trapped within them. The possibility that thought could consist of pattern breaking is eliminated. This also hints at how, as I wrote last summer, we seem to be increasingly trapped in the past by what are essentially machines for the storage and manipulation of memory. The past has always fed our capacity to create what is new, of course, but the success of these tools depends on their ability to fit existing patterns as predictably as possible. The point is to smooth out the uncanny aberrations and to eliminate what surprises us. “The best art isn’t about pleasing or meeting expectations,” as Dan Cohen has put it in a recent essay about generative AI. “Instead, it often confronts us with nuance, contradictions, and complexity. It has layers that reveal themselves over time. 
True art is resistant to easy consumption, and rewards repeated encounters.” On the contrary, Cohen concluded, “The desire of AI tools to meet expectations, to align with genres and familiar usage as their machine-learning array informs pixels and characters, is in tension with the human ability to coax new perspectives and meaning from the unusual, unique lives we each live.” Consider how, in *The Rings of Saturn*, W. G. Sebald interprets Rembrandt’s “The Anatomy Lesson of Dr Nicolaes Tulp.” The dissected arm is all wrong, but this “error,” if we attend to it, leads us on to something vital. It invites a closer consideration of the significance of the scene being depicted, and it rewards such attention with critical insight and depth of meaning. Rather than straightforwardly depicting a step in the grand advance of scientific knowledge, Rembrandt appears to raise a series of questions about the moral standing of the body, the ethics of the procedure, and the nature of vision—the participants have lost sight of the body before them because they have become dependent on its representation in the medical textbook that commands their attention.3 As we are in the midst of what amounts to a series of digressions before we get back to Arendt and loneliness, so let us take one more. In “The Idea of Perfection,” philosopher Iris Murdoch described “uses of words by persons grouped round a common object” as a “central and vital human activity.” What she is aiming at is the importance of developing a wide and diverse vocabulary to support sound moral judgment and showing how that vocabulary depends on a context of common objects of attention, but she gets us there by analogy to the art critic. “The art critic,” she explains, “can help us if we are in the presence of the same object and if we know something about his scheme of concepts. Both contexts are relevant to our ability to move towards ‘seeing more’, towards ‘seeing what he sees’.” And so there is a place for the critic or historian who can, as we gather around “The Anatomy Lesson,” help us to see what is before us. Along with the formal aesthetic features of the painting, there are historical, legal, and social dynamics in play that we may not be able to perceive. While there is room for errors of judgment with regard to interpretation, it is meaningful to say that we can be moved by such conversations toward a deeper understanding of the meaning and significance of the painting. It would be difficult for me to imagine such a conversation taking shape around “Théâtre D’opéra Spatial.” Now, I am prepared to grant that it is I who am missing something of consequence or that this conclusion merely reflects a failure of my imagination. If so, please correct me. It seems to me that one may discuss the technical aspects of the technologies that are yielding these images or how certain features of the image might have appeared or for the artist to explain the process by which they arrived at the prompts that yielded the image. This would be not unlike talking exclusively about the shape of the brush or the chemical composition of the paint. It does not seem to me that we can talk about the image in the way that we could talk about “The Anatomy Lesson” and find that we are moving toward a deeper understanding of the image in the same way. In part, this is because we cannot properly speak about the intentions of the artist or seek to make sense of an embedded order of meanings without making what I think would be a category error. 
I think I can begin to tie these threads together by reference to a few lines from Eva Brann, a long-time tutor at St. John’s College, who, in a talk to incoming freshman introducing the program of great books, observed the following: To my mind texts, like people, are serious when they have a surface that arouses the desire to know them and the depth to fulfill that desire. I think that for us human beings only depths and mysteries induce viable desire. Many a failure of love follows on the—usually false—opinion that we have exhausted the other person’s inside, that there is no further promise of depth. I left the last sentence in because it’s worth thinking about independently of our present subject, but it’s this line that I’d like us to consider: “A surface that arouses the desire to know them and the depth to fulfill that desire.” That line has stuck with me over the years. And I thought about it again as AI-generated images proliferated on my screens, and especially as I thought about Allen’s work and his rash pronouncements about art. By contrast, I recently found myself looking again at Pieter Bruegel’s “Harvesters” on Google’s Arts and Culture site. You will recognize the image as the one that I use as the header art for this newsletter. Bruegel is one of my favorite painters, chiefly for what I take to be his extraordinarily humane and earthy vision. The resolution of the image on Arts and Culture is extremely high, and you can zoom in to see minute details in the painting. When doing so, I was struck by this scene from the deep background of the image: The detail is remarkable. This scene appears in the field slightly to the left of the painting’s center. You could look at this painting for a long time without noticing it. And, of course, that’s much easier to do when looking at a lower resolution image appearing on your screen than it would be were we to be standing in front of the painting itself, although even these fine details would only begin to emerge over time.4 This is one way of thinking about what it means for a work of art to have depth. You can press in, and it won’t dissolve under a more attentive gaze. Naturally, what it means to “press in,” as I put it, varies depending on the medium under consideration. In this case, it means looking intently until our looking is transformed into seeing. But I can imagine analogous modes of pressing in that would apply for music and text, for example, or in the case of taste and texture. Whatever mode these engagements take, they involve attention—Iris Murdoch’s “just and loving gaze directed upon an individual reality.” I suppose, then, that these are the sorts of questions I have for us just now as we navigate the flood of machine-generated media: How will AI-generated images train our vision? What habit of attention does it encourage? What modes of engagement do they sustain? The most important thing about a technology is not necessarily what can be done with it in singular instances, it is rather what habits its use instills in us and how these habits shape us over time. I recently wrote about how the skimming sort of reading that characterizes so much of our engagement with digital texts (and which often gets transferred to our engagement with analog texts) arises as a coping mechanism for the overwhelming volume of text we typically encounter on any given day. So, likewise, might we settle for a scanning sort of looking, one that is content to bounce from point to point searching but never delving thus never quite seeing. 
We are, it seems, offered an exchange. Brann wrote about works that have a “a surface that arouses the desire to know them and the depth to fulfill that desire.” This suggests that there are surfaces that may arouse a desire to know more deeply but which do not have the depth to satisfy that desire. I think this is where we find ourselves with AI-generated art. And, at one level, this is fine, unless we find ourselves conditioned to never expect depth at all or unable to perceive it when we do encounter it. The problem, as I see it, is that we need these encounters with depth of meaning to sustain us, indeed, to do more than sustain us, to elevate our thinking, judgment, and imagination. So the exchange we are offered is this: in place of occasional experiences of depth that renew and satisfy us, we are simply given an infinite surface upon which to skim indefinitely. But let us not forget Arendt! Dorsen, you’ll recall, argued that “when people’s imaginative energy is replaced by the drop-down menu ‘creativity’ of big tech platforms” they suffer a form of symbolic immiseration. Loneliness for Arendt, she noted, “feels like ‘not belonging to the world at all, which is among the most radical and desperate experiences of man.’ Art should be a bulwark against that loneliness, nourishing and cultivating our connections to each other and to ourselves—both for those who experience it and those who make it.” The lack of depth, as I’ve called it following Brann, ultimately issues forth in a kind of loneliness. When I turn to Bruegel or Rembrandt, what I find, whether or not I am fully conscious of it, is not merely technical virtuosity, it is another mind. To encounter a painting or a piece of music or poem is to encounter another person, although it is sometimes easy to lose sight of this fact. I can ask about the meaning of a work of art because it was composed by someone with whom I have shared a world and whose experience is at least partly intelligible to me. Without reducing the meaning of a work of art to the intention of its creator, I can nonetheless ask and think about such intentions. In putting a question to a painting, I am also putting a question to another person. It is for this reason, I think, that Dorsen argues that art can be a bulwark against the loneliness of finding that we do not belong to the world at all. “Friendship,” C. S. Lewis wrote, “is born at that moment when one person says to another, ‘What! You too? I thought I was the only one.’” That moment, I’d argue, can happen through the mediation of a work of art just as surely as it can in conversation with my neighbor. At least as long as the ratio of human to machine intentionality, perhaps difficult to ascertain in practice, is not such that the human is altogether obscured. For what it’s worth, one of the better descriptions I’ve encountered of how these applications work was provided by Marco Donnarumma, who is himself a digital artist and a machine learning researcher. “Figuratively speaking,” Donnarumma explained, “AI image generators create a cartography of a dataset, where features of images and texts (in the form of mathematical abstractions) are distributed at particular locations according to probability calculations.” “The cartography,” he goes on to say, “is called a ‘manifold’ and it contains all the image combinations that are possible with the data at hand. 
When a user prompts a generator, this navigates the manifold in order to find the location where the relevant sampling features lie.” Thanks to Neil Turkewitz for bringing this piece to my attention. I don’t think he ever cites this painting, but this is a striking illustration of Ivan Illich’s argument about a regime of vision in thrall of what he called “the show.” He traced its origins back to “the anatomists looked for new drawing methods to eliminate from their tables the perspectival ‘distortion’ that the natural gaze inevitably introduces into the view of reality.” You can read more on Illich and “the show” in this older installment. I’ve cited her essay before, but I’ll mention it here again. In 2013, art historian Jennifer Roberts wrote about helping her students more wisely deploy their attention: “An awareness of time and patience as a productive medium of learning is something that I feel is urgent to model for—and expect of—my students.” And, as she observed, “in any work of art there are details and orders and relationships that take time to perceive.” "So the exchange we are offered is this: in place of occasional experiences of depth that renew and satisfy us, we are simply given an infinite surface upon which to skim indefinitely." This is such a brilliant encapsulation of the challenge of living in our technological age, and promise offered by technology in so many areas: dependable yet unsatisfying. So much of life-worth-living comes like a surprise during the 'waiting for', the boredom, the mundane. edited Jan 18, 2023I had a thought regarding the quality of these tools. We have certainly not seen the peak in their abilities, but I would argue the peak may be not too far away. The reason is fairly simple: in the future, what will the AI's have left to be trained from? Today, all of these language and image models are trained from existing human art and communication. But as people begin and continue to integrate AI outputs into their work and daily lives, the content on the web will increasingly be reflective of the AIs themselves. Eventually they will be heavily influencing or directly creating nearly all online artifacts. As this process continues, the training data available with which to create and refine AIs will begin to form a feedback loop. The fundamental question is: is this feedback loop one of exponential decay or exponential growth? And, is there a limit? In a game like chess or Go, where AIs don't need human signal but can instead compete against algorithms or themselves, exponential growth (with limits) is both possible and demonstrated. However, I believe this scenario is the opposite and I therefore fail to see how the quality of the models could do anything but decay. This is because the individuals who depend on the AI will become increasingly unable to be coherent without them, effectively removing themselves as relevant training data for how to improve human understanding. Of course, the AIs will always be able to be trained from an impressive trove of archived data, but with no feedback loop I wonder how many of the technologist dreams are even possible. Perhaps more realistically, the legal hurdles of appropriating other's works may actually _require_ new sources of corporate owned data for many use-cases -- data which will become increasingly impossible to find once people are dependent on their AI tools.
true
true
true
The Convivial Society: Vol. 3, No. 20
2024-10-13 00:00:00
2022-12-10 00:00:00
https://substackcdn.com/…f8_1998x1470.png
article
substack.com
The Convivial Society
null
null
18,631,333
https://blog.kingofpops.com/when-the-next-best-thing-is-actually-the-best-thing-a-lesson-from-facebook-apple-microsoft-and-...-king-of-pops
King of Pops
null
### Pop Subscriptions $105.00: 36 of the world's best pops delivered to you (or a loved one) We work hard to be more than a dessert company. Our purpose is to make the world a better place by creating Unexpected Moments of Happiness (we call them UMOHs). Sometimes we do that with amazing pops, and sometimes there are no pops to be found. It’s a pleasure to meet you… we’re glad you’re here. After selling millions of pops at thousands of events and managing hundreds of carts and slingers across the country, we're focused on building out an amazing franchise network. We have 14 years of experience building relationships, cultivating community, having fun and making money with a simple business you can be proud of. Check out our Cart Map for more info. Pop Subscriptions or one-off orders are available on our e-commerce site Rainbowprovisions.com.
true
true
true
King of Pops All-Natural, Hand Crafted Pops can be found in push carts across the South and in select retailers. Locate one of our iconic carts with our rainbow umbrella or book us to come to your next event. Once you fall in love we are always looking for folks to join our Cartrepreneur® Franchise Program.
2024-10-13 00:00:00
2024-01-01 00:00:00
null
null
kingofpops.com
King of Pops
null
null
18,229,037
https://developers.tron.network/
TRON Developer Hub
null
### Introduction ### TRON Protocol ### Token Standards ### DApp Development Guide ### Cross-Chain ### Oracle ### Decentralized Exchanges ### Community Projects Welcome to the TRON developer hub. You'll find comprehensive guides and documentation to help you start working with TRON as quickly as possible, as well as support if you get stuck. Let's jump right in! Copyright © 2017-2022 TRON Network Limited. | All rights reserved.
true
true
true
null
2024-10-13 00:00:00
2018-09-10 00:00:00
https://files.readme.io/…all-icon_red.png
null
tron.network
TRON Developer Hub
null
null
6,721,151
http://guru8.net/2013/11/vine-launches-on-windows-phone-2/
Vine Launches on Windows Phone
Boses Muhinda
The Twitter video app has made its way to the Windows Phone platform. The six-second video-sharing app for Twitter has beaten Facebook’s Instagram to arrive on Windows Phone officially. Vine, which had an estimated 40 million users as of August, will definitely see its user base grow after the move to Windows Phone. Will the absence of Instagram on Windows Phone help Vine? Yes and no: users who migrated from Android to Windows Phone already have Instagram accounts, but some users will prefer native apps and hence use the official Vine app to share short videos. Let's remember that Instagram is mainly a photo-sharing application and that it introduced video sharing four months after Vine did. So the question has to be answered by the user: is 6 seconds better than 15 seconds? Vine has live tiles and integrates properly with your Windows Phone camera so as to keep you within the app. Windows Phone users can now put their cameras to good use by getting the Vine app here.
true
true
true
The twitter, video app has made its way to the windows Phone platform. The 6 second video sharing app for twitter has beaten Facebook’s Instagram to make it to Windows phone officially. Vine which has an estimated 40 million users as of August will definitely see its users grow after the move to windows phone. […]
2024-10-13 00:00:00
2013-11-12 00:00:00
https://137.135.209.182/…/vine-camera.png
article
guru8.net
GURU8
null
null
13,970,237
http://www.anandtech.com/show/11227/intel-launches-optane-memory-m2-cache-ssds-for-client-market
Intel Launches Optane Memory M.2 Cache SSDs For Consumer Market
Billy Tallis
# Intel Launches Optane Memory M.2 Cache SSDs For Consumer Market by Billy Tallis*on March 27, 2017 12:00 PM EST* - Posted in - SSDs - Storage - Intel - SSD Caching - M.2 - NVMe - 3D XPoint - Optane - Optane Memory Last week, Intel officially launched their first Optane product, the SSD DC P4800X enterprise drive. This week, 3D XPoint memory comes to the client and consumer market in the form of the Intel Optane Memory product, a low-capacity M.2 NVMe SSD intended for use as a cache drive for systems using a mechanical hard drive for primary storage. The Intel Optane Memory SSD uses one or two single-die packages of 3D XPoint non-volatile memory to provide capacities of 16GB or 32GB. The controller gets away with a much smaller package than most SSDs (especially PCIe SSD) since it only supports two PCIe 3.0 lanes and does not have an external DRAM interface. Because only two PCIe lanes are used by the drive, it is keyed to support M.2 type B and M slots. This keying is usually used for M.2 SATA SSDs while M.2 PCIe SSDs typically use only the M key position to support four PCIe lanes. The Optane Memory SSD will not function in a M.2 slot that provides only SATA connectivity. Contrary to some early leaks, the Optane Memory SSD uses the M.2 2280 card size instead of one of the shorter lengths. This makes for one of the least-crowded M.2 PCBs on the market even with all of the components on the top side. The very low capacity of the Optane Memory drives limits their usability as traditional SSDs. Intel intends for the drive to be used with the caching capabilities of their Rapid Storage Technology drivers. Intel first introduced SSD caching with their Smart Response Technology in 2011. The basics of Optane Memory caching are mostly the same, but under the hood Intel has tweaked the caching algorithms to better suit 3D XPoint memory's performance and flexibility advantages over flash memory. Optane Memory caching is currently only supported on Windows 10 64-bit and only for the boot volume. Booting from a cached volume requires that the chipset's storage controller be in RAID mode rather than AHCI mode so that the cache drive will not be accessible as a standard NVMe drive and is instead remapped to only be accessible to Intel's drivers through the storage controller. This NVMe remapping feature was first added to the Skylake-generation 100-series chipsets, but boot firmware support will only be found on Kaby Lake-generation 200-series motherboards and Intel's drivers are expected to only permit Optane Memory caching with Kaby Lake processors. Intel Optane Memory Specifications | ||| Capacity | 16 GB | 32 GB | | Form Factor | M.2 2280 single-sided | || Interface | PCIe 3.0 x2 NVMe | || Controller | Intel unnamed | || Memory | 128Gb 20nm Intel 3D XPoint | || Typical Read Latency | 6 µs | || Typical Write Latency | 16 µs | || Random Read (4 KB, QD4) | 300k | || Random Write (4 KB, QD4) | 70k | || Sequential Read (QD4) | 1200 MB/s | || Sequential Write (QD4) | 280 MB/s | || Endurance | 100 GB/day | || Power Consumption | 3.5 W (active), 0.9-1.2 W (idle) | || MSRP | $44 | $77 | | Release Date | April 24 | Intel has published some specifications for the Optane Memory drive's performance on its own. The performance specifications are the same for both capacities, suggesting that the controller has only a single channel interface to the 3D XPoint memory. 
The read performance is extremely good given the limitation of only one or two memory devices for the controller to work with, but the write throughput is quite limited. Read and write latency are very good thanks to the inherent performance advantage of 3D XPoint memory over flash. Endurance is rated at just 100GB of writes per day, for both 16GB and 32GB models. While this does correspond to 3-6 DWPD and is far higher than consumer-grade flash based SSDs, 3D XPoint memory was supposed to have vastly higher write endurance than flash and neither of the Optane products announced so far is specified for game-changing endurance. Power consumption is rated at 3.5W during active use, so heat shouldn't be a problem, but the idle power of 0.9-1.2W is a bit high for laptop use, especially given that there will also be a hard drive drawing power. Intel's vision is for Optane Memory-equipped systems to offer a compelling performance advantage over hard drive-only systems for a price well below an all-flash configuration of equal capacity. The 16GB Optane Memory drive will retail for $44 while the 32GB version will be $77. As flash memory has declined in price over the years, it has gotten much easier to purchase SSDs that are large enough for ordinary use: 256GB-class SSDs start at around the same price as the 32GB Optane Memory drive, and 512GB-class drives are about the same as the combination of a 2TB hard drive and the 32GB Optane Memory. The Optane Memory products are squeezing into a relatively small niche for limited budgets that require a lot of storage and want the benefit of solid state performance without paying the full price of a boot SSD. Intel notes that Optane Memory caching can be used in front of hybrid drives and SATA SSDs, but the performance benefit will be smaller and these configurations are not expected to be common or cost effective. The Optane Memory SSDs are now available for pre-order and are scheduled to ship on April 24. Pre-built systems equipped with Optane Memory should be available around the same time. Enthusiasts with large budgets will want to wait until later this year for Optane SSDs with sufficient capacity to use as primary storage. True DIMM-based 3D XPoint memory products are on the roadmap for next year. Source: Intel ## 127 Comments ## View All Comments ## Eden-K121D - Monday, March 27, 2017 - link I'm confused. Can someone explain how this would work and what benefits would occur in real world usage ?## Billy Tallis - Monday, March 27, 2017 - link As with any cache, data that is frequently or recently used can be accessed more quickly from the cache device than from the larger, slower device. Intel's hope is that ordinary desktop usage is mostly confined to a relatively small data set: the OS, a few commonly-used applications, and some documents.When accessing data that fits in the cache, you'll get SSD-like performance. If you launch a program that isn't in the cache, it'll still be hard drive slow (assuming the cache backing device is a hard drive, of course). Sequential accesses don't have a lot of reason to use the cache and are probably excluded by Intel's algorithms to save cache space for random I/O. ## saratoga4 - Monday, March 27, 2017 - link You can make a large magnetic hard drive faster by adding an external cache. For people who can't afford a large enough SSD, this might be a good choice. 
SSDs are getting cheap though, so this feels like a product that needed to ship a few years earlier to have a real chance.## Gothmoth - Monday, March 27, 2017 - link it feels like a product searching for a reason to exist.if i need fast performance i buy a SDD that delivers 2 GB/s and not a cache device that delivers 1200 mb/s. ## BurntMyBacon - Monday, March 27, 2017 - link @GothmothKeep in mind that consumer NVMe SSDs that boast throughput of 2GB/s or more generally do not reach their peak at low queue depth. Optane is supposed to be able to drive 1200MB/s read throughput at low queue depth (not sure why they listed QD4), so there is potential for some performance improvement here. Most consumer workloads never get out of low queue depth territory, so this could have some small real world benefit. Write throughput, however, is critically low. More importantly, these Optane drive are gear more towards lowering latency than transferring large files. Where HDDs access the data on the order of 10s of mS and SSDs access data on the order of 1mS (give or take), Optane should be able to access data on the order of 1s - 10s of uS. Where Optane will be useful is high numbers of small file accesses (DLLs, library files, etc.). That all said, I'd just as soon leave all the extra complications, compatibility issues, and inconsistencies on the table and get that 2 GB/s sdd that you mentioned until Intel figures out how to make these more compatible and easier to use without requiring a "golden setup". I don't want to buy a new W10, Kaby Lake, 200 series based system just to use one of these. My current W7/W10/Ubuntu, Skylake, 100 series system should work just fine for a good while yet. ## Sarah Terra - Monday, March 27, 2017 - link Anyone remember intel turbo cache? This looks to be nearly the same thing, kind of a let down.## BrokenCrayons - Monday, March 27, 2017 - link I recall it being released with the 965 chipset and offering little to no benefit to the end user. In fact, I think HP and a few other OEMs didn't bother supporting it. Turbo Memory's disappointing performance is one of the reasons why I think Optane is better used as a higher endurance replacement for NAND flash SSDs than as a cache for now progressively less common conventional hard drives.## Byte - Monday, March 27, 2017 - link Maybe it will find a way into Intels SSDs and replace the SLC cache with the Optane with is much bigger and higher performance.## beginner99 - Tuesday, March 28, 2017 - link That would actually be pretty reasonable product compared to this.## ddriver - Tuesday, March 28, 2017 - link SLC is MUCH better than hypetane. Double the endurance, 1/100 the latency. It will we a big step back to replace SLC cache with xpoint.What the industry should really do is go back to SLC in 3D form. Because it doesn't look like xpoint has a density advantage either, as it is already 3D and it takes 28 chips for the measly 448GB. Samsung 960 pro has 2 TB in 4 chips. Sure that's MLC, which is twice as dense as SLC. Meaning that with 3D SLC you could have a terabyte in 4 chips. Now, if you get short of 0.5 TB of xpoint with 28 chips, and you get 1 TB of much faster, durable and overall better SLC with 4 chips, that means it would take like 60 chips to get a TB with xpoint. Making potential 3D SLC "ONLY" 15 TIMES better in terms of density, while still offering superior performance and endurance. Which begs the question, why the hell is intel pushing this dreck??? 
My guess, knowing the bloated obese lazy spoiled brat they are, they put a shameful amount of money into RDing it, and how they are hyping the crap out of it in order to get some returns. They most likely realized its inferiority way back, which prompted them to go for the hype campaign, failing to realize despite (or because of) their brand name, that would do more harm than good the moment it fails to materialize. Which it did - I mean look at how desperate they are at trying to find a market for this thing. Time to add Hypetane to "handheld SOC" in the "intel's grandiose failures" category. The downsides of being a bloated monopolist - you are too full of yourself and too slow to react to a changing market to offer adequate solutions.
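Returning to the endurance rating quoted in the review above, the "3-6 DWPD" range is just the rated 100 GB of writes per day divided by the two drive capacities. A small illustrative snippet of that arithmetic (the function name is mine, not Intel's terminology):

```python
# Back-of-the-envelope check of the endurance figures quoted above:
# Intel rates both the 16 GB and 32 GB models at 100 GB of writes per day.
def drive_writes_per_day(rated_gb_per_day: float, capacity_gb: float) -> float:
    """Drive writes per day = daily write allowance / drive capacity."""
    return rated_gb_per_day / capacity_gb

for capacity in (16, 32):
    print(f"{capacity} GB model: {drive_writes_per_day(100, capacity):.2f} DWPD")
# 16 GB model: 6.25 DWPD, 32 GB model: 3.12 DWPD -- the "3-6 DWPD" in the text.
```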
true
true
true
null
2024-10-13 00:00:00
2017-03-27 00:00:00
https://images.anandtech…b_lr_678x452.jpg
article
anandtech.com
AnandTech
null
null
12,951,497
https://triskell.github.io/2016/11/08/Machine-Learning-explained-to-my-girlfriend.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,056,456
https://medium.com/@dougmill/how-to-choose-a-beginner-programming-language-47cdc5e1b95b#.jl1qcvbm0
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
40,438,818
https://mikhail.kissine.web.ulb.be/papers/Kissine2012b.pdf
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
15,006,833
http://newatlas.com/fermi-paradox-alien-probability/50876/
Are we alone? Statistical analysis suggests that if we are typical, then the answer is probably yes
David Szondy
If there are other civilizations in the Universe, then why, after 60 years of listening and looking, haven't we found any evidence of their existence? According to Daniel Whitmire, retired astrophysicist at the University of Arkansas, this may be because there's no one out there to find. Using statistical analysis, Whitmire concludes that, if Earth is typical, then it isn't possible for any other technological civilizations to exist at the same time as us. There are 10²⁴ stars in the Universe with who knows how many planets revolving about them. With so many worlds where life might evolve to choose from and with over 13 billion years to do the evolving, it seems reasonable that there must be many other civilizations out there that are far more advanced than ours. The trouble is, there isn't a single piece of solid evidence that they exist. In the 1950s, the Italian physicist Enrico Fermi did some back-of-the-envelope sums and asked, "Where is everybody?" Fermi used some extremely conservative assumptions about hypothetical ETs and calculated that even the most lackadaisical and outright lazy civilization would have long ago not only contacted, but reached and colonized every inhabitable planet in the galaxy. For over six decades, the Fermi Paradox has puzzled scientists, with SETI researchers referring to it as the Great Silence. Why the silence? Over the years, many reasons have been given, ranging from the idea that no one is interested in contacting us to paranoid conspiracy theories that earthbound or cosmic authorities are engaged in a massive cover-up. However, the simplest explanation is that the reason we can't find other civilizations is that they aren't there. Whitmire's position is that if the statistical concept called the principle of mediocrity is applied to Fermi's Paradox, it produces the reason we are alone, which is that we are a typical civilization and will go extinct soon, now that we are capable of interstellar communications. The principle of mediocrity is one of the basic assumptions of modern physics and of cosmology in particular. Basically, it states that there is nothing special about our corner of the universe, our planet, or our species. This means that we can, for example, look at how gravity works here and assume that it works exactly the same 10 billion light years away. Whitmire's argument is that the view that we are an unusually young and unusually primitive technological species is wrong. We are the first technological species to appear on Earth, taking 60 million years to evolve from the proto-primates, with no evidence of any preceding tech species. Since the Earth will be able to support life for another billion years, that means the planet could, potentially, produce 23 more species like us. The important point is that we've only been capable of sending messages to the stars for a little over a century, since the invention of radio. Whitmire found that if he assumed that humans are typical rather than exceptional, then the bell curve produced by statistical analysis places us in the middle 95 percent of all civilizations, and that ones that are millions of years old are statistical outliers with a very low probability of existence. In other words, if the human race is typical, then, because we are a young technological species that is the first on our planet and has only been around for about a century, the same must be typical of all other civilizations. Worse, if we are to remain typical, the human race will probably die out, and soon.
This means that other civilizations are biological creatures, not machines, are the first to appear on their planet, and are only around for a couple of centuries before being destroyed. Once those first civilizations die out, the planet's biosphere is so compromised that no other technological species arise to replace them. Sorry, Dr Zaius. Since this is a statistical result, standard deviation is involved. In this case, it's about two hundred years and if the fact that the curve skews older is taken into account, it comes out to 500 years. Whitmire says that even if an assumption other than a bell curve is used, the results are similar. Whitmire's calculations are depressing not only in regard to ETs, but also to ourselves, since they suggest that ours is a very short-lived species and we'll take out everything else on the planet as we leave. One consolation is that, since we only have a sample of one, the longer we stick around, the longer we'll stick around. If we run the numbers in a thousand years, then the predicted lifespan is 5,000 years. If we're here in a million years, then the prediction is five million years. But Whitmire admits that there is another conclusion. "If we're not typical then my initial observation would be correct," says Whitmire. "We would be the dumbest guys in the galaxy by the numbers." The study was published in the *International Journal of Astrobiology*. Source: University of Arkansas
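As a back-of-the-envelope companion to the "if we are typical" reasoning above, here is a toy Monte Carlo of the classic delta-t (Copernican) argument. It is only meant to illustrate why "the longer we stick around, the longer we'll stick around"; the uniform-sampling assumption is mine, and this is not a reconstruction of Whitmire's bell-curve analysis.

```python
import random

# Toy Copernican / delta-t illustration (NOT Whitmire's model): assume the
# moment at which we observe ourselves is uniformly random within our
# civilization's total communicative lifetime L, so observed age A = u * L
# with u drawn from (0, 1].
def predicted_lifetime(observed_age_years: float, quantile: float = 0.5,
                       n_samples: int = 100_000) -> float:
    samples = sorted(observed_age_years / (1.0 - random.random())  # u in (0, 1]
                     for _ in range(n_samples))
    return samples[int(quantile * n_samples)]

for age in (100, 1_000):
    print(f"observed age {age:>5} yr -> median predicted lifetime "
          f"~{predicted_lifetime(age):,.0f} yr")
# The prediction scales linearly with how long we have already been around.
```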
true
true
true
If there are other civilizations in the Universe, then why, after 60 years of listening and looking, haven't we found any evidence of their existence? According to Daniel Whitmire, retired astrophysicist at the University of Arkansas, this may be because there's no one out there to find. Using…
2024-10-13 00:00:00
2017-08-14 00:00:00
https://assets.newatlas.com/dims4/default/40cb422/2147483647/strip/true/crop/1440x756+0+162/resize/1200x630!/quality/90/?url=http%3A%2F%2Fnewatlas-brightspot.s3.amazonaws.com%2Farchive%2Ffermi-statistics-1.jpg
article
newatlas.com
New Atlas
null
null
17,359,349
https://www.wired.com/2012/12/why-do-rocks-melt-volcano/
Why do Rocks Melt on Earth, Anyway?
Erik Klemetti
I get a lot of questions here at *Eruptions*, but one of the more common themes is the properties of rocks - and specifically why they melt where they melt to produce magma? There are a lot of misconceptions out there about the interior of the Earth, namely that the tectonic plates that we make our home (both the continental and oceanic kinds) are sitting on a "sea of magma" that makes up the mantle. As I've said before, the mantle of the Earth, that layer of silicate rocks that starts at ~10-70 km depth and goes down to the outer core at ~2900 km depth that constitutes a large volume of the planet, is *not* molten, but rather a solid that can behave plastically. This means it can flow and convect, which is one of the ways that geologists have theorized that plate motion is started and sustained. However, as we know, rocks are found entirely molten within the Earth, so how can so much of the planet be solid but then some parts of it melt as well? It starts with the question "how do you melt a rock"? The most straightforward way that might pop into you head is "raise the temperature!". That is what happens with ice -- it is solid water that melts when the temperature exceeds 0ºC/32F. However, when it comes to rocks, we run into a problem. The Earth actually isn't really hot enough to melt mantle rocks, which are the source of basalt at the mid-ocean ridges, hotspots and subduction zones. If we assume the mantle that melts is made of peridotite*, the solidus (the point where the rock starts to melt) is ~2000ºC at 2o0 km depth (in the upper mantle). Now, models for the geothermal gradient (how hot it gets with depth; see above) on Earth as you go down through the crust into the upper mantle pegs the temperature at 200 km at somewhere between 1300-1800ºC, well below the melting point of peridotite. So, if it is cooler as you head up, why does this peridotite melt to form basalt? Well, that is where you need to stop thinking about how to heat the rock to melting but rather how to change the rock's melting point (solidus). Think about our ice analogy. During the winter, there are a lot of times where you'd like to get rid of that ice but the ambient temperature is below the air temperature. So, what do you do? One solution is to get that ice to melt at a lower temperature by disrupting the bonding between the H2O molecules -- thus, halting the formation of rigid ice. Salts are a great way to disrupt this, so throw some NaCl or KCl on ice and it will melt at a lower temperature than 0ºC. For a rock, water behaves as its salt. Add water into a mantle peridotite and it will melt at a lower temperature because the bonds in the minerals that make up the rock will be disrupted by the water molecule (we call it a "network modifier"). In a subduction zone (like the Cascades or the Andes), where an oceanic plate slides down under another plate, that downgoing slab releases its water as it heats up. That water then rises up into the mantle above it, causing it to melt at a lower temperature and, bam! Basalt is produced in the process called *flux melting.* Wait! The largest volcanic system on Earth is the mid-ocean ridge system, where you don't have any subduction to bring water down into the mantle to help melting along. Now, why do you get basalt there? This time we have to use another method to melt that peridotite - we need to decompress it at constant temperature. This is called *adiabatic* ascent. 
The mantle is convecting, bringing hot mantle from depth up towards the surface and as it does so, the mantle material stays hot, hotter than the surrounding rocks. The melting point (solidus) of peridotite changes with pressure, so the 2000ºC melting point at 200 km is only ~1400ºC at 50 km. So, keep that mantle material hot and decompress it and you get melting to form basalt! So, underneath mid-ocean ridges (and at hotspots like Hawaii), the mantle is upwelling, causing *decompression melting* to occur. Let's review: Under normal conditions, mantle rock like peridotite shouldn't melt in the Earth's upper mantle -- it is just too cool. However, by adding water you can lower the melting point of the rock. Alternatively, by decompressing the rock, you can bring it to a pressure where the melting point is lower. In both cases, basalt magma will form and considering it is hotter and less dense than the surrounding rock, it will percolate towards the surface ... and some of that erupts! *The mantle is definitely not homogenous, but for our purposes, we're interested in what we call "fertile mantle" -- that is, mantle that hasn't experienced melting before and can produce basaltic liquid.
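To make the decompression-melting idea above concrete, here is a crude numerical sketch that uses only the two solidus points quoted in the piece (about 2000°C at 200 km and about 1400°C at 50 km). The straight-line solidus between those points and the constant-temperature ascent are simplifying assumptions for illustration only, not a real petrologic model.

```python
# Toy illustration of decompression melting, using the two solidus points
# quoted above and assuming, crudely, a linear solidus between them and a
# constant-temperature ("adiabatic") ascent. Real solidus curves are not linear.
def solidus_c(depth_km: float) -> float:
    # Linear interpolation between (50 km, 1400 °C) and (200 km, 2000 °C).
    return 1400 + (depth_km - 50) * (2000 - 1400) / (200 - 50)

parcel_temp_c = 1500  # within the 1300-1800 °C geotherm range quoted above

for depth in (200, 150, 100, 50):
    state = "melts" if parcel_temp_c > solidus_c(depth) else "stays solid"
    print(f"{depth:>3} km: solidus ~{solidus_c(depth):.0f} °C -> parcel {state}")
# The same 1500 °C parcel is solid at 200 km but crosses the solidus as it rises.
```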
true
true
true
I get a lot of questions here at Eruptions, but one of the more common themes is the properties of rocks – and specifically why they melt where they melt to produce magma? There are a lot of misconceptions out there about the interior of the Earth, namely that the tectonic plates that we make […]
2024-10-13 00:00:00
2012-12-19 00:00:00
https://media.wired.com/…ediaFile-375.jpg
article
wired.com
WIRED
null
null
11,802,993
https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/
Docker and the PID 1 zombie reaping problem
Hongli Lai
When building Docker containers, you should be aware of the PID 1 zombie reaping problem. That problem can cause unexpected and obscure-looking issues when you least expect it. This article explains the PID 1 problem, explains how you can solve it, and presents a pre-built solution that you can use: Baseimage-docker. *When done, you may want to read part 2: Baseimage-docker, fat containers and "treating containers as VMs".* ## Introduction About a year ago -- back in the Docker 0.6 days -- we first introduced Baseimage-docker. This is a minimal Ubuntu base image that is modified for Docker-friendliness. Other people can pull Baseimage-docker from the Docker Registry and use it as a base image for their own images. We were early adopters of Docker, using Docker for continuous integration and for building development environments way before Docker hit 1.0. We developed Baseimage-docker in order to solve some problems with the way Docker works. For example, Docker does not run processes under a special init process that properly reaps child processes, so that it is possible for the container to end up with zombie processes that cause all sorts of trouble. Docker also does not do anything with syslog so that it's possible for important messages to get silently swallowed, etcetera. However, we've found that a lot of people have problems understanding the problems that we're solving. Granted, these are low-level Unix operating system-level mechanisms that few people know about or understand. So in this blog article we will describe the most important problem that we're solving -- the PID 1 problem zombie reaping problem -- in detail. We figured that: - The problems that we solved are applicable to *a lot*of people. - Most people are not even aware of these problems, so things can break in unexpected ways (Murphy's law). - It's inefficient if everybody has to solve these problems over and over. So in our spare time we extracted our solution into a reusable base image that everyone can use: Baseimage-docker. This image also adds a bunch of useful tools that we believe most Docker image developers would need. We use Baseimage-docker as a base image for all our Docker images. The community seemed to like what we did: we are the most popular third party image on the Docker Registry, only ranking below the official Ubuntu and CentOS images. ## The PID 1 problem: reaping zombies Recall that Unix processes are ordered in a tree. Each process can spawn child processes, and each process has a parent except for the top-most process. This top-most process is the init process. It is started by the kernel when you boot your system. This init process is responsible for starting the rest of the system, such as starting the SSH daemon, starting the Docker daemon, starting Apache/Nginx, starting your GUI desktop environment, etc. Each of them may in turn spawn further child processes. Nothing special so far. But consider what happens if a process terminates. Let's say that the bash (PID 5) process terminates. It turns into a so-called "defunct process", also known as a "zombie process". Why does this happen? It's because Unix is designed in such a way that parent processes must explicitly "wait" for child process termination, in order to collect its exit status. The zombie process exists until the parent process has performed this action, using the waitpid() family of system calls. I quote from the man page: "A child that terminates, but has not been waited for becomes a "zombie". 
The kernel maintains a minimal set of information about the zombie process (PID, termination status, resource usage information) in order to allow the parent to later perform a wait to obtain information about the child." In every day language, people consider "zombie processes" to be simply runaway processes that cause havoc. But formally speaking -- from a Unix operating system point of view -- zombie processes have a very specific definition. They are processes that have terminated but have not (yet) been waited for by their parent processes. Most of the time this is not a problem. The action of calling `waitpid()` on a child process in order to eliminate its zombie, is called "reaping". Many applications reap their child processes correctly. In the above example with sshd, if bash terminates then the operating system will send a SIGCHLD signal to sshd to wake it up. Sshd notices this and reaps the child process. But there is a special case. Suppose the parent process terminates, either intentionally (because the program logic has determined that it should exit), or caused by a user action (e.g. the user killed the process). What happens then to its children? They no longer have a parent process, so they become "orphaned" (this is the actual technical term). And this is where the init process kicks in. The init process -- PID 1 -- has a special task. Its task is to "adopt" orphaned child processes (again, this is the actual technical term). This means that the init process becomes the parent of such processes, even though those processes were never created directly by the init process. Consider Nginx as an example, which daemonizes into the background by default. This works as follows. First, Nginx creates a child process. Second, the original Nginx process exits. Third, the Nginx child process is adopted by the init process. You may see where I am going. The operating system kernel automatically handles adoption, so this means that the kernel expects the init process to have a special responsibility: **the operating system expects the init process to reap adopted children too**. This is a very important responsibility in Unix systems. It is such a fundamental responsibility that many many pieces of software are written to make use of this. Pretty much all daemon software expect that daemonized child processes are adopted and reaped by init. Although I used daemons as an example, this is in no way limited to just daemons. Every time a process exits even though it has child processes, it's expecting the init process to perform the cleanup later on. This is described in detail in two very good books: Operating System Concepts by Silberschatz et al, and Advanced Programming in the UNIX Environment by Stevens et al. ### Why zombie processes are harmful Why are zombie processes a bad thing, even though they're terminated processes? Surely the original application memory has already been freed, right? Is it anything more than just an entry that you see in `ps` ? You're right, the original application memory has been freed. But the fact that you still see it in `ps` means that it's still taking up some kernel resources. I quote the Linux waitpid man page: "As long as a zombie is not removed from the system via a wait, it will consume a slot in the kernel process table, and if this table fills, it will not be possible to create further processes." ### Relationship with Docker So how does this relate to Docker? 
Well, we see that a lot of people run only one process in their container, and they think that when they run this single process, they're done. But most likely, this process is not written to behave like a proper init process. That is, instead of properly reaping adopted processes, it's probably expecting another init process to do that job, and rightly so. Let's look at a concrete example. Suppose that your container contains a web server that runs a CGI script that's written in bash. The CGI script calls grep. Then the web server decides that the CGI script is taking too long and kills the script, but grep is not affected and keeps running. When grep finishes, it becomes a zombie and is adopted by the PID 1 (the web server). The web server doesn't know about grep, so it doesn't reap it, and the grep zombie stays in the system. This problem applies to other situations too. We see that people often create Docker containers for third party applications -- let's say PostgreSQL -- and run those applications as the sole process inside the container. You're running someone elses code, so can you really be sure that those applications *don't* spawn processes in such a way that they become zombies later? If you're running your own code, and you've audited all your libraries and all their libraries, then fine. But in the general case you *should* run a proper init system to prevent problems. ### But doesn't running a full init system make the container heavyweight and like a VM? An init system does not have to be heavyweight. You may be thinking about Upstart, Systemd, SysV init etc with all the implications that come with them. You may be thinking that full system needs to be booted inside the container. None of this is true. A "full init system" as we may call it, is neither necessary nor desirable. The init system that I'm talking about is a small, simple program whose only responsibility is to spawn your application, and to reap adopted child processes. Using such a simple init system is completely in line with the Docker philosophy. ### A simple init system Is there already an existing piece of software that can run another application and that can reap adopted child processes at the same time? There is **almost** a perfect solution that everybody has -- it's plain old bash. Bash reaps adopted child processes properly. Bash can run anything. So instead having this in your Dockerfile... ``` CMD ["/path-to-your-app"] ``` ...you would be tempted to have this instead: ``` CMD ["/bin/bash", "-c", "set -e && /path-to-your-app"] ``` (The -e directive prevents bash from detecting the script as a simple command and `exec()` 'ing it directly.) This would result in the following process hierarchy: But unfortunately, this approach has a key problem. It doesn't handle signals properly! Suppose that you use `kill` to send a SIGTERM signal to bash. Bash terminates, but does *not* send SIGTERM to its child processes! When bash terminates, the kernel terminates the entire container with all processes inside. These processes are terminated *uncleanly* through the SIGKILL signal. SIGKILL cannot be trapped, so there is no way for processes to terminate cleanly. Suppose that the app you're running is busy writing a file; the file could get corrupted if the app is terminated uncleanly in the middle of a write. Unclean terminations are bad. It's almost like pulling the power plug from your server. But why should you care whether the init process is terminated by SIGTERM? 
That's because `docker stop` sends SIGTERM to the init process. "docker stop" should stop the container cleanly so that you can start it later with "docker start". Bash experts would now be tempted to write an EXIT handler that simply sends signals to child processes, like this: ``` #!/bin/bash function cleanup() { local pids=`jobs -p` if [[ "$pids" != "" ]]; then kill $pids >/dev/null 2>/dev/null fi } trap cleanup EXIT /path-to-your-app ``` Unfortunately, this does not solve the problem. Sending signals to child processes is not enough: the init process must also *wait* for child processes to terminate, before terminating itself. If the init process terminates prematurely then all children are terminated uncleanly by the kernel. So clearly a more sophisticated solution is required, but a full init system like Upstart, Systemd and SysV init are overkill for lightweight Docker containers. Luckily, Baseimage-docker has a solution for this. We have written a custom, lightweight init system especially for use within Docker containers. For the lack of a better name, we call this program my_init, a 350 line Python program with minimal resource usage. Several key features of my_init: - Reaps adopted child processes. - Executes subprocesses. - Waits until all subprocesses are terminated before terminating itself, but with a maximum timeout. - Logs activity to "docker logs". ### Will Docker solve this? Ideally, the PID 1 problem is solved natively by Docker. It would be great if Docker supplies some builtin init system that properly reaps adopted child processes. But as of January 2015, we are not aware of any effort by the Docker team to address this. This is not a criticism -- Docker is very ambitious, and I'm sure the Docker team has bigger things to worry about, such as further developing their orchestration tools. The PID 1 problem is very much solvable at the user level. So until Docker has officially solved this, we recommend people to solve this issue themselves, by using a proper init system that behaves as described above. ## Is this *really* such a problem? At this point, the problem might still sound hypothetical. If you've never seen any zombie processes in your container then you may be inclined to think that everything is all right. But the only way you can be sure that this problem never occurs, is when you have audited all your code, audited all your libraries' code, and audited all the code of the libraries that your libraries depend on. Unless you've done that, there *could* be a piece of code somewhere that spawns processes in such a way that they become zombies later on. You may be inclined to think, I've never seen it happen, so the chance is small. But Murphy's law states that when things *can* go wrong, they *will* go wrong. Apart from the fact that zombie processes hold kernel resources, zombie processes that don't go away can also interfere with software that check for the existence of processes. For example, the Phusion Passenger application server manages processes. It restarts processes when they crash. Crash detection is implemented by parsing the output of `ps` , and by sending a 0 signal to the process ID. Zombie processes are displayed in `ps` and respond to the 0 signal, so Phusion Passenger thinks the process is still alive even though it has terminated. And think about the trade off. 
To prevent problems with zombie processes from ever happening, all you have to do is to spend 5 minutes, either on using Baseimage-docker, or on importing our 350-line my_init init system into your container. The memory and disk overhead is minimal: only a couple of MB on disk and in memory to prevent Murphy's law. ## Conclusion So the PID 1 problem is something to be aware of. One way to solve it is by using Baseimage-docker. Is Baseimage-docker the only possible solution? Of course not. What Baseimage-docker aims to do is: - To make people aware of several important caveats and pitfalls of Docker containers. - To provide pre-created solutions that others can use, so that people do not have to reinvent solutions for these issues. This means that multiple solutions are possible, as long as they solve the issues that we describe. You are free to reimplement solutions in C, Go, Ruby or whatever. But why should you when we already have a perfectly fine solution? Maybe you do not want to use Ubuntu as a base image. Maybe you use CentOS. But that does not stop Baseimage-docker from being useful to you. For example, our `passenger_rpm_automation` project uses CentOS containers. We simply extracted Baseimage-docker's `my_init` and imported it there. So even if you do not use, or do not want to use, Baseimage-docker, take a good look at the issues we describe, and think about what you can do to solve them. Happy Dockering. **There is a part 2**: We will discuss the phenomenon that a lot of people associate Baseimage-docker with "fat containers". Baseimage-docker is not about fat containers at all, so what is it then? See Baseimage-docker, fat containers and "treating containers as VMs"
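To give a feel for the shape of the lightweight init the post describes (reap adopted children, forward the SIGTERM that `docker stop` sends, exit with the application's status), here is a rough Python sketch. It is an illustration under those assumptions, not the actual my_init, and it omits the shutdown timeout and the final wait-for-remaining-children pass that the post mentions.

```python
#!/usr/bin/env python3
"""Illustrative mini-init for a container: NOT the real my_init.

Run as PID 1 with the application command as arguments, e.g.:
    python3 mini_init.py /path-to-your-app --some-flag
"""
import os
import signal
import sys

def main(argv):
    if not argv:
        print("usage: mini_init.py <command> [args...]", file=sys.stderr)
        return 2

    child = os.fork()
    if child == 0:
        os.execvp(argv[0], argv)         # child becomes the application

    def forward(signum, frame):
        # Pass SIGTERM/SIGINT (e.g. from `docker stop`) on to the app.
        try:
            os.kill(child, signum)
        except ProcessLookupError:
            pass

    signal.signal(signal.SIGTERM, forward)
    signal.signal(signal.SIGINT, forward)

    exit_code = 0
    while True:
        try:
            pid, status = os.wait()      # reaps *any* child, adopted ones too
        except ChildProcessError:
            break                        # no children left to reap
        if pid == child:
            # Translate the wait status into a conventional exit code.
            if os.WIFEXITED(status):
                exit_code = os.WEXITSTATUS(status)
            else:
                exit_code = 128 + os.WTERMSIG(status)
            break
    return exit_code

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```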
true
true
true
When building Docker containers, you should be aware of the PID 1 zombie reaping problem. That problem can cause unexpected and obscure-looking issues when you least expect it. This article explains the PID 1 problem, explains how you can solve it, and presents a pre-built solution that you can use:
2024-10-13 00:00:00
2015-01-20 00:00:00
https://blog.phusion.nl/…2015/02/boat.jpg
article
phusion.nl
Phusion Blog
null
null
26,079,203
https://www.bbc.com/news/business-55997641
Brexit worse than feared, says JD Sports boss
null
# Brexit worse than feared, says JD Sports boss **The boss of one of Britain's big retailers says Brexit has turned out to be "considerably worse" than he feared.** Peter Cowgill, chairman of JD Sports, said the red tape and delays in shipping goods to mainland Europe meant "double-digit millions" in extra costs. He told the BBC JD Sports may open an EU-based distribution centre to ease the problems, which would mean creating jobs overseas and not in the UK. The government said it is providing businesses with support. Mr Cowgill's criticism echoes that from exporters and importers across the UK. New UK-EU trade rules came into operation on 1 January. But since then, there has been growing concern from businesses as diverse as seafood exporters from Scotland and food suppliers shipping products to Northern Ireland. Mr Cowgill told the BBC's World at One that there is no true free trade with the EU, because goods that JD Sports imports from East Asia incur tariffs when they go to its stores across Europe. He said: "I actually think it was not properly thought out. All the spin that was put on it about being free trade and free movement has not been the reality. "The new system and red tape just slows down efficiency. The freedom of movement and obstacles are quite difficult at the moment. I don't see that regulatory paperwork easing much in the short term," Mr Cowgill said. ## 'Bizarre' Opening a big warehouse distribution centre in mainland Europe "would make a lot of economic sense," he said. He estimated such a facility would employ about 1,000 people. While JD Sports' existing warehouse in Rochdale would not close, "it would mean the transfer of a number of jobs into Europe," Mr Cowgill said. He also warned that the UK needed a complete overhaul of business rates and rents if the High Street was to survive. "It is basic economics," he said. "Bricks and mortar retailing is becoming uneconomic." Mr Cowgill had particular criticism for the government's decision-making on forcing non-essential shops to close, while allowing essential shops to stay open. In reality, that meant supermarkets could sell clothes, while firms such as JD Sports had to shut. "Some essential retailers have been making hay out of selling clothes, whilst clothing retailers have been closed. It is bizarre," he said. The Cabinet Office said in a statement: "We know that some businesses are facing challenges with specific aspects of our new trading relationship, and that's why we are operating export helplines, running webinars with experts and offering businesses support via our network of 300 international trade advisers. 'We will ensure businesses get the support they need to trade effectively with Europe and to seize new opportunities as we strike trade deals with the world's fastest growing markets and explore our newfound regulatory freedoms." ## 'Erroneous' Last weekend, the Road Haulage Association (RHA) said exports to the EU had fallen as much as 68% since 1 January due to Brexit border hold-ups. But the government has said freight movements are now close to normal levels, despite the Covid-19 pandemic. On Monday, Cabinet Office Minister Michael Gove told a committee of MPs that the RHA's claims were "erroneous" and "based on a partial survey". He added that "truer figures" were published on the Cabinet Office website, adding that the port of Dover saw 90% of normal levels of traffic on Monday. 
Mr Gove acknowledged that traders faced issues with exports and imports to and from the EU, but said it was "important to put it in context".
true
true
true
Chairman Peter Cowgill says the new EU trade deal has meant "double-digit millions" in extra costs.
2024-10-13 00:00:00
2021-02-09 00:00:00
https://ichef.bbci.co.uk….sports.ox.g.jpg
reportagenewsarticle
bbc.com
BBC News
null
null
38,642,577
https://mail.openjdk.org/pipermail/amber-spec-experts/2023-December/003959.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
32,176,258
https://lwn.net/SubscriberLink/901744/7dfe1c82ab2f7059/
Leaving python-dev behind
Jake Edge
# Leaving python-dev behind It was not all that long ago that Python began its experiment with replacing one of its mailing lists with a forum on its Discourse discussion site. Over time, the Discourse instance has become more and more popular within the Python community. It would seem that another mailing list will soon be subsumed within Discourse as the Python steering council is planning to effectively retire the venerable python-dev mailing list soon. #### History Back in 2018, both Fedora and Python were experimenting with Discourse instances; both are quite active at this point. Discourse is an open-source web forum project that says it aims to "reimagine what a modern Internet discussion forum should be *today*, in a world of ubiquitous smartphones, tablets, Facebook, and Twitter". But Fedora and Python currently still have mailing lists as well. As part of the experiment for Python, the core-developer-only python-committers mailing list was switched to Discourse for a few months as a test. That was during the upheaval in the Python world that stemmed from Guido van Rossum's resignation as its benevolent dictator for life. The announcement of the experiment was deemed a bit of an overreach at the time, but the discussion of the new governance model for the language did largely happen in the Committers forum on Discourse. These days, the python-committers list still exists, but it mostly receives announcements and the like; some discussions still take place there, but most, it would seem, are done on the Discourse site. There are plenty of other forums on that site, including two that overlap the main mailing lists where development discussions take place. The Core Development forum overlaps the function of the python-dev mailing list, while the Ideas forum serves the same purpose as the python-ideas mailing list. Up until recently, there was no real indication that changes might be in the works, but an April query about PEP discussion that was posted to python-dev by Victor Stinner may have been the first real public indication that changes were afoot. He asked that new PEPs be announced on python-dev since he did not go to Discourse often and several PEPs of interest had slipped by without notice because they failed to be posted there. He also noted that it is "sometimes hard to keep track of everything happening around Python development" in part because of all of the different ways the project's developers communicate: The discussions are scattered between multiple communication channels: - Issues - Pull requests - python-dev - python-committers - (private) Discord - Discourse - (public) IRC #python-dev Sometimes, I [am] already confused by the same topic being discussed in two different Discord rooms :-) It's also common that some people discuss on the issue, and other people have a parallel discussion (about the same topic) on the related pull request. He noted that there are some in-person events, too, that make it even harder to keep up for those who cannot attend. Petr Viktorin replied that PEPs should be posted to python-dev, "but not necessarily right after they're published". They may be discussed elsewhere before coming to python-dev, for example.
The intent, Viktorin said, is that PEPs should only be submitted to the steering council, of which he is a member, "after all relevant discussion took place", which includes python-dev.

#### Choosing one

Jean Abou Samra noted that the split in the location for discussions is confusing to him, so he asked about any plans "to retire either Discourse or the mailing list and use a unified communication channel". Gregory P. Smith, who is also a steering council member, replied:

> We feel it too. We've been finding Discourse more useful from a community moderation and thread management point of view as well as offering markdown text and code rendering. Ideal for PEP discussions. Many of us expect python-dev to wind up obsoleted by Discourse as a result.

That led to some predictable grumbling about Discourse and the problems with following a web-based forum in comparison to a mailing list. That divide comes up whenever changes of this sort are announced or discussed, but newer developers generally seem to be uninterested in learning the "joys" of participating on mailing lists. In part, that is because the relevance of email as a communication mechanism has fallen almost completely off the radar for many. But the writing seems to be on the wall—for Python at least. As Christopher Barker put it:

> But if Discourse has been adopted, I guess it's time for us curmudgeons to bite the bullet and start monitoring it -- and frankly, this list (and python-ideas) should probably be retired, or turned into an announcement-only list -- having the current split is the worst option of all.

For discussions on the development of CPython, the python-dev mailing list has been the place to go for two decades or more. The mailing list page has archives going back to April 1999. In fact, one of the earliest messages archived is from Van Rossum asking whether the python-dev archives should be public or private. Obviously, "public" was the decision.

On July 15, Viktorin posted a message that would seem to be bringing that history to a close. On behalf of the council, he said:

> The discuss.python.org experiment has been going on for quite a while, and while the platform is not without its issues, we consider it a success. The Core Development category is busier than python-dev. According to staff, discuss.python.org is much easier to moderate. If you're following python-dev but not discuss.python.org, you're missing out. The Steering Council would like to switch from python-dev to discuss.python.org.

His message recognized that not everyone finds the Discourse forum software running at discuss.python.org to be easy to follow and use either; it contained several suggestions for alternate ways to interact with it, including "mailing-list mode". The message was also soliciting feedback on whether a permanent switch would "pose an undue burden to anyone". A final decision on the switch had not been made, so the council wants to ensure that it is "aware of all the impact". No one has really raised any concrete problems of that nature, though Barry Warsaw mentioned the possibility of "accessibility or native language concerns" for Discourse. He also noted that he supports moving to Discourse, which "might seem odd coming from me"; Warsaw is one of the lead developers of the GNU Mailman mailing-list manager system and was a big part of the Mailman 3 effort. "Discourse is not without its issues, but then again, the same can be said about email."
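One of those "alternate ways" is worth a concrete illustration. The sketch below is editorial rather than anything from Viktorin's message: it relies only on Discourse's standard convention of serving most pages as JSON when ".json" is appended to the URL, which makes it straightforward to follow (or archive) the forum from a script instead of the web UI.

```python
# A minimal sketch: list the newest topics on a Discourse instance via its
# JSON API. Appending ".json" to most Discourse URLs returns the same data
# the web UI renders; /latest.json is the standard "latest topics" listing.
import json
import urllib.request

BASE = "https://discuss.python.org"   # any Discourse instance works the same way


def latest_topics(base=BASE):
    with urllib.request.urlopen(f"{base}/latest.json") as resp:
        data = json.load(resp)
    # topic_list.topics is the shape this endpoint returns
    return data["topic_list"]["topics"]


if __name__ == "__main__":
    for topic in latest_topics():
        # Each topic entry carries its title, slug, id, and post count
        print(f'{topic["posts_count"]:4d}  {topic["title"]}')
        print(f'      {BASE}/t/{topic["slug"]}/{topic["id"]}')
```

For those who simply want messages pushed to them, Discourse also exposes RSS feeds (for example /latest.rss), alongside the mailing-list mode mentioned above.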
" While there were posts with the usual negative opinions of web forums versus email, they were fairly muted. To an extent, it would seem that there is a generational change going on in the Python community; the older developers are either adapting, perhaps via mailing-list mode, or kind of just bowing out. For those looking for more information, mailing-list mode is briefly mentioned in the "Following Python's Development" section of the Python Developer's Guide. But mailing-list mode is not able to disguise one problem that Discourse discussions have: no threading. Ethan Furman said: I follow each (sub)thread through to it's end, as it keeps a logical flow, but Discourse has everything linear which means that as I read it the conversation keeps jumping around, making it hard to follow. Warsaw agreed that the lack of threading is problematic, but that feature has fallen by the wayside in today's discussions: [...] I definitely prefer threaded discussions. Unfortunately though, much like top posting <wink>, I think that horse is out of the barn, what with other forums like GitHub being linear. As might be guessed, based on the "wink", Warsaw had top-posted his reply. Viktorin noted that he has just had to accept the linear nature of Discourse discussions; things have changed and there is likely no going back: [...] if python-dev was used by everyone, rather than almost exclusively by people who prefer e-mail (and presumably use threading mail clients), we'd get mangled threading anyway from all the non-threaded clients.I mean, I could grumble about threading and bottom-posting and plain-text messages and IRC all day, but realistically, I'm not likely to convince anyone who's not into those things already. That's where things stand at this point. It seems likely that a final decision to switch away from python-dev will be coming soon and that the venerable mailing list will be reconfigured sometime thereafter, "eventually switching to auto-reject incoming messages with a pointer to discuss.python.org ". That will be a sad day for some—and effectively a non-event for (many?) others. For those of us who cut our teeth on threaded, text-only, bottom-posted discussions, it is completely mind-boggling that Kids These Days (tm) do not see the advantages of such ... discourse—but that seems to be the way of things. Index entries for this article | | ---|---| Python | Development model | Posted Jul 20, 2022 16:17 UTC (Wed) by Posted Jul 20, 2022 16:42 UTC (Wed) by Posted Jul 21, 2022 2:07 UTC (Thu) by Posted Jul 20, 2022 17:05 UTC (Wed) by There is space for everything - but only insofar as it's not ephemeral. Posted Jul 20, 2022 17:15 UTC (Wed) by Posted Jul 20, 2022 17:53 UTC (Wed) by IRC logs are good and useful as long as they're kept - and can also be grepped, of course. Fora and discussion tools are significantly less long-lived: Slack/Mattermost/Matrix/Discord?? and that's not to think of various other services built on XMPP that were commercialised or have just vanished. Digital dark ages, anybody? Anything unthreaded - Heaven forbid. And no, there are some other areas where HTML and top-posting haven't caught on - supercomputing and the Beowulf list remains one of my favourites for focus and technical excellence. Posted Jul 20, 2022 19:06 UTC (Wed) by This is an important point. Mailing list message and IRC logs are really easy for people to archive on their own workstations, and they don't take up a ton of space by modern disk drive standards. 
My workplace, for instance, uses Slack, but I use it via an IRC gateway that logs everything. If I want to search for something, it is significantly faster for me to grep the logs on my IRC gateway box than to use Slack's search feature. I can also look for regexes or even do more complicated searches by whipping up a script. Posted Jul 20, 2022 20:19 UTC (Wed) by It wouldn't be terribly hard to build archiving tools meant to preserve conversations in a way that is useful offline and does not depend on the forum software itself. Posted Jul 21, 2022 2:07 UTC (Thu) by Discourse makes its content easily searchable; non-javascript users get a readable, full-text dump of all the content instead of dynamic scrolling, and that's the view that is presented to search engines and script-blocking users. It's fine! When I ported lots of Google+ communities into a new Discourse instance, search engines found the content very quickly and gave relevant search results. Google's bot has a relatively light impact on site load compared to the other bots (although Bingbot has improved recently), while still indexing new content quickly. Besides the ".json", just crawling with a non-browser User-Agent will produce the full, non-incremental-scrolling view, which makes it even easier to create an archive. The sitemap (just add sitemap.xml to the base forum URL for the complete sitemap) would be an easy place to start, other than XML being the standard for sitemap... Posted Jul 21, 2022 12:58 UTC (Thu) by OK, that's cool. I've never used Discourse, but it sounds like decent software. Posted Jul 22, 2022 1:24 UTC (Fri) by Posted Jul 22, 2022 5:28 UTC (Fri) by Discourse's mailing list mode is not perfect – the biggest issue is that it doesn't notify you when someone edits their post – but it's good enough for most purposes. (I wish there was a standard for editable email, for Discourse and similar software to use. It could just consist of a header that means "this email is a revision to the email with such-and-such Message-ID". Email clients supporting the standard would default to showing only the latest version of each email, but would have an option to show old versions. Old clients would just see all the revisions as separate messages.) Posted Jul 23, 2022 14:08 UTC (Sat) by Posted Jul 20, 2022 17:18 UTC (Wed) by I think email has gone the way of punch cards, and we are going through the usual 'planned' obsolescence of the older Techie generation. I remember multiple long lectures about how punch cards made for better programming because you had to think about what you were going to write maybe days in advance. I also thought to myself, why in the world would I ever want to spend days redoing punch cards because I decided to rewrite a routine when I could just open a terminal and type it out. I like email, I find it useful for my way of thinking. I also think its current form has an expiration date printed on it with not-so-future retirement stamped on it. Posted Jul 20, 2022 17:53 UTC (Wed) by Until a new fully-federated alternative is widely deployed, I don't see that happening, because email _still_ backstops, well, everything out there because it's the only communication channel everyone can rely on being there. And any upstart federated communication solution is going to have to deal with the same problems that has made email so 'horrible' -- inevitably requiring the equivalent of robust filtering tools, trustworthiness scores for senders, etc etc. 
Oh, and no advertising/engagement-driven algorithms deciding what you should or shouldn't see. Posted Jul 20, 2022 19:49 UTC (Wed) by > Until a new fully-federated alternative is widely deployed, I don't see that happening, because email _still_ backstops, well, everything out there because it's the only communication channel everyone can rely on being there. I used to buy into that but in the last 5 years I have seen most of our new accounts coming from or fronted by gmail.com and distant second microsoft.com. The long tail of email servers being run by sysadmins get eaten up constantly by 'lets just outsource this to ...' At a certain point, there is only an illusion of 'fully federated' with those of us trying to keep it going dieing off over time. And yes any new replacement will have to deal with all the problems that SMTP has solved.. but I have also come to the conclusion that the human brain enjoys reinventing the wheel more than learning why the last wheel failed. Nurse has come and told me its time for my apple sauce.. so I am going to yell at the kids on that side of the home's lawn now. Posted Jul 20, 2022 20:33 UTC (Wed) by You inadvertantly made a critical point -- with email, you actually can choose who/what hosts it, and at any time, suck your data out and switch over to someone else's solution [1] because ultimately it's tied to the domain, not the provider. [2] It's not entirely friction-free, but critically you don't need the cooperation of your existing provider. Try that with pretty much any other would-be replacement. [1] Unless you've been using Exchange. *shudder* Posted Jul 20, 2022 20:43 UTC (Wed) by It's not federated, but, honestly, I think domain-name-based federation is a half-assed approach to data portability anyway. Domain names are rented for a limited time, so keeping a mailing list (which is tied to a domain name) running requires a chain of custody for a centralized legal organization or individual person to pay that bill, which is not a good fit for ad-hoc open source communities made up of five people living in three different countries. Anyone that won't or can't set up their own domain name instead leases one on "someone else's land," and since the age of a mail service is a good predictor for how long it'll live, people pick already-established players because they hope the past predicts the future. This, of course, cements the dominance of already dominant players. In other words, I don't see the point of accepting all the complexity and the inertia introduced by "federation" if you aren't going to federate identities. Or, better yet, go for real P2P. Posted Jul 22, 2022 14:18 UTC (Fri) by Posted Jul 25, 2022 16:06 UTC (Mon) by Getting email right is in reality really difficult. Subtle things can topple the entire stack, how many can afford that with a thing they really use? "Not entirely friction-free" is an understatement in this context. I'm not saying it's not possible, it absolutely is. But the unfortunate nuance here is that it has been *made* difficult. Just just like people find using e-mail archaic at times, hosting it is that times ten for no good reason. I've heard "oh but it's because email is complex", but that phrasing I don't agree with. If we take one of the most basic things, TLS, we can really see the stark contrast. In the "web ecosystem" we've got multiple webservers with ACME clients built-in, in the email world there's one (maybe two) projects like that and they're not finished. 
Point being, now imagine a full mail server that can get its own TLS certificates, update its own DANE, MTA-STS, DKIM (ARC) or even A and SPF records. Totally doable, but there's a resistance or an attitude that hinders most improvements (QoL included). One can feel this resistance well once you do become a mailop. There are large email providers (e.g. Deutsche Telekom) that intentionally doesn't publish a SPF record, wow what "fun" it is to filter spam from @t-online.de. It's just one example of the mindset, there are many more and it has gotten us the monstrosity that is e-mail at this point in time. It could be much better, I really wish it were, but unfortunately right now I can only see monopolistic megacorporations really pushing things further. Posted Jul 25, 2022 9:54 UTC (Mon) by But both are US companies. And there is some trend in Europe on avoiding those due to GDPR, such as https://tutanota.com/blog/posts/dutch-schools-must-stop-u... or https://usefathom.com/blog/6monthsjail . I doubt it is going to grow fast or much (as the force that push to outsource are here to stay), but I also doubt Google will be able to fully comply with GDPR because a lot depend on the US government and their own business model. And from a purely political point of view, I think the war in Ukraine and the sanctions from USA (and Europe) showed to a lot of countries that you shouldn't count too much on external technologies, so the push to not use US based SaaS is here to stay. The trade war between USA and others countries is IMHO also fresh in people minds, and the Trump presidency reshaped perceptions of risk wrt USA for at least a few more years (assuming nothing egregious happen by 2024 on that front). I think that's also renewing interests in initiatives such as https://www.ngi.eu/ and I guess similar across the world (all those headlines on China, India pushing for their own risc-v processor, etc) Posted Jul 25, 2022 10:58 UTC (Mon) by My employer is a UK multinational (linkedin if you want to know who :-) and although we use Google Mail, I suspect it's physically and legally located in the EU or Britain. Certainly, BigQuery has both European and US data silos AND THE TWO CAN'T TALK TO EACH OTHER. Causes me some grief as the company is migrating its legacy silos (US-based) to European silos, and as my job involves using both datasets, it's not pleasant ... :-) Cheers, Posted Jul 23, 2022 18:21 UTC (Sat) by Yes, things like Matrix and Mastodon exist. But in the Real World, everyone is on Discord, Slack, Facebook, and Twitter, and this is only going to get worse as the proprietary services add features and optimize for "engagement." The new federation apps will slowly wither and die like their forefathers, and then someone will invent some new ones and the cycle will repeat again. I would really like to be wrong about this, but unfortunately this is what I think is going to happen to every federated app in the future. There's only room in the tech world for one federated protocol, and it's HTTP. Posted Jul 24, 2022 0:51 UTC (Sun) by No, take away email and the only remaining federated messaging protocol is bog-stock SMS. And maybe the postal system. HTTP is just a dumb data transport with a completely non-standard namespace (and no inherent semantic meaning for anything built on top of it) Posted Jul 24, 2022 6:44 UTC (Sun) by Pardon me, by the "tech world" I actually meant "the internet." 
> HTTP is just a dumb data transport with a completely non-standard namespace (and no inherent semantic meaning for anything built on top of it) Which is the reason it has endured. It's so unopinionated that it's easier to simply bend it to your will than to try and replace it. See for example the browser makers deciding they wanted more stuff in HTML, and forming the WHATWG when the W3C didn't want to play ball. But once you have such a malleable protocol, why would you need a second one? Posted Jul 20, 2022 21:17 UTC (Wed) by For that matter, it seems like it would be easy and wise to have the forum replicated on multiple sites running different hosting software by encoding messages using RFC822 style to transfer them. Posted Jul 20, 2022 17:12 UTC (Wed) by Posted Jul 21, 2022 22:37 UTC (Thu) by Posted Jul 20, 2022 19:57 UTC (Wed) by It effectively means "subscribe me to ALL OF THE LISTS". In fact, individually subscribing to a particular category or tag* is more like subscribing to an individual list. * this is a whole different discussion, but Fedora, we eventually decided to go category-light and focus on tags for organization, and, rather than arbitrary infinite tags, basically treat tags as if each is an individual team or subject mailing list conceptually. Posted Jul 20, 2022 20:27 UTC (Wed) by In Discourse, each "topic" — the flat un-threaded thing that other forum software generally calls a "thread" actually _does_ keep track of replies, and handles quoting nicely. It just keeps the display in a chronological order. It also has the concept of relationships _between topics_, and you can reply to a post in a topic _as a new topic_. (And, moderators can select posts — including "this post and its replies" — and move them to a new topic.) Having read, participated in, dealt with, and etc., many big long threaded discussions, for example, Fedora devel, I've gradually become convinced that while the threaded model feels organized and helpful, it is Actually Kind of Terrible in practice. It's easy to derail the whole thing, have multiple repeated conversations in different places, loop around, and end up with long branches that are just two people going back and forth. And sometimes those tangents (whether two people breaking something down, or entirely off-topic) are still valuable. Having them be linked topic is really better. Posted Jul 21, 2022 14:07 UTC (Thu) by Posted Jul 22, 2022 14:23 UTC (Fri) by Posted Jul 21, 2022 5:50 UTC (Thu) by Posted Jul 21, 2022 8:11 UTC (Thu) by That's really the wrong question. The real question is: why have Greg Beards (tm) so attached to email (resp. IRC) constantly and consistently dismissed its problems as not important and email as near perfect; just write a small procmail or patchwork script and "problem solved". Most people enjoy or at least don't mind threading and bottom-posting, however these advantages are not enough to outweigh email's issues. Email is not dying because people don't see its advantages, it's dying because they mind its drawbacks. You win by paying attention to your own flaws and working on them, not by obsessing about the competition's flaws. That's why email is dying: because its fans still try hard not to see what's wrong with it. > with other forums like GitHub being linear. Github _discussions_ (not issues) are threaded. The tab is not enabled by default, it's a project setting. Posted Jul 21, 2022 9:21 UTC (Thu) by Because what the youngsters forget is that change gets harder with age. 
I stick with email - it's what I learnt when I started out. And as I approach retirement age, I DON'T WANT to have to learn some new-fangled system that breaks my workflow, screws me over, and generally makes life a pain. And with a disabled wife and elderly in-laws, I see this even more strongly in them - in fact so much so that you can change the words "I don't want to" to "I can't". And I seriously mean CAN'T, not "don't want to"! Cheers, Posted Jul 21, 2022 10:39 UTC (Thu) by Posted Jul 21, 2022 11:52 UTC (Thu) by ... that's their problem. Posted Jul 21, 2022 12:00 UTC (Thu) by Getting to contribute to a free software project isn't a privilege. Having people contribute to your free software project is a privilege. If you fail to attract new contributors then that's very much not their problem, it's your problem. Posted Jul 21, 2022 12:03 UTC (Thu) by Cheers, Posted Jul 21, 2022 12:46 UTC (Thu) by Posted Jul 21, 2022 14:11 UTC (Thu) by Eh, calling receiving contributions a privilege is a pretty major stretch. A lack of 3rd-party contributors is pretty much entirely their problem, only they don't realize it yet. Posted Jul 21, 2022 12:33 UTC (Thu) by It's only their problem if they want to contribute to your project, but are put off by your antiquated communication methods. It becomes your problem if you want their contributions (e.g. because the current contributor pool is ageing out naturally, or because you believe that your project solves a problem better than any new attempt to solve it could), but they're not willing to work with you. So depends whether you want your project to outlive you or not; if it's OK for it to die with you, then it's their problem. If you want it to keep going long after you've become unable to operate a computer, you need to attract a new generation of contributors. Basically, choose your consequences: do you want the project to keep going or die off when your generation can't contribute any more? Posted Jul 21, 2022 14:12 UTC (Thu) by Long-standing projects have changed version control systems (in many cases multiple times), changed toolchains, changed language standards, and lots of other major 'breaking' changes which required contributors to adapt to the new thing. I think it'd be quite rare to find a popular (large contributor community) project that has existed for more than ten years which is still using all of the same tools and workflows as when it started with zero changes. Posted Jul 21, 2022 14:33 UTC (Thu) by I suspect because the old guard are like me: they have complex scripts and customizations that mean that an e-mail workflow suits them just fine - they've adapted mutt, written procmail scripts, maybe even their own milters, to make e-mail smooth. If you insisted that the old guard started from scratch with e-mail, with none of their customizations and just a free webmail account, they'd find it every bit as challenging as the new guard do. The difference is that when I started out, most online communication was via e-mail; sorting out a decent e-mail workflow for my personal life was essential to avoid having thousands of unread e-mails per day hit my inbox. Now, my personal life is almost entirely handled away from e-mail, via per-site notifications or apps, and e-mail (if used at all) just sends me an irregular prompt to check notifications if there's a site I've forgotten I use. 
And even better, GMail (at least) already has sort functionality that distinguishes those notification prompts from personal mail - so there's no need for me to learn how to write my own workflow rules for e-mail.

Posted Jul 28, 2022 15:03 UTC (Thu) by
You seem to take it for granted that one can reasonably customise the stream of ones incoming messages. With web-based fora no-one has any customisations because there it is an unreasonable amount of work to attempt anything and it risks breaking at the drop of hat. For all it's flaws, the one great thing with e-mail is that one has a reasonable-ish way to munge it so ones pains with e-mail are smoothed over. With web-based fora ? There is not much option: don't like it, don't contribute.
> sorting out a decent [...] workflow for my personal life was essential
I would say the above is still true, but the way things are going either you put up with whatever workflow is presented or you go live a reclusive life in a cave. Sounds more like a step backward to me, but I'm not exactly part of the "new guard"...

Posted Jul 28, 2022 15:32 UTC (Thu) by
The difference is that most web-based fora I use already support the sorts of workflows I want; if they don't, I don't participate (and a project that needs me, or people like me, but that uses a bad web app does not get my contributions in any form). E-mail's "bare" workflow is awful - one firehose of notifications, sorted by the time at which my receiving mail server got the mail. Nobody works directly with this and retains a useful level of sanity; everyone who uses e-mail seriously has built a workflow on top of this that works for them, with sorting, searching and filtering to remove junk, bring important things to the top of mind, and to move unimportant things off to one side for later handling.
The trouble is that outside of my open source work, I have literally no reason to build such a workflow atop e-mail any more; my notifications are sorted for me by the originating site, and I can apply a crude filter by simply not going to the "wrong" site; if I don't go to github.com, I don't see GitHub notifications. E-mail has been reduced to just a way to remind me to visit sites and pick up my notifications.
And this leads to trouble for e-mail based workflows. If I'm fully plugged into the current way of doing things, with different notifications on facebook.com, pinterest.com, github.com etc, and e-mail only set up to remind me which sites I haven't checked recently, moving to an e-mail based workflow requires a significant investment of time and effort. Worse, if I get involved but *don't* invest that time, my GMail (or whatever free mail provider I use) inbox fills with e-mail from lists and things that I simply don't care about - so I am forced to invest the time in making a suitable workflow if I want to contribute to a project that uses an e-mail based workflow.
This puts us in a difficult position; if you can get the newcomer to invest in sorting out an e-mail based workflow, *then* they will get value from this, because it's a workflow that's customized to their way of working. But because this is an up-front investment (before they're interested in working on your project), you're asking them to spend a lot of effort on "joining" the project when they may decide to leave shortly afterwards.
There are two routes to resolving this: Both of these are perfectly reasonable choices; the one that's not going to work is insisting on an e-mail based workflow, and then getting upset that your contributor pool is limited to the people willing to set up an e-mail based workflow. There are a few projects that are big enough to do that (Linux kernel comes to mind), but the majority are not - and it is unreasonable to complain that people aren't coming to your project while also keeping a big barrier to entry in their way. Choose one - a big barrier to entry (e-mail based workflows), or an expectation that people will contribute to your project.

Posted Jul 28, 2022 16:02 UTC (Thu) by
So: Thank You!
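As an editorial aside on this subthread: the kind of e-mail "workflow rule" being described, sorting list traffic away from personal mail, can be sketched in a few lines with Python's standard-library mailbox module. The file names below are hypothetical, and procmail or Sieve users would express the same rule declaratively rather than in code.

```python
# A minimal sketch of a list-sorting rule: copy messages from one mbox into
# per-list mboxes keyed on the List-Id header, leaving personal mail aside.
import mailbox
import re

INBOX = "Inbox.mbox"   # hypothetical input file


def list_id(msg):
    """Return a filesystem-friendly list name, or None for personal mail."""
    raw = msg.get("List-Id", "")
    match = re.search(r"<([^>]+)>", raw)
    return match.group(1).replace(".", "-") if match else None


def split_by_list(inbox_path=INBOX):
    for msg in mailbox.mbox(inbox_path):
        name = list_id(msg)
        target = mailbox.mbox(f"{name}.mbox" if name else "personal.mbox")
        target.lock()
        try:
            target.add(msg)      # copies the message; the input mbox is not modified
            target.flush()
        finally:
            target.unlock()


if __name__ == "__main__":
    split_by_list()
```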
Posted Jul 30, 2022 20:35 UTC (Sat) by Oh, and to work around the common flaws of all these sites they use… *drumroll* … email. So in the end you still depend on email. Posted Jul 31, 2022 12:30 UTC (Sun) by > So in the end you still depend on email. Even if it were true, it would still better than a mailing list that forces you to receive all discussions on it and filter them somehow. PS: I feel like I fed the troll. Posted Jul 31, 2022 12:37 UTC (Sun) by Quite ironic for the free-software community to rely on proprietary infrastructure like that. :( Posted Jul 31, 2022 13:22 UTC (Sun) by GitHub is proprietary. Gitlab CE (Community Edition) which is what a lot of the free software projects like say GNOME use isn't proprietary and is hosted in their own community infrastructure. Posted Jul 31, 2022 14:47 UTC (Sun) by Ah but you can't just do a "drive-by" contribution with the "community infrastructure" -- you'll still need to register for an account at minimum. Posted Jul 31, 2022 14:49 UTC (Sun) by Posted Jul 31, 2022 15:04 UTC (Sun) by Requiring me to create an account is pretty much guaranteed to make me walk away. I don't want loads of accounts everywhere. And what happens if I make a drive-by, and six months later want to make another, and have forgotten my account details? Unless, of course, I use the same account details for all of them ... Cheers, Posted Jul 31, 2022 18:18 UTC (Sun) by Posted Jul 31, 2022 19:08 UTC (Sun) by By the latter I don't just mean "newbs can understand it" I also mean "the time needed to log in, compared to memorizing and manually-typing the password, is similar or less" - inconvenient security measures become unused security measures. Bitwarden is about as user-friendly as it gets, and it's about as close to trustworthy as it could possibly be for something that runs on other people's computers. So close yet so far. Posted Aug 1, 2022 13:34 UTC (Mon) by My passwords are 80+ characters (as long as it's accepted :/ ), so the password manager is faster :) . > Bitwarden is about as user-friendly as it gets, and it's about as close to trustworthy as it could possibly be for something that runs on other people's computers. I just have WebDAV access hooked up where needed for my keepass database. Works well enough for me. But I also know I'm not "everyone". Posted Aug 1, 2022 16:33 UTC (Mon) by Unless there's something else that disqualifies it for you? Posted Aug 1, 2022 17:11 UTC (Mon) by Posted Aug 1, 2022 18:16 UTC (Mon) by This setup is surely not for everyone. I'm very happy with it, though. Posted Aug 1, 2022 22:26 UTC (Mon) by As for "ctrl+shift+a", that instead invokes a Firefox add-on instead of some desktop-global shortcut. It has enough information to be accurate about the website at least. Posted Aug 2, 2022 1:31 UTC (Tue) by Posted Aug 4, 2022 8:41 UTC (Thu) by Posted Aug 1, 2022 14:04 UTC (Mon) by Which must be why mailing lists chose to ignore it. Posted Aug 2, 2022 20:30 UTC (Tue) by Now, it's fair to say that this is because Exchange and Outlook are broken email clients that are unsuitable for doing any real kind of communication, but that doesn't really help, fixing it usually means allowing non-outlooks clients that lack some security features ("insufficient" spam filter, no platform attestation, etc), and that becomes such a hard sell that it's not worth it for all but the most important projects (like linux itself). It's a sorry state of affairs, to be honest. 
Judging from the recent linux foundation effort to provide such email accounts for people in my situation I doubt my situation is unique. Posted Aug 2, 2022 23:14 UTC (Tue) by So email is fragmented too and the otherwise supersmart and highly respected people who buried their head in the sand and kept claiming email is perfect are partly responsible for that sad state of affairs. Posted Aug 3, 2022 0:55 UTC (Wed) by That's not fair; it's more accurate to say that the rest of the world moved on/away, perpetually chasing after the latest shiny. No matter how "supersmart" someone is, email is a _service_ and that costs money to provide. Free webmail pretty much killed the ability to charge for non-corporate email (with the final blow delivered by still-don't-be-evil Google) which also destroyed the ability to meaningfully advance the protocol stack -- it didn't help that the 800lb gorilla (ie Microsoft/Outlook) was actively crapping all over the place. So instead, email is being replaced with a hundred different purpose-specific silos, each trying to monopolize their users' attention, mostly paid for through advertising/datamining -- which further reinforces this balkanization as the focus becomes growth for growth's sake, which requires user lock-in. Dropbox is a really good example of this, but so is instant messaging. Until non-greybeards actually start caring about truly owning their own data, nothing will change -- because again, we're competing with "free". Posted Aug 3, 2022 8:43 UTC (Wed) by > That's not fair; it's more accurate to say that the rest of the world moved on/away, perpetually chasing after the latest shiny. I wrote "partly" - you gave a great summary of the rest. The open-source community can and has been pioneering amazing things without much business support but this required great leadership. It does not happen when leaders claim that everything is fine, there is no problem and nothing to fix. For instance I'm amazed at how little publicity gname.org ever got. It was an amazing service that you could only discover... by chance. I'm not saying it was the answer to all email problems but it was at least successfully solving a number of them. Posted Aug 3, 2022 10:23 UTC (Wed) by We need to think outside the box and - rather than some micropayment system - see if we can come up with a "you scratch my itch, I'll scratch yours" system. Like the Linux Foundation. Like some sort of trade association. Where small groups can come together and seriously push the Open Source "many eyes" and fast response advantages. The problem is getting enough people to buy in at the start to get it off the ground. Cheers, Posted Jul 31, 2022 15:13 UTC (Sun) by It takes like two seconds (I know because I just did it right now) to "register an account" in say, https://gitlab.gnome.org/ because it allows you to reuse among other things accounts you may have in github/gitlab.com. So I don't find this a barrier to entry for drive by contributors. Posted Aug 4, 2022 8:43 UTC (Thu) by Fortunately these days the only ones that are left outstanding are large enough that they'll probably be around forever. Posted Aug 4, 2022 13:13 UTC (Thu) by > Fortunately these days the only ones that are left outstanding are large enough that they'll probably be around forever. The question is rather, how long will the identity provider keep your account around. Posted Jul 29, 2022 20:20 UTC (Fri) by This is of course highly depending on the tool but generally wrong with the decent ones. 
Quoting your other comment at https://lwn.net/Articles/902832/ Yes, this is it, you got it. This is exactly the "customization" feature, it's not a bug: in just a few clicks you can subscribe to only the specific threads that you're interested in and ignore all the rest of the project activity. Or choose to be notified of everything, it's all up to you. Adjusts to both drive-by contributors and maintainers, no need to be either "in" or "out" of the mailing list. I don't understand why so many email fans scripting their way out of a today's firehoses of information overload fail to see why many people love this. Of course one of the notifications possibilities is... email (there are others). Hardcore email fans keep talking about "web-based tools" like it's a thing but the web is just a user interface, it's not the tool. Granted it's the main and most common and most developed interface but unlike email it's still just an interface. For instance some people interact with Githab directly from their editor on a routine basis. Funny enough if you feel to limited by Githab's default filtering options then you can simply request to be notified of everything by.... email and then fall back into some customizable-to-death workflow. The fact that email fans with computer science degrees can't seem to make the difference between the data versus the notifications (because email does not) even when they look at alternatives shows how detached from the rest of the world they've become. Posted Aug 1, 2022 17:15 UTC (Mon) by As a coincidence I was just made aware of this: https://github.com/wandersoncferreira/code-review (Github code reviews from Emacs). So yes: the web is of course the most common and best supported interface (not just for software development) but unlike email the web is not a hard requirement of all the misnamed "web-based tools". This level of "forge ignorance" can be depressing. I know some vim users who write kernel code without having ever tried fugitive or anything like it so there is clearly a long way to go. Wake up and smell the coffee. Posted Aug 1, 2022 18:25 UTC (Mon) by One thing that "web-based" tools often get right (not always, but often enough to be useful) is to be split into a backend API and a HTML + CSS + JS frontend. You can thus easily build tools like the one you linked that talk directly to the backend API, bypassing the web front end completely. This is something that's far harder to build with e-mail, since e-mail is fundamentally designed to send messages between humans, where HTTP is designed for machines to talk to each other. Posted Aug 1, 2022 22:25 UTC (Mon) by Posted Jul 21, 2022 22:41 UTC (Thu) by Posted Jul 26, 2022 2:32 UTC (Tue) by With mailing lists I can reply to a mail and loop in other people or groups. Even if those people or groups belong to different organisations. If i'm discussing an issue in a Debian context and I feel it could use input from upstream I can simply forward the mail CC them and then upstream can participate in the discussion like anyone else. But increasingly there is no upstream mailing list to post to, so the only options are to either forward to specific individuals (often righly seen as rude) or learn to use whatever discussion service upstream uses. Having done so one is likely to have to manually proxy information back and forth between upstream and downstream. Posted Jul 21, 2022 14:22 UTC (Thu) by That's ultimately up to the new generation. I've received *one* non-trivial contribution in the past fifteen years. 
Any new workflow has to yield benefits for the _current_ contributors, not mythical unicorn developers that will probably never materialize. Because the hard truth is that there are _not_ hordes of skilled-and-willing developers out there who would _love_ to contribute to my projects, but are turned off by using email instead of whatsapp or FB messenger. Posted Jul 21, 2022 16:13 UTC (Thu) by And your attitude as expressed here is perfectly reasonable; if you don't mind the "new guard" bypassing your project entirely and doing their own thing in the same space, that's great. But it is unreasonable to simultaneously complain that people are working on a new thing in the same space as your thing, and also refuse to change the way you work to make working on your thing attractive to people who work on the new thing. Make your choice: do you want to minimise change for existing contributors, and accept that future contributors (if any) might instead start a fork or new thing, or do you want to adopt changes that increase the chance of future contributors and risk losing existing contributors? Both of those are reasonable choices to make - it sounds like you've chosen to risk losing future contributors to a fork or a new thing instead of changing workflow, and that is absolutely A-OK; however, you then can't complain that the new generation aren't interested in working with you when they do their own thing (and I can find no evidence that you've complained about the new generation not supporting your project, which implies that you're making reasonable choices). Posted Jul 25, 2022 6:52 UTC (Mon) by Which may or may not be directly relate to the type of project which begs the question which project would that be? In the end of the day it all boils down to contributors doing CBA's. > Any new workflow has to yield benefits for the _current_ contributors, not mythical unicorn developers that will probably never materialize. Right you always want to cater to the core volunteers. Posted Jul 25, 2022 13:32 UTC (Mon) by Oh, it's absolutely due to the type of project -- it started out as reverse-engineering a family of dye-sublimation photo printers so they could be useful without a Windows PC, and was eventually mostly subsumed into gutenprint, which itself has only four semi-active contributors over the past decade, and a nearly entirely technically clueless userbase. Meaningfully contributing requires a pretty niche set of skills, or at least a lot of motivation. Oddly enough, said technically clueless userbase (mostly on MacOS, FWIW) manages to ask for help just fine using the email or discussion forums we (==sourceforge) provide. We provided github/gitlab mirrors and to date there's been *one* public fork. There simply aren't folks interested in contributing, via _any_ means. The other project I'm actively involved with is also hardware-centric, and that requires even more niche skills with an even steeper learning curve to contribute meaningfully. Most of the userbase is also technically clueless, but even those that are motivated enough to try an contribute find the realities of hard-realtime memory-constrained bare-metal programming on hardware that mostly lacks specifications (and a "core" codebase of several hundred thousand LOC) too much to wrap their heads around. Posted Jul 21, 2022 12:02 UTC (Thu) by Maybe. And I take your point entirely - I do try and drive change forward, BUT. I *always* *try* to make sure old and new can run in parallel. 
Otherwise, if the stewardship changes, you run the risk of the young bloods being the lunatics running the madhouse ... When llvm moved to discourse, I didn't bother to move with it. Is there any way you can communicate with discourse as if it were a mailing list? I don't remember being pointed at anything like that ... Two personal experiences which show clearly the problems with running with the new and/or not running with it ... My wife is an ex-Guider. Recently Girlguiding UK went very much "all admin is on-line, all the girls' progress must be tracked on-line, blah blah blah". I don't know how many new rainbows, brownies and guides it attracted - probably very few. What it DID achieve - in an organisation with not enough volunteers - was to ensure that pretty much all the over-50 leaders just walked out ... We're heavily involved (as volunteers) with Parkinsons UK. We're also desperate to hand over our roles to a younger generation. But yet again, a lot of changes - computerised changes - are making our life more difficult. (The main problem is an octogenarian losing her marbles, but head office don't help ...) Given that so many of the people we (try to) help can't even cope with smart-phones, let alone anything fancier, head-office running headlong into change really doesn't do anybody any favours. And I'm well aware that a lot of this is being driven by legal requirements, but when you start telling your volunteers what to do, and you get push back saying it's difficult or impossible, it's NOT wise to push ahead regardless. And for the most part, European law is very tolerant of people who do their best. Cheers, Posted Jul 21, 2022 13:59 UTC (Thu) by Personally, I find the "I demand *you* perform additional work for *me*" attitude rather off-putting, especially as I've grown older and have learned to recognize it as an invariable sign of abusive and toxic relationships. Posted Jul 21, 2022 14:16 UTC (Thu) by No - it's more nuanced. If the old guard want the new guard to join their project, rather than starting a new one in the same space or ignoring the problem completely, then the old guard need to make their project attractive to the new guard. If the old guard don't do that, and then demand that the new guard work on their project rather than starting a new project in the same space, or forking the old project and changing communication methods etc, they are demanding that the new guard do additional work for the old guard. And it's then no surprise to an outside observer that firstly the new guard go off and do their own thing instead of working on the old guards' projects, and secondly that institutional knowledge held by the old guard is lost, because they've driven off the very people who might learn from them. Posted Jul 21, 2022 14:26 UTC (Thu) by "attractive" meaning something more than "something I consider important / useful and I'd be put in a hard spot should it go away" ? Otherwise you're basically saying that oranges would be more popular with folks that like bananas if they'd only change themselves into bananas first. Posted Jul 21, 2022 15:04 UTC (Thu) by Attractive meaning "the new guard want to contribute to the existing project, rather than forking it or starting a new project to solve the same problem". 
If the new guard's reaction to the old project being at risk of going away is to start their own thing, that's cool - but the old guard can't both complain that people are starting new projects in "their" space, It's the combination of "I will not change my project to attract your contributions, but I will expect you to contribute to my project rather than start your own or go to a competing project" that's not OK. You have a choice: Both of those choices are reasonable positions, and you may even prefer it if the new guard bypass you and build their own thing instead of contributing to your thing. Just be aware that if you're saying "we are firmly oranges here", you can't then demand that people who like bananas choose oranges instead to suit your desires. Posted Jul 21, 2022 15:15 UTC (Thu) by How often does this actually happen, though? (Especially in a way where both old/new continue on in competition instead of one rapidly dying?) What I see all the time is "competing" projects being driven by fundamentals such as the chosen license or implementation language. Posted Jul 21, 2022 15:29 UTC (Thu) by See the complaints about uutils in https://lwn.net/Articles/857599/ for an example; there was also similar complaining about busybox in the past, since it distracted people from making the GNU utilities smaller. Or the complaints about the "new guard" working on Wayland instead of X11. There's plenty of people willing to complain about other people's use of time :-( Posted Jul 21, 2022 15:51 UTC (Thu) by The quotes make me think you're aware of this…but I thought that the Wayland developers *are* (heavily overlapped with) the X11 developers. Posted Jul 21, 2022 16:25 UTC (Thu) by Indeed, which makes the complaints about people choosing to work on Wayland instead of X11 even less reasonable - in many senses, Wayland Posted Jul 21, 2022 17:47 UTC (Thu) by But yes, for many years X11 was effectively one developer, who has thrown his weight behind Wayland, and I think X11 long ago went into security maintenance cum Wayland compatibility mode. Cheers, Posted Jul 21, 2022 13:24 UTC (Thu) by There's also the fact that most people are probably using email through a web interface or an IMAP client on their phone where most of the advantages of email that people talk about simply don't exist. I'm a fan of the advantages of email, but if your experience of email is worse than a Discourse site, I can see why people are moving away from email. (I'd discuss who I'd blame, but there's very little point.) Posted Jul 23, 2022 6:41 UTC (Sat) by This is the pervading thought for me when I'm reading these conversations. Nobody ever ported any of the advantages of email over to new form factors. I love threading and threaded UIs, but I haven't seen one that looks good on my phone. None of the email clients on my phone can properly render patch text from plain text emails. I really am sad by some of what has been lost on the "new internet" and in the new way of building UIs. In many cases they're genuinely not as well-featured and less accessible. At the same time, nobody really took the time to port those features into modern systems and form factors. I know that Alan Kay was sad that Steve Jobs dropped the stylus from his touch designs, but he did, so we have to design for fat-fingers and not just for fast two-handed typists if you we want a paradigm to matter for the ubiquitous devices of the day. 
Posted Jul 25, 2022 3:15 UTC (Mon) by Big surprise, because there's a lot more fat fingers than fast two-handed typists in the world. If you want ubiquitous, you gotta design for what's ubiquitous. Posted Jul 30, 2022 20:57 UTC (Sat) by Posted Jul 23, 2022 13:49 UTC (Sat) by That throughout the decades nobody has really come up with a simple, widely accepted, and effective way to verify users, senders, and verify the contents of emails. Spam companies make millions from mitigating the spam problem rather then fixing it. Google and other indexers Web-UI providers make even more based on fact that few people are really interested or able to deal with email themselves, even as businesses. And that open source projects for email services and clients ground to a pretty much a halt around the same time that Gmail started offering 1Gb of storage. I have always hated email for these reasons and more. And while I understand that many people have spent weeks carefully crafting their own personal setup years ago and haven't touched it since... there is very little tolerance for that sort of thing among people today. And for good reason. This is why the alternative for discourse isn't email. It's software like facebook groups, github, and discord. When examined in that light it is pretty obvious that discourse is the plain superior option. If Python is successful then this should provide a template for other projects and groups to follow to hopefully claw and drag our way out of the abyss that is proprietary walled gardens. Posted Jul 23, 2022 15:19 UTC (Sat) by Except for the whole "every silo has to be independently polled, with an independent client that makes automation very difficult (if not outright impossible), making it impossible to scale" problem. Because its in the interest of each of those silos to make everything happen inside said silos, so users can be mined and monetized and "engaged" all way into the sewer. Sure, the "new way" has made a much lower barrier to entry for the masses (and that's genuinely good!) but no matter how useful a ladder is for getting to the roof of your two-storey house, it's just not going to work for getting to the top of a 40-floor building! At the end of a day, email is not even a service, but a "protocol" -- but most people only see it as an "app". </garumph> Posted Jul 23, 2022 17:04 UTC (Sat) by I think Linus Torvalds once answered in some interview "I actually don't really care what Windows does, it's not that interesting". Of course that's too extreme because you do want to "steal" good ideas from your competition, (as opposed to ranting about how it sucks), and this is what open-source has been doing since forever - except for email. However that sentence makes a good point: focus on the engineering of _your_ "product"; on what you want to win. There's a strange emotional attachment that develops when you start mastering a complex tool. In a former life I wrote git scripts to help me backport thousands of patches. That and others things cause me to regularly marvel at git's power and flexibility. Fortunately, I'm brought back down to earth every time someone new to git tries to do something simple and asks me for help. If you never get back down to earth then you get complacent: "what email problems?" "Simple things should be simple, complex things should be possible." We've all seen people in these discussions saying things like "Good, email keeps noobs away from the Linux kernel". This is both sad and funny (and fortunately rare). 
This is funny because it is almost admitting that there are some problems but no wait, these are not problems they're actually a test! It's by design! Hilarious. Asking "noobs" to pass an "email test" instead of a coding test so they are immediately ready to deal with the high volume of spam and noob contributions that streched maintainers must be able to filter with when... using email? OK, I can see a little bit of recursive logic here :-) Posted Jul 24, 2022 0:45 UTC (Sun) by Is it so strange? Literally every field of human endeavour has a notion of "expertise" that is usually celebrated [1], and part of that mastery is learning to effectively utilize the tools of that field. What makes this field so special that what's "good enough" for novices is expected (nay, demanded) to be enough for everyone else as well? Information management and effective communication are more strategic than ever these days. But instead, we're intentionally crippling ourselves (and handing our entire digital identities over to $bigcorp [2]) because we can't be bothered. [1] modern politics notwithstanding Posted Jul 24, 2022 17:30 UTC (Sun) by Not 100% sure what particular solution you're referring to but good tools should always make "simple things simple and complex things possible". The fact is that most people find it very convenient[*] to submit a one-line, typo fix on Githab and then walk away but very time-consuming when the code review requires finding and configuring properly an email client that supports plain text, bottom-posting and what not. These requirements are dying much faster than what's left of email. [*] and yes that convenience comes at a price, we know. > But instead, we're intentionally crippling ourselves (and handing our entire digital identities over to $bigcorp [2]) because we can't be bothered Yes, we know. We've read that a million times. And some. Yet another change of subject. What alternative to $bigcorp have you contributed to? Did you help understand why email is dying or even help fix it? Have you explored or contributed to any new alternative, can you recommend any? I did absolutely nothing to help fix this particular problem but I'm not complaining that people are too stupid to leave email (a.k.a. "blame the user") in every discussion either. People who complain that Windows is evil (tm) generally contribute to Linux or some alternative. What makes email so special that most people who keep complaining about its competition do absolutely nothing to fix it, not even acknowledging the reasons why it's dying and keep claiming that people are just too stupid to see how good it is? Posted Jul 25, 2022 3:46 UTC (Mon) by I don't actually have an inherent problem with $bigcorp providing services; what I take issue with is intentional lock-in that tends to come along with it. And nothing locks you in like using $bigcorp's identity services tied to a domain you don't control. But you are right, the battle has been over for two decades. Google giving away a gig of storage for free was the beginning of the end. You can't compete with services by providing software. And when you're providing a service, you can't compete with free unless you have some other way of making money -- such as selling (indirect) access to the data you've mined. which in turn is only possible if you're operating at a sufficiently massive scale to where micropennies finally pile up. 
And as I mentioned earlier, most folks don't see anything beyond the "app" -- I'm not asked for my email address; instead I'm asked 'gimme your gmail' or something like that. (What do I do? I'm self-hosting, and made minor contributions to various federated systems. I advocate the importance of owning your own digital identity, and that starts with domain names. And that's about all I can do, because at the end of the day, you're still in competition with "free") Posted Jul 25, 2022 6:13 UTC (Mon) by I say this as someone equally guilty of this kind of feeling sometimes... if I had to climb the stairs to the top to master it, everyone else should have to do the same. If newcomers can just take the escalator instead of the stairs, that effort I put in was just superfluous and I don't get to feel as proud of it. We all know email is a terrible medium. I haven't tried things like Discourse, I don't read any mailing lists or contribute much to OSS nowadays. But the centre of gravity has moved, and just as when I started I had to join mailing lists, now you have to join something else. It's still the same bazaar, with a different entry path. Posted Jul 22, 2022 7:27 UTC (Fri) by Wait, there's at least one guy actually trying to do something: https://lars.ingebrigtsen.no/2020/01/06/whatever-happened... On his "spare time" when has any. Amazing results (when it's not down) considering it's a single-handed effort but never heard of anyone else. Everyone else seems to be just whining about the email competition: "single point of failure", "closed-source", "incompatible", "lost privacy", "slow and bloated", "no threading", "customer lock-in", "evil BigTech",... Mostly correct and valid but neither fixing email nor offering vague idea of any solution or alternative. There's seems to be more people trying to fix IRC: https://bnc4free.com/?page_id=26 https://quassel-irc.org/about > why have Greg Beards (tm) I _swear_ that typo was unintentional. Posted Jul 25, 2022 12:58 UTC (Mon) by I will try to give a possible answer to your question about why have Greybeards tend to try and pick apart the problems of the replacement tools versus fixing the underlying tool. 1. There is a lot of 'I have spent a lot of time building my own infrastructure to deal with this crap' so there is the mastery item you have brought up later. So instead, it is easier to go find one or two lines in someone pointing out problems and then disect those to show all the logical fallacies and prove to ourselves it isn't worth spending time on. [Basically the brain has a lot of feedback loops to make this the preferred action as we age.. we are trying to maintain a safe environment we know while fixing things or moving to a different environment trigger all this fear and anger.] Posted Jul 28, 2022 15:20 UTC (Thu) by Maybe one should ask "why are Grey Beards so attached to having discourse ?". I sometimes wonder if the youngsters crawling github and such are even interested in have any discussions at all ? My experience browsing github is that most projects there have exactly no advertised discussion fora. Not github discussions, no mailing list, nor anything else. The only things available are 'issues' (oft abused by users as discussion only to be told 'go away' after a while) and 'pull requests'. Ofcourse this is all anecdotal, but still says something. 
Posted Jul 29, 2022 20:20 UTC (Fri) by Posted Jul 21, 2022 17:25 UTC (Thu) by I want my Pipermail back (plus reporting of Message-ID headers in the archive, like many patched locally, and ideally mbox downloads, ideally also of individual messages). Posted Jul 22, 2022 2:49 UTC (Fri) by Posted Jul 22, 2022 12:24 UTC (Fri) by ## Leaving python-dev behind **q3cpma** (subscriber, #120859) [Link] (25 responses) ## Civility **michaelkjohnson** (subscriber, #41438) [Link] (1 responses) ## Civility **orib** (subscriber, #62051) [Link] ## Leaving python-dev behind **amacater** (subscriber, #790) [Link] (22 responses) ## Leaving python-dev behind **NYKevin** (subscriber, #129325) [Link] (8 responses) ## Leaving python-dev behind **amacater** (subscriber, #790) [Link] (7 responses) ## Disadvantage of modern forum/chat software **dskoll** (subscriber, #1630) [Link] (6 responses) ## Disadvantage of modern forum/chat software **mattdm** (subscriber, #18) [Link] (5 responses) ## Discourse discoverability and archivability **michaelkjohnson** (subscriber, #41438) [Link] (1 responses) ## Discourse discoverability and archivability **dskoll** (subscriber, #1630) [Link] ## Disadvantage of modern forum/chat software **mm7323** (subscriber, #87386) [Link] ## Disadvantage of modern forum/chat software **comex** (subscriber, #71521) [Link] (1 responses) ## Disadvantage of modern forum/chat software **anton** (subscriber, #25547) [Link] (I wish there was a standard for editable email, for Discourse and similar software to use. It could just consist of a header that means "this email is a revision to the email with such-and-such Message-ID". For Usenet there is "Supersedes: <old-message-id>". However, AFAIK NNTP servers honoring Supersedes: don't give the old message to the user, and I am not aware of NNTP clients having a functionality like you desire. ## Leaving python-dev behind **smoogen** (subscriber, #97) [Link] (11 responses) ## Leaving python-dev behind **pizza** (subscriber, #46) [Link] (10 responses) ## Leaving python-dev behind **smoogen** (subscriber, #97) [Link] (6 responses) ## Leaving python-dev behind **pizza** (subscriber, #46) [Link] (3 responses) [2] As long as you own your own domain. Which any organization that matters will. ## Leaving python-dev behind **notriddle** (subscriber, #130608) [Link] (1 responses) ## Leaving python-dev behind **mattdm** (subscriber, #18) [Link] ## Leaving python-dev behind **Avamander** (guest, #152359) [Link] ## Leaving python-dev behind **misc** (guest, #73730) [Link] (1 responses) > microsoft.com ## Leaving python-dev behind **Wol** (subscriber, #4433) [Link] Wol ## Leaving python-dev behind **NYKevin** (subscriber, #129325) [Link] (2 responses) ## Leaving python-dev behind **pizza** (subscriber, #46) [Link] (1 responses) ## Leaving python-dev behind **NYKevin** (subscriber, #129325) [Link] ## Leaving python-dev behind **iabervon** (subscriber, #722) [Link] ## Leaving python-dev behind **q_q_p_p** (guest, #131113) [Link] (1 responses) ## Leaving python-dev behind **hodgestar** (subscriber, #90918) [Link] ## A note on "mailing list mode" **mattdm** (subscriber, #18) [Link] ## On threading... **mattdm** (subscriber, #18) [Link] (2 responses) ## On threading... **kpfleming** (subscriber, #23250) [Link] (1 responses) ## On threading... 
**mattdm** (subscriber, #18) [Link] ## Leaving python-dev behind **pabs** (subscriber, #43278) [Link] ## Leaving python-dev behind **marcH** (subscriber, #57642) [Link] (73 responses) ## Leaving python-dev behind **Wol** (subscriber, #4433) [Link] (57 responses) Wol ## Leaving python-dev behind **mjg59** (subscriber, #23239) [Link] (56 responses) ## Leaving python-dev behind **mmirate** (subscriber, #143985) [Link] (45 responses) ## Leaving python-dev behind **mjg59** (subscriber, #23239) [Link] (3 responses) ## Leaving python-dev behind **Wol** (subscriber, #4433) [Link] Wol ## Leaving python-dev behind **mmirate** (subscriber, #143985) [Link] ## Leaving python-dev behind **pizza** (subscriber, #46) [Link] ## Leaving python-dev behind **farnz** (subscriber, #17727) [Link] (40 responses) ## Leaving python-dev behind **kpfleming** (subscriber, #23250) [Link] (35 responses) ## Leaving python-dev behind **farnz** (subscriber, #17727) [Link] (32 responses) ## Leaving python-dev behind **Vipketsh** (guest, #134480) [Link] (31 responses) ## Leaving python-dev behind **farnz** (subscriber, #17727) [Link] (26 responses) *don't* invest that time, my GMail (or whatever free mail provider I use) inbox fills with e-mail from lists and things that I simply don't care about - so I am forced to invest the time in making a suitable workflow if I want to contribute to a project that uses an e-mail based workflow. *then* they will get value from this, because it's a workflow that's customized to their way of working. But because this is an up-front investment (before they're interested in working on your project), you're asking them to spend a lot of effort on "joining" the project when they may decide to leave shortly afterwards. ## Leaving python-dev behind **Vipketsh** (guest, #134480) [Link] ## Leaving python-dev behind **JanC_** (guest, #34940) [Link] (24 responses) ## Leaving python-dev behind **marcH** (subscriber, #57642) [Link] (23 responses) ## Leaving python-dev behind **mmirate** (subscriber, #143985) [Link] (22 responses) ## Leaving python-dev behind **rahulsundaram** (subscriber, #21946) [Link] (21 responses) ## Leaving python-dev behind **pizza** (subscriber, #46) [Link] (20 responses) ## Leaving python-dev behind **mpr22** (subscriber, #60784) [Link] (16 responses) ## Leaving python-dev behind **Wol** (subscriber, #4433) [Link] (15 responses) Wol ## Leaving python-dev behind **mathstuf** (subscriber, #69389) [Link] (8 responses) ## Leaving python-dev behind **mmirate** (subscriber, #143985) [Link] (7 responses) ## Leaving python-dev behind **mathstuf** (subscriber, #69389) [Link] ## Leaving python-dev behind **kleptog** (subscriber, #1183) [Link] (5 responses) ## Leaving python-dev behind **mmirate** (subscriber, #143985) [Link] (4 responses) ## Leaving python-dev behind **mbunkus** (subscriber, #87248) [Link] (2 responses) ## Leaving python-dev behind **mathstuf** (subscriber, #69389) [Link] (1 responses) I also use Syncthing, but unlike the other poster, do get sync conflicts. But the Keepass db format keeps track of modification times for each record, such that "Merge another database" has never lost information for me. I can make changes with abandon on my Android phone and tablet and multiple desktops, knowing that worst case I'll need to merge. 
## Leaving python-dev behind **edgewood** (subscriber, #1123) [Link] ## Leaving python-dev behind **cortana** (subscriber, #24596) [Link] ## Leaving python-dev behind **marcH** (subscriber, #57642) [Link] ## Leaving python-dev behind **bartoc** (subscriber, #124262) [Link] (4 responses) ## Leaving python-dev behind **marcH** (subscriber, #57642) [Link] (3 responses) ## Leaving python-dev behind **pizza** (subscriber, #46) [Link] (2 responses) ## Leaving python-dev behind **marcH** (subscriber, #57642) [Link] (1 responses) ## Leaving python-dev behind **Wol** (subscriber, #4433) [Link] Wol ## Leaving python-dev behind **rahulsundaram** (subscriber, #21946) [Link] (2 responses) ## Leaving python-dev behind **cortana** (subscriber, #24596) [Link] (1 responses) ## Leaving python-dev behind **gioele** (subscriber, #61675) [Link] ## Leaving python-dev behind **marcH** (subscriber, #57642) [Link] (3 responses) > I sometimes wonder if the youngsters crawling github and such are even interested in have any discussions at all ? My experience browsing github is that most projects there have exactly no advertised discussion fora. Not github discussions, no mailing list, nor anything else. The only things available are 'issues' (oft abused by users as discussion only to be told 'go away' after a while) and 'pull requests'. Of course this is all anecdotal, but still says something. ## Leaving python-dev behind **marcH** (subscriber, #57642) [Link] (2 responses) ## Leaving python-dev behind **farnz** (subscriber, #17727) [Link] (1 responses) ## Leaving python-dev behind **mathstuf** (subscriber, #69389) [Link] ## Leaving python-dev behind **hodgestar** (subscriber, #90918) [Link] (1 responses) ## Leaving python-dev behind **plugwash** (subscriber, #29694) [Link] ## Leaving python-dev behind **pizza** (subscriber, #46) [Link] (3 responses) ## Leaving python-dev behind **farnz** (subscriber, #17727) [Link] ## Leaving python-dev behind **johannbg** (guest, #65743) [Link] (1 responses) As to what that is for individuals, people can only speculate since it differs from person to person. ## Leaving python-dev behind **pizza** (subscriber, #46) [Link] ## Leaving python-dev behind **Wol** (subscriber, #4433) [Link] Wol ## Leaving python-dev behind **pizza** (subscriber, #46) [Link] (8 responses) ## Leaving python-dev behind **farnz** (subscriber, #17727) [Link] (7 responses) ## Leaving python-dev behind **pizza** (subscriber, #46) [Link] (6 responses) ## Leaving python-dev behind **farnz** (subscriber, #17727) [Link] (5 responses) *and* refuse to make the existing projects attractive to new guard contributors. ## Leaving python-dev behind **pizza** (subscriber, #46) [Link] (4 responses) ## Leaving python-dev behind **farnz** (subscriber, #17727) [Link] (3 responses) ## Leaving python-dev behind **mathstuf** (subscriber, #69389) [Link] (2 responses) ## Leaving python-dev behind **farnz** (subscriber, #17727) [Link] (1 responses) *is* X12, dumping the bits that need a complete redesign in the modern era of GPUs, and trying to get the core right. 
## Leaving python-dev behind **Wol** (subscriber, #4433) [Link] Wol ## Leaving python-dev behind **cstanhop** (subscriber, #4740) [Link] (10 responses) ## Leaving python-dev behind **gnu_lorien** (subscriber, #44036) [Link] (2 responses) ## Leaving python-dev behind **intelfx** (subscriber, #130118) [Link] ## Leaving python-dev behind **JanC_** (guest, #34940) [Link] ## Leaving python-dev behind **rjones** (subscriber, #159862) [Link] (6 responses) ## Leaving python-dev behind **pizza** (subscriber, #46) [Link] (5 responses) > When examined in that light it is pretty obvious that discourse is the plain superior option. ## Leaving python-dev behind **marcH** (subscriber, #57642) [Link] (4 responses) ## Leaving python-dev behind **pizza** (subscriber, #46) [Link] (3 responses) [2] whose ToS says they can drop you at any time for any reason, with zero recourse ## Leaving python-dev behind **marcH** (subscriber, #57642) [Link] (1 responses) ## Leaving python-dev behind **pizza** (subscriber, #46) [Link] ## Leaving python-dev behind **interalia** (subscriber, #26615) [Link] ## Leaving python-dev behind **marcH** (subscriber, #57642) [Link] ## Leaving python-dev behind **smoogen** (subscriber, #97) [Link] 2. We have all lived long enough to know that if we 'fix' this tool, it will break a LOT of other people who will yell at us for doing so. Our infrastructure will break for 'no reason'. Tons of people we know will also have to deal with the breakage. Better to just let sleeping rabid dogs lie. 3. This one hits me a lot when I finally start to fix something. 'Why is it so important to fix now? why didn't you fix it before?' Sure it takes all these workarounds to make it 'work' but those are just cost of doing business. ## Leaving python-dev behind **Vipketsh** (guest, #134480) [Link] (1 responses) ## Leaving python-dev behind **marcH** (subscriber, #57642) [Link] ## Accessibility concerns **mirabilos** (subscriber, #84359) [Link] (1 responses) ## Accessibility concerns **pabs** (subscriber, #43278) [Link] ## Leaving python-dev behind **ceplm** (subscriber, #41334) [Link]
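One aside in the thread above wishes for a standard way to mark an email as a revision of an earlier message, by analogy with Usenet's Supersedes: header. As a minimal sketch of that idea — assuming a non-standard Supersedes header on plain email and an archiver that chooses to honour it, neither of which any current mail standard defines — it could look like this in Python:

```python
from email.message import EmailMessage
from email.utils import make_msgid

# The message as originally posted to the list.
original = EmailMessage()
original["Message-ID"] = make_msgid()
original["Subject"] = "Proposal: new contribution workflow"
original.set_content("First draft of the proposal.")

# A revised copy. "Supersedes:" is the Usenet convention mentioned in the
# comment; for SMTP mail the header name here is purely illustrative.
revision = EmailMessage()
revision["Message-ID"] = make_msgid()
revision["Subject"] = original["Subject"]
revision["Supersedes"] = original["Message-ID"]
revision.set_content("Second draft, with the numbers in section 2 fixed.")

def latest_versions(messages):
    """Drop any message whose Message-ID has been superseded by another."""
    superseded = {m["Supersedes"] for m in messages if m["Supersedes"]}
    return [m for m in messages if m["Message-ID"] not in superseded]

print(len(latest_versions([original, revision])))  # -> 1: only the revision remains
```

An archive or forum bridge that understood such a header could show only the latest revision while keeping the full history, which is essentially what the comment is asking for.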
true
true
true
null
2024-10-13 00:00:00
2022-07-20 00:00:00
null
null
null
null
null
null
23,573,306
https://github.com/trekhleb/machine-learning-experiments/blob/master/assets/recipes_generation.en.md
machine-learning-experiments/assets/recipes_generation.en.md at master · trekhleb/machine-learning-experiments
Trekhleb
We read every piece of feedback, and take your input very seriously. To see all available qualifiers, see our documentation.
true
true
true
🤖 Interactive Machine Learning experiments: 🏋️models training + 🎨models demo - trekhleb/machine-learning-experiments
2024-10-13 00:00:00
2019-11-14 00:00:00
https://repository-images.githubusercontent.com/221631156/7bdc2700-8ee2-11ea-835d-d4a9d7260c69
object
github.com
GitHub
null
null
20,634,522
https://observablehq.com/@yurivish/the-long-tail-of-dog-names
The Long Tail of Dog Names
Yuri Vishnevsky
Experiment and prototype by building visualizations in live JavaScript notebooks. Collaborate with your team and decide which concepts to build out. Use Observable Framework to build data apps locally. Use data loaders to build in any language or library, including Python, SQL, and R. Seamlessly deploy to Observable. Test before you ship, use automatic deploy-on-commit, and ensure your projects are always up-to-date.
true
true
true
Note: This is an excerpt from a longer piece in development on the dogs of New York. What is the relationship between dog names and dog breeds? Here's a map in which dog names appear next to the breeds to which they are disproportionately attached. Hover over a breed for details — unrelated names will fade away and the strongest connection will be marked in red. This visualization of the most popular 25 dog breeds in New York is based on an ex
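The "disproportionately attached" phrasing above implies some association score between a name and a breed; this excerpt doesn't say which one the notebook uses, so the sketch below stands in with a simple lift score, P(name | breed) / P(name), over toy data:

```python
from collections import Counter

# Toy stand-in for the NYC dog-licensing records the notebook visualizes.
dogs = [
    ("Bella", "Labrador Retriever"),
    ("Max", "Labrador Retriever"),
    ("Max", "German Shepherd"),
    ("Bella", "Chihuahua"),
    ("Gizmo", "Chihuahua"),
    ("Gizmo", "Chihuahua"),
]

name_counts = Counter(name for name, _ in dogs)
breed_counts = Counter(breed for _, breed in dogs)
pair_counts = Counter(dogs)
total = len(dogs)

def lift(name: str, breed: str) -> float:
    """How over-represented a name is within a breed, relative to all dogs."""
    p_name_given_breed = pair_counts[(name, breed)] / breed_counts[breed]
    p_name = name_counts[name] / total
    return p_name_given_breed / p_name

# Names most strongly attached to Chihuahuas in the toy data:
chihuahua_names = {n: lift(n, "Chihuahua") for n, b in pair_counts if b == "Chihuahua"}
print(sorted(chihuahua_names.items(), key=lambda kv: -kv[1]))  # Gizmo scores highest
```

A name with a lift well above 1 for a breed would be drawn next to that breed on the map; how the actual notebook scores and lays out the connections may differ.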
2024-10-13 00:00:00
2019-05-08 00:00:00
https://static.observabl…34568d5f5845.jpg
article
observablehq.com
Observable
null
null
4,536,078
http://platformed.info/facebook-yelp-pinterest-growth-hacking-startup/
null
null
Request unsuccessful. Incapsula incident ID: 186000170776569088-940624462338458703
true
true
true
null
2024-10-13 00:00:00
null
null
null
null
null
null
null
14,623,771
http://www.freepatentsonline.com/20170175413.pdf
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,417,595
http://blog.koding.com/2014/10/new-release/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
21,914,240
https://github.com/dosyago/p2..git
GitHub - dosyago/chai: chai - Experience Zero Trust security with Chai! Convert and view documents as vivid images right in your browser. No mandatory downloads, no hassle—just pure, joyful security! 🌈
Dosyago
Visit us on GitHub *Please note: right now this repository is mirroring the version of chai used in BrowserBox and has not been tested to run separate to BrowserBox. The goal eventually is to create an isomorphic chai that mirrors BrowserBox's latest changes while continuing to operate independently when not configured nor run as part of BrowserBox remote browser isolation system.* Convert documents into spectacular images for each page, and view them securely in your browser! No downloads, no third-party apps—just pure, joyful, **Zero Trust** goodness! Proudly part of the BrowserBox Pro cloud browser product by Dosyago. **Zero Trust Security**: Your document stays with you, always! 🛡️**Universal Formats**: PDF, DOC, XLSX—we speak all languages! 🌐**High-Performance**: Get ready to be amazed by the speed! 🚀**Open Source**: Built by the community, for the community! 💖 **Send Us Your Document**: Upload a file or just drop a URL.**Transformation Time**: We convert it into a beautiful gallery of high-quality images. 🎨**Enjoy**: Open your browser to a vivid, scrollable display of your document. 🌠 - Zero Trust Document Viewer - Secure PDF Viewer - Browser-based Document Viewer - Multi-Format Document Viewer ``` # Clone the treasure! git clone https://github.com/dosyago/chai.git # Enter the magical kingdom cd chai # Set up the wizardry ./scripts/setup.sh # Ignite the spark! ./scripts/restart.sh ``` *If you just want to run the server temporarily, you can hit npm start instead of the pm2-using ./scripts/restart.sh* - Got SSL certs? Place them in `$HOME/sslcerts/` and we'll go HTTPS! 🌐 - Tweak the `secret=<your secret>` URL parameter for extra magical powers. 🌟 - Document lifespan is 3 days by default, but feel free to change that in `/public/uploads/clean.sh` . 🕰️ Want more? Use the POST endpoint with a `secret=` parameter to authorize conversion via a secure HTTPS API. We recommend a beefy machine with at least 4 cores and 8 GB RAM for a spellbinding experience. **No affiliation* Non-commercial (Polyform Noncommercial 1.0) use or commercial licenses purchasable. For custom licensing options, email us at [email protected]. **Enjoy your Documents Responsibly with Chai! 🌈**
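The README above mentions a POST endpoint gated by a `secret=` parameter for driving conversions over HTTPS, but it does not spell out the route. The following is a hypothetical client sketch only: the endpoint path, the form field name, and the response handling are all assumptions, and only the `secret=` query parameter comes from the README.

```python
import requests  # any HTTP client would do

CHAI_BASE = "https://localhost:8080"  # wherever your chai instance is listening
UPLOAD_PATH = "/upload"               # placeholder: check your instance's routes
SECRET = "change-me"                  # the secret= value your server expects

def convert(path: str) -> str:
    """POST a local document for conversion and return the resulting URL.

    Endpoint path, field name and return value are illustrative guesses.
    """
    with open(path, "rb") as fh:
        resp = requests.post(
            f"{CHAI_BASE}{UPLOAD_PATH}",
            params={"secret": SECRET},
            files={"document": fh},   # field name is a guess
            verify=False,             # self-signed certs are common for local setups
            timeout=120,
        )
    resp.raise_for_status()
    return resp.url  # or parse whatever JSON/redirect your instance returns

# Example: print(convert("report.pdf"))
```

For anything beyond a quick experiment, check the actual route definitions in the chai source rather than relying on this sketch.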
true
true
true
chai - Experience Zero Trust security with Chai! Convert and view documents as vivid images right in your browser. No mandatory downloads, no hassle—just pure, joyful security! 🌈 - dosyago/chai
2024-10-13 00:00:00
2019-11-15 00:00:00
https://opengraph.githubassets.com/85b2afba76af7fb9de3ca82bc1b038593fe83c79ed142db5e1b5ba5490663213/dosyago/chai
object
github.com
GitHub
null
null
30,773,115
https://simpleflying.com/usglobal-airways-32-years-old-never-operated-flight/
USGlobal Airways: The 33-Year-Old Carrier That's Never Operated A Flight
Mark Finlay; Alexander Mitchell
### Summary

- USGlobal Airways, the oldest American startup airline, has never flown a single flight in its over 30-year history.
- The airline, originally called Baltia Airlines, faced numerous setbacks and delays in obtaining certification and securing funding.
- Despite changing its name to USGlobal Airways and planning to fly from New York Stewart International Airport, its operations never took off.

The story of USGlobal Airways began in 1989 when the carrier seemed like any other startup airline. Today, however, this bizarre carrier seemingly remains unfinished, and relatively few know the full context behind a 33-year-old airline that has not flown a single flight. Surprisingly, USGlobal Airways was not entirely a joke. They had actual plans to start flights between New York's John F. Kennedy International Airport (JFK) and various destinations within the former Soviet Union, which did not seem like a ludicrous business model in the years following the collapse of the USSR. The carrier initially went by the name Baltia Airlines and acquired a Boeing 747-200. However, founder Igor Dmitrowsky would make a few rather unusual decisions that would delay the carrier's certification.

Today, the airline is no closer to taking off than it was back then. In this article, we will look deeper at this bizarre carrier that has never once flown a commercial flight but has somehow yet to officially fold despite regulator scrutiny.

## Baltia wanted to fly between New York and St. Petersburg

In 1998, Baltia received permission to fly from JFK to Pulkovo Airport (LED) in St. Petersburg. Thinking it was about to launch revenue-making flights, Baltia deposited money to acquire an ex-Cathay Pacific Boeing 747-200. Despite securing the aircraft, the United States Department of Transportation (DOT) deemed the airline insufficiently funded and revoked its license to operate in the United States. Baltia convinced more investors of its plan to fly between the US and Russia and filed to begin flying between the two countries again. Still needing an aircraft in 2009, Baltia bought a former Pakistan International Airlines Boeing 747-200 and later a Northwest Airlines Boeing 747-200, as reported by Aviation for Aviators.

Eventually, the Pakistani jumbo jet was scrapped at Sultan Abdul Aziz Shah Airport (SZB) in Malaysia. The Northwest plane suffered the same fate in November 2020 when it was also sold for scrap at Oscoda-Wurtsmith Airport (OSC) in Michigan.
From its hub at SWF, USGlobal said it would become a regional airline offering passengers flights from Stewart to the following airports: - Albany International Airport (ALB) in upstate New York - Baltimore/Washington International Airport (BWI) in Maryland - Long Island MacArthur Airport (ISP) on New York's Long Island - Trenton–Mercer Airport (TTN) in New Jersey Once more changing plans in the last five years, Baltia, or USGlobal Airways, as it was now called, decided to fly from New York Stewart International Airport (SWF) to underserved European cities. This strategy seemed the most realistic to date, as airlines like Norwegian Air Shuttle, WOW, PLAY, and Norse Atlantic have attempted to create route networks on the model. While USGlobal Airways never got its plan off the ground, it did sign a letter of intent in 2017 with Michigan's Kalitta Air to lease a Boeing 767-300ER. However, as one would expect from this story so far, the aircraft has yet to take to the skies for the airline. ## Was it all a scam? Having never flown a single flight in more than 30 years has many people wondering whether the whole thing was a scam. As recently as 2014, the airline had a relatively large market capitalization of $70 million, which certainly was enough to buy some used jets. In 2016, US regulators filed charges against one of the airline's executives for misleading investors, according to One Mile At A Time. After not having filed financial reports for three years, the US Securities and Exchange Commission (SEC) revoked the company's stock, leaving USGlobal Airways with nothing but a name. With such shady business dealings, there have been an array of conspiracy theories behind the carrier's true purpose.
true
true
true
A brief history of the decades-old New York-based startup carrier.
2024-10-13 00:00:00
2022-03-19 00:00:00
https://static1.simplefl…/2024/01/747.jpg
article
simpleflying.com
Simple Flying
null
null
32,543,354
https://en.wikipedia.org/wiki/Hurting_the_feelings_of_the_Chinese_people
Hurting the feelings of the Chinese people - Wikipedia
null
# Hurting the feelings of the Chinese people "**Hurting the feelings of the Chinese people**" (simplified Chinese: 伤害中国人民的感情; traditional Chinese: 傷害中國人民的感情; pinyin: *shānghài Zhōngguó rénmín de gǎnqíng*) is a political catchphrase used by the Ministry of Foreign Affairs of the People's Republic of China, in addition to Chinese state media organisations and Chinese Communist Party–affiliated news outlets such as the *People's Daily*,[1] the *China Daily*[2] and Xinhua News Agency[3] to express dissatisfaction with or condemnation of the words, actions or policies of a person, organisation, or government that are perceived to be of an adversarial nature towards China, through the adoption of an *argumentum ad populum* position against the condemned target.[4][5][6][7] Alternative forms of the catchphrase include "**hurting the feelings of 1.3 billion people**"[3][note 1] (simplified Chinese: 伤害13亿人民感情; traditional Chinese: 傷害13億人民感情) and "**hurting the feelings of the Chinese nation**" (simplified Chinese: 伤害中华民族的感情; traditional Chinese: 傷害中華民族的感情).[8][9] In September 2023, a law was proposed before the National People's Congress that would criminalize comments, clothing, or symbols that hurt the feelings of the Chinese people.[10] ## Origin [edit]The phrase first appeared in 1959 in the People's Daily, where it was used to criticise India during a border dispute.[1] In the decades that followed, the phrase has been regularly used to express displeasure of the Chinese government via its various official communication channels. Targets accused of having "hurt the feelings of the Chinese people" range from national governments and international organisations,[11] to companies such as automakers,[1] newspapers,[12] luxury jewellers,[13] and hotel chains,[14] in addition to outspoken individuals including sportspeople,[15] business executives,[16] film actors,[17] and music performers.[18] Although bureaucratic in origin, ordinary people have also become encouraged to use the expression to display dissatisfaction against criticism targeting China as well.[19][20] ## Phraseology [edit]A study conducted by David Bandurski as part of the China Media Project at the University of Hong Kong selected 143 text samples of the phrase from excerpts from the *People's Daily* published between 1959 and 2015; from this sample, Japan was most frequently accused of "hurting the feelings of the Chinese people" with 51 occurrences, while the United States ranked second at 35 occurrences. In terms of specific issues which drew condemnation through the catchphrase, 28 were in relation to the political status of Taiwan, while the Tibetan sovereignty debate drew condemnation with the phrase 12 times.[4] Bandurski later wrote in 2019 that the phrase appears around at most four times in the *People's Daily* each year, though there was a slight uptick in 2012.[22] A December 2008 *Time* article used an informal statistical survey to analyse the occurrence of the phrase within *People's Daily* publications, pointing out that during the period between 1946 and 2006 there were more than one hundred articles that made accusations against a target that had "hurt the feelings of the Chinese people".[23] Victor H. 
Mair wrote in 2011 that while the phrase "hurt the feelings of the Chinese people" resulted in 17,000 online hits, rewriting the phrase as "hurt the feelings of the Japanese people" only yields 178 hits, and the same phrase rewritten with 17 other nationalities provides zero hits.[24] In addition, use of the keywords "bullying" (Chinese: 欺负) and "looking down upon" (Chinese: 看不起) results in 623,000 Google hits for "bullying China", and 521,000 hits for "looking down upon Chinese people".[19] Horng-luen Wang (Chinese: 汪宏倫; pinyin: *Wāng Hónglún*), an associate researcher at the Institute of Sociology at Academia Sinica in Taiwan, found that there were 319 instances of "hurt the feelings of the Chinese people" in the *People's Daily* from 1949 to 2013, based on data obtained from the *People's Daily* database.[21] ## Criticism [edit]In August 2016, Merriden Varrall, Director of the East Asia Program at the Lowy Institute in Sydney, Australia, published an opinion piece in *The New York Times* titled "A Chinese Threat to Australian Openness" in which she described a trend where many of the 150,000 Chinese international students in Australia introduce pro-China stances into the classroom while attempting to stifle debate that does not match the official viewpoint of China;[25] later in September 2016, another opinion piece by Varrall in *The Guardian* expressed that while Chinese students in Australia would frequently use the phrase "hurt the feelings of the Chinese people" whenever China received international criticism, such phrasing is only ever used within the context of China and would never be used by those of other nationalities, such as Australians, to condemn criticism of their own countries.[5] A February 2016 piece in *The Economist* commented that the supposed outrage of the people is often utilised as a tool to allow the Chinese Communist Party to abandon its official diplomatic principle of non-interference in the internal affairs of other countries, for instance when China releases official statements claiming that the visits of Japanese politicians to Yasukuni Shrine hurt the feelings of the Chinese people, it is able to express dissatisfaction towards the visits on behalf of the people rather than as an official government statement or position.[26] ## Historical events [edit]### United States [edit]In August 1980, Xinhua News Agency accused US presidential candidate Ronald Reagan of having "deeply hurt the feelings of the 1 billion Chinese people" and "given rise to widespread concern and indignation in China", after Reagan made the suggestion that the United States should open a governmental liaison office in Taiwan.[27] An April 9, 1983 article in the *People's Daily* argued that the United States had "made a whole series of moves that hurt the Chinese people's dignity, feelings, and interests", in reference to US military arms sales to Taiwan, the status of Taiwan in the Asian Development Bank, and the defection of Chinese tennis player Hu Na while in California.[28] Following the 1989 Tiananmen Square protests and massacre, US congressional actions targeting the Chinese government more than doubled, and in response, the Chinese assistant foreign minister expressed to the US ambassador that the new bills "attacked China and interfered in its internal affairs", and that "such activities by the US Congress hurt the feelings of the 1.1 billion Chinese people".[29] After the Hainan Island incident in 2001 where a US Navy signals intelligence aircraft collided with a People's 
Liberation Army Navy interceptor, Chinese government representatives rejected the United States' request to repair the US Navy aircraft on Chinese soil and have it fly back to base, instead insisting that the plane be dismantled and returned to the US, stating that allowing the US to fly back would "hurt the feelings of the Chinese people".[30] Presidents Bill Clinton,[31] George W. Bush[32] and Barack Obama[33] have all been accused by Chinese foreign ministry spokespersons and foreign ministers of "hurting the feelings of the Chinese people" in relation to their respective meetings with the 14th Dalai Lama. ### Japan [edit]Following Japanese prime minister Yasuhiro Nakasone's visit to Yasukuni Shrine in 1985, the *People's Daily* wrote that the visit "hurt the feelings of both Chinese and Japanese peoples who were victims of Japanese militarism".[34] Prime minister Junichiro Koizumi's regular visits to Yasukuni Shrine from 2001 to 2006 have likewise been criticised by Chinese foreign ministry spokeswoman Zhang Qiyue (Chinese: 章啟月) as having "hurt the feelings of the Chinese people and the people of the majority of victimized countries in Asia",[35] by Chinese foreign minister Li Zhaoxing as having "hurt the feelings of the Chinese people" as he blamed the visits for anti-Japan protests in China,[36][37][38] and by Chinese commerce minister Bo Xilai as "severely hurting the Chinese people's feelings and damaging the political foundation for bilateral ties".[39] On September 15, 2012, after the Japanese government nationalised control over three of the privately owned islands within the Senkaku Islands, the Xinhua News Agency stated that the move "hurt the feelings of 1.3 billion Chinese people".[3] ### Holy See [edit]On October 1, 2000, Pope John Paul II canonised 120 missionaries and adherents who died in China during the Qing Dynasty and Republican era; in response, the *People's Daily* expressed that the move "greatly hurt the feelings of the Chinese race and is a serious provocation to the 1.2 billion people of China".[8] The Chinese Ministry of Foreign Affairs issued a statement stating that the Vatican "seriously hurt the feelings of the Chinese people and the dignity of the Chinese nation".[40] In 2005, the CCP-affiliated Catholic Patriotic Association stated that the attendance of Taiwanese president Chen Shui-bian at Pope John Paul II's funeral "hurt the feelings of the Chinese people, including five million Catholics".[41][42] ### Europe [edit]On September 24, 2007, Chinese foreign ministry spokesperson Jiang Yu expressed that German chancellor Angela Merkel's meeting with the 14th Dalai Lama "hurt the feelings of the Chinese people and seriously undermined China-Germany relations".[43] The 14th Dalai Lama's later meeting with French president Nicolas Sarkozy in December 2008 drew similar criticisms, with the Chinese Ministry of Foreign Affairs releasing a press statement insisting that Sarkozy's actions "constitute gross interference in China's internal affairs and offend the feelings of the Chinese people";[44] Xinhua News Agency likewise condemned Sarkozy's meeting as "not only hurting the feelings of the Chinese people, but also undermining Sino-French relations".[45][46] British prime minister David Cameron's 2012 meeting with the 14th Dalai Lama also received identical accusations of hurt feelings.[47][48] On October 23, 2008, the European Parliament awarded the 2008 Sakharov Prize to social activist Hu Jia. 
Prior to the announcement, China had put extensive pressure on the European Parliament to prevent Hu Jia from winning the award, with Chinese Ambassador to the European Union Song Zhe writing a warning letter to the President of the European Parliament stating that should Hu Jia receive the prize, it would seriously damage Sino-European relations and "hurt the feelings of the Chinese people".[49][50] ### Canada [edit]On October 29, 2007, Canadian prime minister Stephen Harper met with the 14th Dalai Lama; in response, Chinese foreign ministry spokesman Liu Jianchao stated in a news briefing that the "disgusting conduct has seriously hurt the feelings of the Chinese people and undermined Sino-Canadian relations".[51] Following the arrest of Huawei chief financial officer Meng Wanzhou in December 2018, Xinhua News Agency accused Canada of assisting American hegemonic behaviour, an act which "hurt the feelings of the Chinese people".[52] ### Mexico [edit]On September 9, 2011, Mexican president Felipe Calderón met with the 14th Dalai Lama; on the 10th, Chinese foreign ministry spokesperson Ma Zhaoxu made an official statement stating that China expressed strong dissatisfaction and resolute opposition to the meeting, and that the meeting "hurt the feelings of the Chinese people".[53] ### Hong Kong [edit]On October 13, 2016, the Government of Hong Kong condemned lawmakers Leung Chung-hang and Yau Wai-ching as having "harmed the feelings of our compatriots" in a written statement, following allegations that they intentionally pronounced the word "China" as "Chee-na", the Cantonese pronunciation of the Japanese ethnic slur *Shina*, during their swearing-in ceremony;[54] Xinhua News Agency reported that a representative of the Hong Kong Liaison Office made an official statement condemning the act as "challenging the nation's dignity and severely hurting the feelings of all Chinese people and overseas Chinese, including Hong Kong compatriots".[55] On August 3, 2019, during the 2019–20 Hong Kong protests, an unknown protester lowered the national flag of China at Tsim Sha Tsui and threw it into the sea;[56] the Hong Kong and Macau Affairs Office issued a statement condemning "extremist radicals who have seriously violated the National Flag Law of the People's Republic of China... 
flagrantly offending the dignity of the country and the nation, wantonly trampling on the baseline of the one country, two systems principle, and greatly hurting the feelings of all Chinese people".[56][2] #### Glory to Hong Kong [edit]In November 2022, after a rugby match in South Korea played Glory to Hong Kong for the Hong Kong team, lawmaker Starry Lee said that Asia Rugby should apologize to "the entire [Chinese] population."[57] In December 2022, security chief Chris Tang appealed to Google to "correct" the search results to list March of the Volunteers instead of Glory to Hong Kong when searching for the national anthem of Hong Kong, and said that the song being the top result hurt the feelings of Hong Kong people.[58] ### Australia [edit]On 26 August 2020, China's deputy ambassador to Australia, Wang Xining (Chinese: 王晰宁[59]), expressed that Australia's co-proposal for an independent investigation into the causes of the COVID-19 pandemic "hurts the feelings of the Chinese people" during his address to the National Press Club of Australia.[60][61] ### Czech Republic [edit]On 31 January 2023, after Czech President-elect Petr Pavel conducted a phone call with Tsai Ing-wen from Taiwan, foreign ministry spokeswoman Mao Ning said that "Pavel... trampled on China's red line" and that "This severely interferes in China's internal affairs and has hurt the feelings of the Chinese people."[62] ## See also [edit]## Notes [edit]## References [edit]- ^ **a****b**"'Hurting the feelings of the Chinese people' is just a way of registering state displeasure".**c***Hong Kong Free Press*. February 16, 2018. Archived from the original on February 16, 2018. Retrieved September 3, 2020. - ^ **a****b**"Tung: Desecration of national flags hurts feelings of 1.4b people".**c***China Daily*. September 24, 2019. Archived from the original on November 18, 2019. - ^ **a****b**日方“购岛”伤害13亿中国人民感情.**c***NetEase*(in Chinese). Jiangxi Daily. September 15, 2012. Archived from the original on February 2, 2014. - ^ **a**Bandurski, David (January 29, 2016). "Hurting the feelings of the "Zhao family"".**b***University of Hong Kong*. China Media Project. Archived from the original on February 4, 2016. - ^ **a**中国留学生“玻璃心”缘何而来?.**b***Deutsche Welle*(in Chinese). September 9, 2017. Archived from the original on October 15, 2017. **^**Bandurski, David (January 29, 2016). "Why so sensitive? A complete history of China's 'hurt feelings'".*Hong Kong Free Press*. Archived from the original on December 9, 2016. Retrieved September 3, 2020.**^**Richburg, Keith B. (February 22, 2018). "China's hard power and hurt feelings".*Nikkei Asian Review*. Archived from the original on April 8, 2018.- ^ **a****b**梵蒂冈“封圣”是对中国人民的严重挑衅.**c***People's Daily*(in Chinese). October 3, 2000. Archived from the original on November 23, 2015. - ^ **a**《人民日報》評論員文章.**b***People's Daily*(in Chinese). October 13, 2000. Archived from the original on September 3, 2020. **^**"Chinese law to ban comments that harm China's 'feelings' prompts concern".*The Guardian*. 2023-09-08. ISSN 0261-3077. Archived from the original on 11 July 2024. Retrieved 2023-09-10.**^**"China says unity at stake over Tibet".*Reuters*. April 12, 2008. Archived from the original on September 3, 2020.**^**"Swedish media calls for action against attacks from Chinese officials".*The Guardian*. January 30, 2020. Archived from the original on January 30, 2020.**^**"After Versace, now Swarovski apologises to China for referring Hong Kong as separate state".*The New Indian Express*. 
August 13, 2019. Archived from the original on August 13, 2019.**^**"Delta flies into China trouble over Tibet and Taiwan".*CNN*. January 12, 2018. Archived from the original on January 12, 2018.**^**"People in China don't quite know why they are boycotting Arsenal player Mesut Özil".*Quartz*. December 16, 2019. Archived from the original on August 17, 2020.**^**"China state broadcaster hints NBA exec Morey 'paid price' for HK tweet".*Bangkok Post*. October 16, 2020. Archived from the original on October 17, 2020.**^**"Angelina Jolie Hurts the Feelings of the Chinese People".*The Wall Street Journal*. June 10, 2014. Archived from the original on September 28, 2020.**^**"China Hurt by Bjork".*The New York Times*. March 8, 2008. Archived from the original on May 27, 2012.- ^ **a**Varrall, Merriden (September 8, 2017). "'You should consider our feelings': for Chinese students the state is an extension of family".**b***The Guardian*. Archived from the original on September 8, 2017. **^**Langfitt, Frank (April 11, 2001). "In China's view, a matter of face".*The Baltimore Sun*. Archived from the original on December 18, 2020.- ^ **a**汪宏倫 (Horng-luen Wang) (2014). 理解當代中國民族主義:制度、情感結構與認識框架 [Understanding Contemporary Chinese Nationalism: Institutions, Structures of Feeling, and Cognitive Frames] (PDF). 文化研究 (in Chinese) (19): 189–250. Archived (PDF) from the original on December 18, 2020.**b** **^**Bandurski, David (30 April 2019). "Is China dispensing with "hurt feelings"?".*China Media Project*. Retrieved 27 June 2023.**^**"Hurt Feelings? Blame Deng Xiaoping".*Time*. November 11, 2008. Archived from the original on June 4, 2016.**^**Mair, Victor (12 September 2011). "'Hurt(s) the feelings of the Chinese people'".*Language Log*. Retrieved 2023-09-19.**^**Varrall, Merriden (July 31, 2017). "A Chinese Threat to Australian Openness".*The New York Times*. Archived from the original on August 1, 2017.**^**"A world of hurt".*The Economist*. February 4, 2016. Archived from the original on May 7, 2019.**^**"Chinese Reiterate Attack On Reagan's Taiwan Stand".*The Washington Post*. August 23, 1980. Archived from the original on November 21, 2020.**^**Whiting, Allen S. (1983). "Assertive Nationalism in Chinese Foreign Policy".*Asian Survey*.**23**(8): 913–933. doi:10.2307/2644264. JSTOR 2644264.**^**Baggott, Erin (September 8, 2016).*The Influence of Congress upon America's China Policy*(PDF) (Thesis). University of Southern California. p. 21. Archived (PDF) from the original on August 31, 2018.**^**"China insists spy plane must be taken apart".*The Guardian*. May 24, 2001. Archived from the original on May 10, 2014.**^**"Dalai Diplomacy".*Wired*. November 5, 1998. Archived from the original on September 3, 2020.**^**"Bush must cancel meet with Dalai".*Hindustan Times*. October 16, 2007. Archived from the original on September 3, 2020.**^**"Why the US has nothing to fear from China's warnings about the Dalai Lama".*Quartz*. February 21, 2014. Archived from the original on March 2, 2014.**^**Jiang, Wenran (November 6, 1998).*Competing as Potential Superpowers: Japan's China Policy 1978-1998*(PDF).*National Library of Canada*(PhD). Carleton University. p. 152. 0-612-37067-4. Archived (PDF) from the original on November 21, 2020. Retrieved November 21, 2020.**^**"Spokesperson Zhang Qiyue on Japanese Prime Minister Koizumi's Visit to the Yasukuni Shrine".*Ministry of Foreign Affairs of the People's Republic of China*. August 14, 2001. 
Archived from the original on February 27, 2016.**^**李肇星:日本领导人不应再做伤害中国人民感情的事.*People's Daily*(in Chinese). March 7, 2006. Archived from the original on October 29, 2006.**^**"Koizumi rejects Beijing's claim that Yasukuni trips hurt the Chinese people".*The Japan Times*. April 20, 2005. Archived from the original on January 8, 2019.**^**"FM: Yasukuni Shrine visits hurt feelings".*China Daily*. March 6, 2004. Archived from the original on March 8, 2004.**^**"China's opposition to Yasukuni shrine visit is natural reaction: minister".*People's Daily*. June 2, 2006. Archived from the original on June 29, 2018.**^**中国外交部发表声明强烈抗议梵蒂冈“封圣”.*Ministry of Foreign Affairs of the People's Republic of China*(in Chinese). November 7, 2000. Archived from the original on September 3, 2020.**^**"China boycotts Pope's funeral in anti-Taiwan protest".*Australian Broadcasting Corporation*. April 7, 2005. Archived from the original on October 28, 2016.**^**"Taiwan's Chen at Pope's funeral".*BBC News*. April 8, 2005. Archived from the original on December 13, 2016.**^**"Beijing Furious with Berlin over Dalai Lama Visit".*Der Spiegel*. September 25, 2007. Archived from the original on March 17, 2016.**^**"Comme prévu, la Chine est très fâchée contre la France".*Libération*(in French). December 7, 2008. Archived from the original on September 29, 2020.**^**"Rencontre Sarkozy-Dalaï Lama: la colère de Pékin".*Radio France Internationale*(in French). December 7, 2008. Archived from the original on August 3, 2020.**^**"La Chine tance Paris après l'entretien entre le dalaï lama et Nicolas Sarkozy".*Le Monde*(in French). December 7, 2008. Archived from the original on January 4, 2009.**^**"David Cameron's Dalai Lama meeting sparks Chinese protest".*BBC News*. May 16, 2012. Archived from the original on January 1, 2015.**^**"China cancels UK visit over David Cameron's meeting with Dalai Lama".*The Guardian*. May 25, 2012. Archived from the original on September 9, 2013.**^**"Sakharov Prize 2008 awarded to Hu Jia".*European Parliament*. Archived from the original on October 26, 2008.**^**欧洲议会授予胡佳人权奖.*The Wall Street Journal*(in Chinese). October 23, 2008. Archived from the original on January 19, 2009.**^**"Canada's behaviour disgusting, says Beijing".*The Sydney Morning Herald*. October 31, 2007. Archived from the original on November 21, 2020.**^**"Beijing threatens Canada with 'grave consequences for hurting feelings of Chinese people'".*Politico*. December 9, 2018. Archived from the original on December 9, 2018.**^**外交部发言人马朝旭就墨西哥总统卡尔德龙会见达赖事发表谈话.*Ministry of Foreign Affairs of the People's Republic of China*(in Chinese). September 10, 2011. Archived from the original on September 30, 2011.**^**"Hong Kong government accuses localist lawmakers of hurting feelings of Chinese with 'offensive' oath-taking".*South China Morning Post*. October 13, 2016. Archived from the original on December 4, 2016.**^**"Senior Beijing official in Hong Kong expresses 'condemnation' over localist lawmakers' oaths".*South China Morning Post*. October 14, 2016. Archived from the original on September 15, 2017.- ^ **a**【旺角遊行】港澳辦、中聯辦譴責:極端激進分子侮辱國旗必須嚴懲.**b***HK01*香港01 (in Chinese). August 4, 2019. Archived from the original on August 26, 2020. Retrieved September 3, 2020. **^**Ho, Kelly (2022-11-14). "National security police should investigate anthem error at rugby match, Hong Kong lawmakers say".*Hong Kong Free Press*. Retrieved 2022-11-14.**^**"Chris Tang vows to fix Google's HK anthem results - RTHK".*RTHK*. 
Retrieved 2022-12-13.**^**"中国公使王晰宁:澳洲缺乏礼貌 伤害了中国人民的感情".*Australian Broadcasting Corporation*(in Chinese). August 26, 2020. Archived from the original on November 9, 2020.**^**"Australia 'hurt the feelings' of China with calls for coronavirus investigation, senior diplomat says".*Australian Broadcasting Corporation*. August 26, 2020. Archived from the original on September 28, 2020.**^**"Coronavirus inquiry 'unfair': Chinese diplomat".*The Australian*. August 26, 2020. Archived from the original on September 29, 2020.**^**"Beijing angered by new Czech president's Taiwan call".*Hong Kong Free Press*. 2023-01-31. Retrieved 2023-01-31.
true
true
true
null
2024-10-13 00:00:00
2020-09-03 00:00:00
https://upload.wikimedia…hinaChaoyang.JPG
website
wikipedia.org
Wikimedia Foundation, Inc.
null
null
5,103,018
http://www.pcmag.com/article2/0,2817,2414561,00.asp
Intel to Shutter Motherboard Business, Reallocate Resources
Joel Santo Domingo
Intel today announced that it will shutter its long-standing retail desktop motherboard business after the imminent rollout of 4th generation Intel Core processors (aka Haswell). Those resources will be reallocated to other forward-looking product teams like the one that recently developed Intel's Next Unit of Computing (NUC) and other groups working on ultrabooks and all-in-one desktops.

Intel's Desktop Motherboard group is responsible for bringing retail-level motherboards and motherboard kits to the market, for use by do-it-yourselfers as well as boutique PC system builders. While desktop motherboards for the end user have been an Intel staple for the past 20 years, other motherboard manufacturers like MSI, Asus, Sapphire, ASRock, and Gigabyte offer many more choices in interfaces, form factors, and added features. Intel (the corporation) will still produce the motherboard chipsets that are found on these motherboards, but the end consumer will no longer be able to buy an Intel-branded motherboard after the Haswell board life cycle ends in 18 months to two years. Warranty and driver support for upcoming motherboards will continue for their respective warranty periods.

That said, the Intel desktop motherboards supporting Haswell evidently will be the last batch of retail-level desktop motherboards from the company. Intel won't be shutting out the DIY PC guy entirely: you'll still be able to buy NUC kits and standalone NUC boards for your own projects. The product lines that are "going away" include all ATX motherboards, including Mini-ATX and Micro ATX models. Product lines that will benefit from the shifting of resources include the FFRD (Form Factor Reference Design) group, the folks responsible for ultrabooks and new crops of all-in-one desktops. Intel's CPU lines will continue, including LGA 2011, LGA 1155/1150, and BGA for entry-level platforms. "Intel's roadmap includes 227 desktop [CPU] SKUs at 34 different price points, offering desktop solutions for a wide range of customers" said Intel spokesman Dan Snyder.

What does this mean in the long term? Well, if you're a DIY PC guy, system builder, or IT pro that builds desktops for your business with Intel branded motherboards, you'll need to find another motherboard supplier, of which there are legion. If you're a desktop PC buyer, carry on as usual: this change won't affect you all that much, if at all.

For more, see Intel's Rough 2012 Points to More Challenges Ahead, as well as PCMag's 2012 year in review for Intel.
true
true
true
Intel today announced that it will shutter its long-standing retail desktop motherboard business after the imminent rollout of 4th generation Intel Core processors (aka Haswell).
2024-10-13 00:00:00
2013-01-22 00:00:00
https://i.pcmag.com/imag….v1569491140.jpg
article
pcmag.com
PCMag
null
null
19,412,474
https://edri.org/join-the-ultimate-action-week-against-article-13/
Join the ultimate Action Week against Article 13 - European Digital Rights (EDRi)
null
# Join the ultimate Action Week against Article 13

The final vote on the Copyright Directive in the European Parliament plenary will take place on 26 March. A key piece raising concerns in the proposal is Article 13. It contains a change to platforms' responsibility that will imminently lead to the implementation of upload filters on a vast number of internet platforms. The proposed text of Article 13 on which the Parliament will be voting is the worst we have seen so far.

Public outcry around Article 13 reached a historic peak with almost five million individuals signing a petition against it, and thousands calling, tweeting and emailing their Members of the European Parliament (MEPs). Despite the scale of the protests, legislators fail to address the problems and remove upload filters from the proposal.

Join the Action Week (20 March – 27 March) organised by the free internet community and spread the word about the #SaveYourInternet movement! Send Members of the European Parliament a strong message: "Side with citizens and say NO to upload filters!"

**NOW – Get active!**

Kickstart the action week! Did you get your MEP to pledge opposition to the "Censorship Machine" during the plenary vote? Did you reach out to a national news outlet to explain to them why this is bad for the EU? Did you tell your best mate your meme game may be about to end? If you answered "No" to any of those questions… NOW IS THE TIME TO ACT.

**21 March – Internet blackout day**

Several websites are planning to shut down on this day. Wikimedia Germany is one of them. Is your website potentially hosting copyrighted content, and therefore affected by the upcoming copyright upload filter? Join the protest! #Blackout21

**23 March – Protests all over Europe**

Thousands have marched in the streets in the past weeks. The protests were not least influenced by the European Commission's allegations that the #SaveYourInternet movement is a bot-driven one, purposely misleading communication from the EU Parliament, and the attempted rushing of the final vote weeks before originally scheduled. 23 March will be the general protest day – see a map here. Commit to the EU's core democratic values and show what positive citizens' engagement looks like! #**Article13Demo** #**Artikel13Demo**

**19 to 27 March – Activists travel to meet their MEPs**

We have launched a travel grant for activists willing to travel to Strasbourg and Brussels in order to discuss the issue with their representatives. Do you want to take part in our final effort to get rid of mandatory upload filters? Join us! The deadline to apply is Friday 15 March. **#SYIOnTour**

It is very important that we connect with our MEPs and make our concerns heard **every day** of the Action Week. Whether you can travel or make phone calls to get in touch with your representatives, or grow awareness in your local community – it **all makes a huge difference**. Build on the voices of internet luminaries, the UN Special Rapporteur on Freedom of Expression, civil society organisations, programmers, and academics who spoke against Article 13!

We need to stop the censorship machine and work together in order to create a better European Union! You can count on us! Can we count on you?

### Read more

Save Your Internet Campaign website
https://saveyourinternet.eu/

Pledge 2019 Campaign Website
https://pledge2019.eu/en

Upload Filters: history and next steps (20.02.2019)
https://edri.org/upload-filters-status-of-the-copyright-discussions-and-next-steps
true
true
true
The final vote on the Copyright Directive in the European Parliament plenary will take place on 26 March. A key piece raising concerns in the proposal is Article 13. It contains a change of platforms’ responsibility that will imminently lead to the implementation of upload filters on a vast number of internet platforms. The proposed […]
2024-10-13 00:00:00
2020-08-21 00:00:00
https://edri.org/wp-cont…eek-1024x576.png
article
edri.org
European Digital Rights (EDRi)
null
null
7,492,660
http://meowbit.com/press-release/
Account Suspended
null
Account Suspended This Account has been suspended. Contact your hosting provider for more information.
true
true
true
null
2024-10-13 00:00:00
null
null
null
null
null
null
null
18,139,327
https://path.com/about
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,539,952
https://www.psychologytoday.com/blog/the-athletes-way/201701/fake-news-vaccine-inoculates-against-alternative-facts
Fake News 'Vaccine' Inoculates Against 'Alternative Facts'
Christopher Bergland
###### Environment # Fake News 'Vaccine' Inoculates Against 'Alternative Facts' ## New 'inoculation theory' could protect the masses from epidemics of fake news. Posted January 22, 2017 We live in a social media era in which the epidemic of ‘fake news’ and ‘alternative facts’ go viral far too often. Fortunately, an international team of social psychologists has pinpointed simple ways that the general public can be ‘vaccinated’ against the virus of calculated misinformation campaigns. The new groundbreaking report, ”Inoculating the Public against Misinformation about Climate Change,” was published today in the open-access journal *Global Challenges.* For this study, researchers at Yale University, the University of Cambridge, and George Mason University investigated how the general public can most effectively be inoculated against strategic misinformation efforts designed to portray climate change as a hoax. To unearth novel ways to create a ‘vaccine’ against fake news regarding climate change, the researchers exposed participants to polarizing climate-change statements using a cohort of 2,167 men and women from across the United States. The demographic of study participants covered a broad spectrum of age, education, and political parties. The main goal of the study was to compare participants reactions to climate change reports based on scientific facts with those of widespread misinformation websites that rely on hyperbole and falsehoods. The study reaffirmed the power of fake news: When presented back-to-back in immediate succession, the libelous material on 'fake news' websites completely negated the accurate scientific findings in people's minds. Their opinions ended up right back where they had started in terms of being confused about what to believe about climate change. Prior to this study, the researchers scoured the internet to find the most effective climate change misinformation campaign currently influencing public opinion in the United States. Top honors for spreading provable falsehoods on climate change went to the Oregon Global Warming Petition Project. This website claims: “31,487 American scientists have signed this petition, including 9,029 with PhDs stating there is no evidence that man-made carbon dioxide emissions will cause climate change. These scientists are convinced that the human-caused global warming hypothesis is without scientific validity and that government action on the basis of this hypothesis would unnecessarily and counterproductively damage both human prosperity and the natural environment of the Earth.” According to a statement by Sander van der Linden, a social psychologist from the University of Cambridge and Director of the Cambridge Social Decision-Making Lab who led this research, "Misinformation can be sticky, spreading and replicating like a virus. We wanted to see if we could find a 'vaccine' by pre-emptively exposing people to a small amount of the type of misinformation they might experience. A warning that helps preserve the facts. The idea is to provide a cognitive repertoire that helps build up resistance to misinformation, so the next time people come across it they are less susceptible. It's uncomfortable to think that misinformation is so potent in our society. A lot of people's attitudes toward climate change aren't very firm. They are aware there is a debate going on, but aren't necessarily sure what to believe. Conflicting messages can leave them feeling back at square one." 
The researchers found that the most effective way to inoculate someone against potential misinformation was to take a two-pronged 'vaccination' approach. First, a *general inoculation* consisted of a warning: "Some politically-motivated groups use misleading tactics to try and convince the public that there is a lot of disagreement among scientists." Second, a *detailed inoculation* picked apart the Oregon petition on specifics: for example, it highlighted that many of the supposed signatories are fraudulent, such as Charles Darwin and members of the Spice Girls, and that less than 1 percent of signatories actually have backgrounds in climate science.

The first, general inoculation produced an average opinion shift of 6.5 percentage points toward acceptance of the climate-science consensus, despite exposure to fake news. When the second, more detailed inoculation was added to the first, the opinion shift jumped to almost 13 percentage points, despite exposure to the falsehoods of the Oregon petition.

The researchers also analyzed their findings through the lens of political party affiliation. Interestingly, prior to any type of inoculation, fake news on climate change negated the fact-based scientific findings equally for Democrats and Independents. For Republicans, however, the fake news on climate change overrode the science-based facts by 9 percentage points. The good news is that following inoculation, the positive effects of the accurate information were preserved equally across all political parties.

Van der Linden concluded, "We found that inoculation messages were equally effective in shifting the opinions of Republicans, Independents and Democrats in a direction consistent with the conclusions of climate science. What's striking is that, on average, we found no backfire effect to inoculation messages among groups predisposed to reject climate science, they didn't seem to retreat into conspiracy theories...There will always be people completely resistant to change, but we tend to find there is room for most people to change their minds, even just a little."

The researchers point out that, historically, tobacco and fossil fuel companies have used psychological inoculation to plant seeds of doubt about science-based findings and to undermine public faith in the scientific consensus. They believe their latest study provides empirical evidence that the same psychological inoculation techniques can be turned around to protect fact-based science that promotes public health and well-being. The researchers conclude that pre-emptively warning people about political and profit-motivated agendas to spread misinformation on climate change may help protect public attitudes about the resounding scientific consensus.

**"Alternative facts are not facts. They are falsehoods." — Chuck Todd**

This evening, as I was writing this *Psychology Today* blog post on how to inoculate the public against fake news, my Facebook page and Twitter feed were exploding with stories about 'alternative facts' and attacks on the 'dishonest media' that had erupted over the weekend, following Donald Trump's inauguration on Friday.
The latest 'inoculation theory' from van der Linden and colleagues provides a type of vaccination against fake news, propaganda, and conflicting information on highly politicized subjects such as climate change ... or how many people attended Donald Trump's inauguration. As the *New York Times* reported yesterday, "With False Claims, Trump Attacks Media on Turnout and Intelligence Rift."

Without getting into the nitty-gritty of this highly politicized issue, I wanted to provide some potential 'inoculation' for you, right here and now, regarding this potential 'fake news' story by sharing links to three full-length videos and other 'legitimate news' stories, so you can 'vaccinate' yourself against any potential misinformation by watching all of the materials in their entirety with your own eyes.

The first link is to President Trump's full speech at CIA headquarters yesterday, where he said, "I have a running war with the media. They are among the most dishonest human beings on Earth." Regarding the crowd size at his inauguration, Trump also said, "We had a massive field of people. You saw that. Packed. I get up this morning. I turn on one of the networks and they show an empty field. I say: "wait a minute. I made a speech. I looked out. The field was…. It looked like a million, a million and a half people."

The second is a video of White House Press Secretary Sean Spicer holding his first press conference yesterday, in which he delivered a statement blasting the media for allegedly underestimating and 'falsely reporting' the size of the crowds at President Trump's inaugural ceremony.

Lastly, there is the full interview between Kellyanne Conway and Chuck Todd on Meet the Press this morning, which degenerated into a heated exchange about Spicer trying to litigate provable falsehoods in his first press conference. Conway said that Spicer was just presenting "alternative facts." Chuck Todd responded by saying, "Alternative facts are not facts. They are falsehoods." This afternoon, the *Washington Post* summed up this exchange in an article, "How Kellyanne Conway ushered in the era of 'alternative facts.'"

Once you go down the rabbit hole of believing alternative facts or fake news, it's easy to feel like a character in *Alice in Wonderland*, peering through the looking-glass into a surreal world where the line between truth and fiction is constantly blurred. Lewis Carroll sums up the conundrum of Alice living in a fact-averse parallel universe: "Alice laughed: "There's no use trying," she said; "one can't believe impossible things." "I daresay you haven't had much practice," said the Queen. "When I was younger, I always did it for half an hour a day. Why, sometimes I've believed as many as six impossible things before breakfast." "If I had a world of my own, everything would be nonsense. Nothing would be what it is, because everything would be what it isn't. And contrary wise, what is, it wouldn't be. And what it wouldn't be, it would. You see?" Carroll concludes, "Imagination is the only weapon in the war against reality."

That being said, hopefully, by having this new empirical evidence from van der Linden et al. on how to inoculate oneself against fake news, each of us can avoid catching the misinformation bug and survive the "war on reality" in the months and years ahead.

References

Sander van der Linden, Anthony Leiserowitz, Seth Rosenthal, and Edward Maibach. Inoculating the Public against Misinformation about Climate Change. *Global Challenges*.
DOI: 10.1002/gch2.201600008
true
true
true
Fake news has become an epidemic, with false stories often going viral. Fortunately, an international team of social psychologists recently identified a simple way to inoculate the public against fake news.
2024-10-13 00:00:00
2017-01-22 00:00:00
https://cdn2.psychologyt…pg?itok=4e1Ss-m_
article
psychologytoday.com
Psychology Today
null
null