Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
ASHRAE - Great Energy Predictor III | Thanks and 52nd place solution. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ASHRAE - Great Energy Predictor III
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I didn't create this topic to reveal a top-secret solution. It's basically a "thank you" to the people who create public kernels. But of course I will also tell you about my solution.</p>
<p>I didn't have real experience, and I took my ideas about data preparation from public kernels. I tried to build many different models using LGBM, CatBoost, XGBoost, regressions, etc., but their scores were not as good as the best public solutions. Then the leakage came and I got stuck: what could I do with the leakage? Summing up this stage of my work: I learned a great deal about visualization, data preparation, modeling and cross-validation.</p>
<p>Then I tried to make more use of the public kernels, and my first selected submission was a blend of public kernels: public score 0.952, private 1.286. With this solution I would have placed 493rd.
My second solution was a stack (LightGBM) of four models; all models in the ensemble were built with LightGBM. Along the way I made many mistakes and very often saw low public scores for my submissions, but in the end I got a public score of 0.959, private 1.25001, and 52nd place.</p>
<p>It was hard for me, but I learned a great deal during this competition, and first of all I learned a lot thanks to public kernels.
I thank everyone who was a teacher to me during this competition (even if they didn't know it :) ):
<a href="/rohanrao">@rohanrao</a> <a href="/purist1024">@purist1024</a> <a href="/yamsam">@yamsam</a> <a href="/gunesevitan">@gunesevitan</a> <a href="/nroman">@nroman</a> <a href="/iwatatakuya">@iwatatakuya</a> <a href="/ragnar123">@ragnar123</a> <a href="/khoongweihao">@khoongweihao</a> </p> |
ASHRAE - Great Energy Predictor III | 13th place gold solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ASHRAE - Great Energy Predictor III
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to ASHRAE for providing this unclean dataset! Thanks to Kaggle for continuing and keeping the competition alive. Congrats to everyone who managed to survive the LB shakeup!</p>
<p>Thanks to <a href="/rohanrao">@rohanrao</a>, <a href="/kailex">@kailex</a>, <a href="/nz0722">@nz0722</a>, <a href="/aitude">@aitude</a>, <a href="/purist1024">@purist1024</a> for your excellent notebooks which had direct impact on helping me achieve this outcome.</p>
<h2>Summary</h2>
<p>Final 2 submissions: equal weighted blend of the following plus some regularization.
1. <a href="https://www.kaggle.com/purist1024/ashrae-simple-data-cleanup-lb-1-08-no-leaks">https://www.kaggle.com/purist1024/ashrae-simple-data-cleanup-lb-1-08-no-leaks</a>
2. <a href="https://www.kaggle.com/rohanrao/ashrae-half-and-half">https://www.kaggle.com/rohanrao/ashrae-half-and-half</a> (credit for original work <a href="https://www.kaggle.com/kailex/ac-dc">https://www.kaggle.com/kailex/ac-dc</a>)
3. <a href="https://www.kaggle.com/aitude/ashrae-kfold-lightgbm-without-leak-1-08">https://www.kaggle.com/aitude/ashrae-kfold-lightgbm-without-leak-1-08</a>
4. <a href="https://www.kaggle.com/nz0722/aligned-timestamp-lgbm-by-meter-type">https://www.kaggle.com/nz0722/aligned-timestamp-lgbm-by-meter-type</a></p>
<p>public LB: 0.935 (1.032 w/o leak estimated) with aggressive regularization 0.80 and 0.91
public LB: 0.944 (1.039 w/o leak estimated) with conservative regularization 0.91 for all
public LB: 0.950 (1.045 w/o leak) no tricks - I did not select for submission</p>
<h3>What worked</h3>
<ol>
<li>Data Cleaning - garbage in, garbage out. This is probably the single most important aspect of the competition. I did this manually by plotting <a href="https://www.kaggle.com/ganfear/missing-data-and-zeros-visualized">heatmaps</a> and also going into each building's meter to inspect whether the target meter readings looked reasonable. I also reverse engineered the heat map to show only zeros by adding the following line: <br> <code>train_df = train_df.query('not (meter_reading != 0)')</code></li>
<li>Regularization - (this is what I'm calling it; maybe some will call it postprocessing, coefficients, tricks, etc.) - multiplying predictions by some value < 1.0. For aggressive regularization, I used two different values: 0.80 for responsive meters and 0.91 for less responsive meters. I probed each site's meters individually, starting by multiplying by 0.95; as I went through all the meters, I noted which meter or meters dropped the public LB by 0.001, and flagged those as responsive to regularization. I got the idea from the <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/94324">LANL Earthquake prediction</a> competition. Fortunately, I was confident that the LB would not shake me down if I tried to overfit the LB this way. I ran some tests on the leaked sites 0, 1, 2, 4, 15 to test each site's meters' response to various values; 0.90-0.95 seemed fairly safe values to start out with when I began probing. I gained about 0.015 on the LB through a series of 29 submissions, which took about 2 weeks. As my remaining submissions diminished, I ramped up the overall aggressiveness of the regularization and began probing the responsive sites' meters as a collective submission, because there weren't enough submissions to try all possible values. This gave between 0.004 - 0.005 reduction on the private LB score. Without any regularization, I would have ended up placing 35th on the private LB, so this trick definitely gave my scores the extra kick to finish in gold (a minimal sketch follows after this list).</li>
<li>Feature Engineering - In the half-and-half model, I added a feature that grouped <code>building_id</code>, <code>meter</code>, <code>weekday</code>, <code>hour</code> and mean target encoded it using full train (after data cleaning). I got the idea from this <a href="https://www.kaggle.com/mlisovyi/no-ml-benchmark">unassuming kernel</a>. In Kfold-lightgbm-without-leak-1-08 model, I added a feature that combined site and meter as a categorical with no mean encoding. I noticed that for some sites, the mean encoded <code>bm_week_hour</code> feature performed worse while others performed better, but overall, it seemed favorable.</li>
<li>Validating using sites 0,1,2,4,15 - using actual test ground truths for various sites individually and together helped to monitor whether my experiments improved out of sample test data.</li>
<li>Addition by Subtraction - removing certain features for certain models helped improve ground truth (GT) test validation as well as local cross validation (CV). For half-and-half model variant, I dropped <code>site_id</code>, <code>sea_level_pressure</code>, and <code>precip_depth_1_hr</code>. For kfold-lightgbm-without-leak-1-08 I dropped <code>site_id</code>, <code>sea_level_pressure</code>, <code>wind_direction</code>, <code>wind_speed</code>, <code>year_built</code>, <code>floor_count</code>. I removed holiday features for all existing models because they made CV and GT validation worse. For aligned-timestamp-lgbm-by-meter-type, I dropped all the lag3 features.</li>
</ol>
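<p>A minimal sketch of what the multiplicative "regularization" in point 2 might look like, assuming a standard <code>submission</code> DataFrame; the 0.80/0.91 factors are the ones quoted above, while the way rows are flagged as "responsive" (found by LB probing) is only a placeholder:</p>
```python
import pandas as pd

# Hypothetical sketch of the multiplicative "regularization" described above.
# Which meters count as "responsive" was determined by LB probing, so the mask
# below is just a placeholder.
submission = pd.read_csv('submission.csv')           # columns: row_id, meter_reading
responsive = submission['row_id'].isin([])           # placeholder: row_ids of responsive meters

submission.loc[responsive, 'meter_reading'] *= 0.80  # aggressive factor
submission.loc[~responsive, 'meter_reading'] *= 0.91 # conservative factor
submission.to_csv('submission_regularized.csv', index=False)
```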
<p>Notes: very little hyper-parameter optimization was performed. Just very limited basic tuning.</p>
<h3>What didn't work</h3>
<ol>
<li>Training using leaked test GT labels - training with GT from 2017-2018 did not improve out-of-sample <code>site_id</code> performance. For example: training with <code>site_id</code> 0 did not dramatically improve validation scores for sites 1, 2, 4, 15. I only performed that one test and realized that adding one <code>site_id</code>'s data to training doesn't improve LB scores for out-of-site validation, because each <code>site_id</code> is its own microcosm and behaves differently from other sites.</li>
<li>A lot of feature engineering did not work including weather features.</li>
<li>CatBoost/XGBoost/NN Embeddings/Linear Regression/Kernel Ridge Regression(KRR)/KNN all performed worse. I tried building 1449 (1 for each building) and 2380 (1 for each building's meter) Linear Regression, KRR, and LGBM models. I tried blending with Catboost and NN Embeddings models separately, but GT validation didn't seem to improve.</li>
<li>Training with full year of training data and validating using sites 0,1,2,4,15 didn't seem to help much.</li>
<li>Correcting site 0 meter 0 didn't help - <a href="https://www.kaggle.com/c/ashrae-energy-prediction/discussion/119261">discussion</a></li>
</ol>
<h3>Some experiments I tried</h3>
<ol>
<li>Using the half-and-half methodology to split train into 2 separate halves by month - middle months (4-9) / ending months (1-3, 10-12) - performed worse. Splitting into the first half of the day (hours 0-11) and the later half of the day (hours 12-23) performed surprisingly well, and GT validation showed this; however, I didn't select it for the final submission blend. <a href="https://www.kaggle.com/teeyee314/best-single-lgbm-lb-1-08-morning-evening">notebook</a></li>
<li>L2 model - I stacked a model on top of individual model predictions (for the same folds), which only worked for half-and-half model. I left this out of final blend. I also tried ensembling through stacking final models using LGBM but there didn't seem to be any improvement in GT validation.</li>
</ol>
<h3>What I didn't try</h3>
<ol>
<li><a href="https://www.kaggle.com/rohanrao/ashrae-divide-and-conquer">Divide and Conquer notebook</a> - I didn't bother with this notebook at all so I can't tell whether it was any good. Looking back now, I should have at least tried playing around with it. It is possible it could have helped reduce variance like <a href="/rohanrao">@rohanrao</a> mentioned.</li>
<li>Different blending methods, different weights, etc. - I just kept it simple (introduced no additional complexity or bias)</li>
<li>Making submission with certain models in final blend due to limited submissions. Namely I wanted to submit my blend with NN Embeddings, but based on GT validation, I filtered out a lot of potentially better performing blends. It is possible that sites 0,1,2,4,15 did not represent other sites well enough, but that was the trade off I had to make. </li>
</ol>
<p>Notes: I mention these because they may have been the difference in finishing closer to the money. </p>
<h3>What I learned</h3>
<ol>
<li>I took my mean target encoding game to a new level with this competition. Before this, I was stuck at basic single categorical feature mean target encoding. Below is the code I used to do multi categorical feature mean encoding:
```python
bm_cols = ['building_id', 'meter', 'weekday', 'hour']
df_train['hour'] = df_train['timestamp'].dt.hour
df_train['weekday'] = df_train['timestamp'].dt.weekday
bm = df_train.groupby(bm_cols)['meter_reading'].mean().rename('bm_week_hour').to_frame()
df_train = df_train.merge(bm, right_index=True, left_on=bm_cols, how='left')
```</li>
</ol>
<h3>Final submissions used for blending</h3>
<p><a href="https://www.kaggle.com/teeyee314/best-single-lgbm-lb-1-08">half-and-half variant</a>
<a href="https://www.kaggle.com/teeyee314/kfold-lightgbm">kfold-lightgbm-without-leak-1-08 variant</a>
<a href="https://www.kaggle.com/teeyee314/aligned-timestamp-lgbm-by-meter-type-1-09">aligned-timestamp-lgbm-by-meter-type variant</a></p>
<h3>Final thoughts</h3>
<p>It took me a year to finally reach a gold model. I was an absolute beginner when I started and I've learned a lot since joining Kaggle. I hope to continue competing and learning from all the bright and inspiring people on here. Although the kernels I blended with are all public, there were a lot of details that led to obtaining a gold model. Most notably, stable validation and logging each experiment meticulously so that after a few weeks or near the end, I wouldn't forget what led to improvements and what did not.</p> |
ASHRAE - Great Energy Predictor III | 2nd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ASHRAE - Great Energy Predictor III
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our team finished 2nd on private LB (12th on public). The private LB is finally released officially.</p>
<p>So happy and pumped up for winning in the money for the first time on Kaggle (after 6 years). Santa has been kind this year :-)
Our team will be kind too and share our complete solution.</p>
<h2>Solution Architecture:</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F113389%2F963f058c2cda6869d3684d85e9215e1d%2FScreenshot%202019-12-28%2017.31.15.png?generation=1577534626938889&alt=media" alt=""></p>
<p><strong>XGB:</strong> XGBoost
<strong>LGBM:</strong> LightGBM
<strong>CB:</strong> Catboost
<strong>FFNN:</strong> Feed-forward Neural Network</p>
<h2>Short version</h2>
<ul>
<li><strong>Remove noise (Very important)</strong></li>
<li>Very few and basic features (For stability)</li>
<li>Optimize models for each site+meter (For site-specific patterns)</li>
<li>Ensemble of XGBoost, LightGBM, CatBoost, NeuralNetwork (To reduce variance)</li>
<li><strong>Postprocessing (Very critical)</strong></li>
<li>Leak insertion (Sucks, but probably doesn't matter)</li>
</ul>
<p><strong>Final Ensemble (approximate):</strong> 30% XGB-bagging + 50% LGBM-bagging + 15% CB-bagging + 5% FFNN</p>
<p>Many variations of XGB, LGBM, CB were bagged: at site+meter level, at building+meter level, at building-type+meter level. Bagged XGB gave the best results among the boosting methods.</p>
<p>FFNN was used only for meter = 0.
It gave very poor results for other meters and didn't add value to ensemble.
Also, FFNN was very poor for site-14 so we didn't use it and hence that tile is missing from the models section in the architecture diagram :-)</p>
<p>Our solution was built heavily on <a href="/oleg90777">@oleg90777</a> 's base XGB/LGBM setup (which scores 1.04 on LB without leak) and our key points were cleaning the data and post-processing the predictions (validated on leaked data and LB). Read more about it <a href="https://www.kaggle.com/c/ashrae-energy-prediction/discussion/123528">here</a>.</p>
<p>The final ensemble scores almost the best on public LB, on leaked data as well as private LB, so hopefully it is robust and useful.</p>
<h2>Long version</h2>
<h3>Pre-Processing</h3>
<p>A lot of the low values of the target variable seem to be noise (as discussed multiple times in the forums, specifically for site-0) and removing these rows from the training data gives a good boost in score which has been done by several other competitors too.</p>
<p>It was the most time-consuming task, as we visualized and wrote code to remove these rows for each of the 1449 buildings manually. We could have used a set of heuristics, but that is not optimal due to some edge cases, so we just decided to spend a few minutes on every building and remove the outliers.</p>
<h3>Feature Engineering</h3>
<p>Due to the size of the dataset and difficulty in setting up a robust validation framework, we did not focus much on feature engineering, fearing it might not extrapolate cleanly to the test data. Instead we chose to ensemble as many different models as possible to capture more information and help the predictions to be stable across years.</p>
<p>Our models barely use any lag features or complex features. We have less than 30 features in our best single model. This was one of the major decisions taken at the beginning of our work. From past experience it is tricky to build good features without a reliable validation framework.</p>
<h3>Modelling</h3>
<p>We bagged a bunch of boosting models XGB, LGBM, CB at various levels of data: Models for every site+meter, models for every building+meter, models for every building-type+meter and models using entire train data. It was very useful to build a separate model for each site so that the model could capture site-specific patterns and each site could be fitted with a different parameter set suitable for it. It also automatically solved for issues like timestamp alignment and feature measurement scale being different across sites so we didn't have to solve for them separately.</p>
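<p>As an illustration only (not the team's actual code), training one model per site+meter combination might look like the sketch below, assuming pandas DataFrames <code>train</code>/<code>test</code>, a <code>features</code> list and a log1p-transformed target column already exist:</p>
```python
import numpy as np
import lightgbm as lgb

# Illustrative sketch: one LightGBM model per (site_id, meter) combination.
# `train`, `test`, `features` and `log_meter_reading` are assumed to exist.
preds = np.zeros(len(test))
for (site, meter), idx in train.groupby(['site_id', 'meter']).groups.items():
    sub = train.loc[idx]
    model = lgb.LGBMRegressor(n_estimators=1000, learning_rate=0.05)
    model.fit(sub[features], sub['log_meter_reading'])

    mask = ((test['site_id'] == site) & (test['meter'] == meter)).values
    preds[mask] = model.predict(test.loc[mask, features])
```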
<p>Ensembling models built at different levels was useful for improving the score. Just bagging with different seeds didn't help much.</p>
<p>Site-level FFNN was used only for meter = 0. Each site had a different NN architecture.
It gave very poor results for other meters and didn't add value to ensemble.
Also, FFNN was very poor for site-14 so we didn't use it and hence that tile is missing from the models section in the architecture diagram :-)</p>
<p>For tuning of all models, we used a combination of 4-fold and 5-fold CV on month from training data as well as validation on leaked data.</p>
<h3>Post-Processing</h3>
<p>We have shared our post-processing experiments in another thread: <a href="https://www.kaggle.com/c/ashrae-energy-prediction/discussion/123528">https://www.kaggle.com/c/ashrae-energy-prediction/discussion/123528</a></p>
<p>Since we remove a lot of low-value observations from the training data, this artificially increases the mean of the target variable, and hence the model's raw predictions on the test data also have an inflated mean. Since RMSE is optimal at the true mean value, scaling the test predictions down by a reducing factor helps bring them back toward the true mean, thus improving the score.</p>
<p>We tried a range of post-processing values and finally ended up using 0.8 - 0.85 for most models.</p>
<h3>Ensembling</h3>
<p>Our best single type model was XGB but LGBM was very close and CB was not very bad either. All scored in the range of 1.04 - 1.06 on the public LB without leak.</p>
<p>Since FFNN was built only for meter = 0, we ensembled differently for every site+meter combination using a weighted average where the weights were determined using a combination of CV score, LB score, Leak score and intuition.</p>
<p><strong>Final Ensemble (approximate) for meter = 0:</strong> 30% XGB-bagging + 50% LGBM-bagging + 15% CB-bagging + 5% FFNN
<strong>Final Ensemble (approximate) for meters 1, 2, 3:</strong> 30% XGB-bagging + 50% LGBM-bagging + 20% CB-bagging</p>
<p>The final ensemble scores almost the best on public LB, on leaked data as well as private LB, so hopefully it is robust and useful.</p>
<h3>Leak</h3>
<p>We used leak data primarily for local validation and for inserting into the test data as many competitors did. We didn't use any leaks outside of sites 0, 1, 2, 4, 15.</p>
<p>Since our core models were at site+meter level, we didn't explore leveraging the leaked data as additional train data.</p>
<h3>Team</h3>
<p>Shout out to my <strong>cHaOs</strong> team-mates <a href="/oleg90777">@oleg90777</a> (one of the best team leaders I've worked with), <a href="/berserker408">@berserker408</a> and <a href="/isanton">@isanton</a>.</p>
<h3>Code</h3>
<p>We will be happy to share our entire code if ASHRAE / Kaggle can confirm if we can. No timeline / commitment on this.</p>
<h3>Credits</h3>
<ul>
<li>ASHRAE and Kaggle for hosting this competition.</li>
<li>Competitors who scraped data and made it public. You are winners too.</li>
<li>Kaggle admins for working hard to make the best out of the leak situation.</li>
</ul> |
NFL Big Data Bowl | Public 17th place overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>[Data standardization]</p>
<p><code>
df['S'] = df['Dis']*10
</code></p>
<p>Host says that:</p>
<p><code>
Dis measures time covered in the most recent window of player tracking data.
Given that tracking data roughly covers 10 frames per second,
Dis corresponds to distance traveled in the recent 0.1 seconds.
Note that speed and acceleration are directly calculated using Dis (this is done in the data pre-processing)
</code></p>
<p>[Features]
I created about 300 features.
・N seconds later, the minimum distance between the rusher and other players
・N seconds later, the minimum (distance/S) between the rusher and other players
・N seconds later, how many players are around the rusher (square, circle)
...etc</p>
```python
# N seconds later player's position
df['X_after_N_seconds'] = df['X'] + df['S'] * np.cos(np.deg2rad(df['Dir'])) * N
df['Y_after_N_seconds'] = df['Y'] + df['S'] * np.sin(np.deg2rad(df['Dir'])) * N
```
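<p>Building on the projected positions above, the "N seconds later minimum distance between rusher and other players" feature could be computed roughly as in this sketch (column names such as <code>NflId</code>/<code>NflIdRusher</code> follow the competition data; this is an illustration, not the author's code):</p>
```python
import numpy as np

# Sketch: minimum distance between the rusher and all other players,
# N seconds in the future, for a single play's DataFrame `play_df`.
def min_dist_after_n_seconds(play_df, n):
    d = play_df.copy()
    d['Xn'] = d['X'] + d['S'] * np.cos(np.deg2rad(d['Dir'])) * n
    d['Yn'] = d['Y'] + d['S'] * np.sin(np.deg2rad(d['Dir'])) * n
    rusher = d[d['NflId'] == d['NflIdRusher']].iloc[0]
    others = d[d['NflId'] != d['NflIdRusher']]
    dist = np.sqrt((others['Xn'] - rusher['Xn']) ** 2 + (others['Yn'] - rusher['Yn']) ** 2)
    return dist.min()
```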
<p>[Modeling]
・5 fold with stratified kfold
・Binning target and treat as classification</p>
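<p>"Binning target and treat as classification" refers to the common trick in this competition of turning the yards gained into one of 199 classes (-99 to 99) and predicting a probability distribution over them; the metric is then the CRPS of the cumulative distribution. A rough sketch (not the author's code):</p>
```python
import numpy as np

# Sketch: encode Yards as a class index 0..198 and score predicted
# class probabilities with the competition's CRPS.
def yards_to_class(yards):
    return int(yards) + 99

def crps(probs, yards):
    # probs: (n_plays, 199) predicted probabilities; yards: (n_plays,) true gains
    cdf = np.cumsum(probs, axis=1)
    target = np.zeros_like(cdf)
    for i, y in enumerate(yards):
        target[i, yards_to_class(y):] = 1.0   # step function at the true yardage
    return ((cdf - target) ** 2).mean()
```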
<p>・NN ... CV 0.01234
Structure is here.</p>
<p><code>
x = Concatenate(axis=1)([category_features_embedding, num_features])
x = Dense(150, activation='softplus')(x)
x = Dropout(0.5)(x)
x = Dense(100, activation='softplus')(x)
x = Dropout(0.25)(x)
predictions = Dense(num_classes, activation='softmax')(x)
</code></p>
<p>・Lightgbm ... CV 0.0125
Use conservative parameters.
ex 'num_leaves': 10, 'max_depth': 3, 'min_data_in_leaf': 150, 'max_bin': 64</p>
<p>[Postprocess]
This gives +0.00005 on the LB.
For example, if the yardline is 30, the possible gained yards are within -30 to 70.</p>
```python
max_yard = 99 + 70
min_yard = 99 + -30

pred[max_yard-1] += pred[max_yard:].sum()
pred[max_yard:] = 0

pred[min_yard+1] += pred[:min_yard].sum()
pred[:min_yard] = 0
```
<p>[Final submission]
Averaging 2 NN and 1 lightgbm.</p>
<p>[Doesn't work]
・Predicting outliers
I think predicting long Yards exactly would give a big jump,
but I couldn't manage it.</p>
<p>・Using A of season 2017
This caused overfitting for me.
How can I deal with A of 2017...?</p>
ASHRAE - Great Energy Predictor III | First Silver Journey | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ASHRAE - Great Energy Predictor III
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks, Kaggle and ASHRAE, for hosting this competition. Maybe my place isn't high enough to post my view, and in later competitions I won't post unless I'm in the top 20-30, but this was my second participation and my first silver! So I have overwhelming excitement and just have to share my experience with such brilliant minds.</p>
<h3>Key points:</h3>
<p>• CV and final model
• Clearing dataset
• Feature engineering
• Sadly, didn’t try to postprocess</p>
<h3>CV and final model</h3>
<p>For feature selection and engineering in CV I only used the leaked data from sites 0, 1, 2, 15: 2 folds (split by months), comparing predictions for 2017-2018 with the real readings from the leak data.
With the right parameters it is fast, and you can actually see whether a feature is useful or not. Yes, you can overfit to these sites, but I created only general features (at least I tried), checked performance a few times on the public LB and decided to use this approach.</p>
<p>For the final model I excluded sites 0, 1, 2, 15 and trained without them with the same 2-fold strategy. It improved the private score from 1.266 to 1.262.
I wish I had had time to add CatBoost, but this time I used only 3 LightGBM models with different seeds and the same parameters, and blended them with 3 top public kernels without leaks, each with a slightly different strategy from one another to add diversity. </p>
<p>o <a href="https://www.kaggle.com/roydatascience/ashrae-energy-prediction-using-stratified-kfold">Ashrae Energy Prediction using Stratified KFold</a>
o <a href="https://www.kaggle.com/mimoudata/ashrae-2-lightgbm-without-leak-data">ASHRAE 2*lightGBM without Leak Data</a>
o <a href="https://www.kaggle.com/purist1024/ashrae-simple-data-cleanup-lb-1-08-no-leaks">Ashrae: simple data cleanup (LB 1.08 no leaks)</a></p>
<p>Blend weights: 0.66 for my 3 models, rest for public kernels. </p>
<p>My best LGBM parameters:</p>
<p><code>
{'num_leaves': 500,
'objective': 'regression',
'learning_rate': 0.04,
'boosting': 'gbdt',
'subsample': 0.4,
'feature_fraction': 0.7,
'n_jobs': -1,
'seed': 19,
'metric': 'rmse'}
</code></p>
<h3>Clearing dataset</h3>
<p>I used <a href="https://www.kaggle.com/purist1024/ashrae-simple-data-cleanup-lb-1-08-no-leaks">Robert Stockton</a>'s kernel and learned a lot, thanks!
I manually cleaned some of the worst readings by meter type, by RMSLE error, and for buildings in sites 9, 6, 10.
I think I should not have done some of the manual cleaning based only on my intuition instead of an actual understanding of what those outliers mean. Maybe I cleaned what seemed like an outlier to me but wasn't, and it was present in the test data. I think so because I have 2 models which score higher than my final submission, and the best one is a single model without the late manual cleaning that scores 1.25 on private.</p>
<h3>Feature engineering</h3>
<p>I manually checked all existing features, features from kernels, features from my own ideas, aggregations, and frequency encodings, and found only a few of them useful. I deleted most of the initial features.
I didn't do any imputations (the score decreased in CV), converted all categorical values to the category type, and converted square feet and meter readings to log1p.
I even downloaded external weather history for each site, with weather conditions etc., but it didn't improve my CV score so I decided not to use it. </p>
<hr>
<p>I had a lot of fun struggling through and learned a lot. I hope you are having a wonderful day, and I'm looking forward to seeing all of you in the next competition!</p>
NFL Big Data Bowl | Public LB 9th Place solution highlights | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Pitch control</h1>
<p>Using <a href="/statsbymichaellopez">@statsbymichaellopez</a> VIP hint helped a lot. <a href="http://www.lukebornn.com/papers/fernandez_ssac_2018.pdf">Here</a> is the paper.</p>
<p>To implement it, I used <a href="https://www.kaggle.com/pednt9/vip-hint-coded">this</a> good kernel (thank you <a href="/pednt9">@pednt9</a>) and made it run under 10ms per play for 224 points in front of rusher.</p>
<p>I did not keep track of all improvements on LB, but to be safe, this gave me between 0.00010 and 0.00030 boost.</p>
<h1>Tabular model</h1>
<p>My model is a derivation of the standard TabularModel from Fastai with several submodules as described here:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2949020%2F0bc38ace700a464e95f5638e12933ed6%2FUntitled%20Diagram%20(1).svg?generation=1574941608426908&alt=media" alt=""></p>
<h1>Loss</h1>
<p>My loss activation is <a href="https://pytorch.org/docs/stable/nn.html#softplus">softplus</a>, then I normalize the output so it sums to 1 and backpropagate CRPS.</p>
<p><code>
inps = softplus(inps)
inps = (inps / inps.sum(1).unsqueeze(-1)).cumsum(1)
return (inps - targ).pow(2).mean()
</code></p>
<h1>Rusher heading north-east</h1>
<p>A lot of competitors flipped plays so they all happen left to right. I also flipped them so all rushers are heading toward increasing Y. The latter gave me a 0.00010 LB boost.</p>
<h1>Biggest regret</h1>
<p>Not cleaning S nor A as <a href="/wimwim">@wimwim</a> explains in his <a href="https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119314#latest-683357">solution overview</a>. I guess this is a beginner mistake to not analyse initial features well enough and not catch inconsistencies. Lesson learned !</p> |
Peking University/Baidu - Autonomous Driving | 9th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Peking University/Baidu - Autonomous Driving
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I thank the host Peking University/Baidu and Kaggle team for holding this attractive competition, and congrats to all prize and medal winners.
Here is a brief summary of my solution (under construction).</p>
<h1>Approach</h1>
<p>I used two independent models. The first model (Model A) detects cars and estimates their orientations. The second model (Model B) estimates depth map of each image. Using image coordinate (ix, iy) from Model A, depth (z) from Model B, and camera intrinsic, 3D coordinate (x, y, z) of each car is calculated. Estimating accurate depth is harder than car detection or orientation estimation. Thus I separated depth part to different dedicated model (Model B).</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F745525%2F0f0e8433bfc2d1baf24b657833a5ac6d%2Fpku.png?generation=1579657768348051&alt=media" alt=""></p>
<h1>Model A</h1>
<p>Similar to CenterNet, but there are some modifications in targets and losses.</p>
<h2>Targets</h2>
<p>Model A detects cars, estimates their image coordinate (ix, iy)(not 3D camera coordinate (x, y, z) required for submission), and (yaw, pitch, roll).
Therefore, the targets are (conf, dx, dy, sin(pitch), cos(pitch), yaw, roll).
conf is confidence map for detecting car centers. (dx, dy) is relative position of car center in feature grid (0-1).
(yaw, pitch, roll) is the local orientation, not the orientation in the camera coordinate system that is given as ground truth. I used the local orientation because the original ground-truth orientation is hard to estimate from car appearance without context (camera position and image coordinate).</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F745525%2Fa34f2f2cd8c17f49c5c89b2db92bd5c2%2F2020-01-22%2010.32.44.png?generation=1579657792448454&alt=media" alt=""></p>
<p>[1] A. Mousavian et al., "3D Bounding Box Estimation Using Deep Learning and Geometry," in Proc. of CVPR, 2017.</p>
<p>This modification of orientation is done by rotation matrix that moves camera center to target car center:</p>
<p><code>
yaw = 0
pitch = -np.arctan(x / z)
roll = np.arctan(y / z)
r = Rotation.from_euler("xyz", (roll, pitch, yaw))
</code></p>
<h2>Losses</h2>
<p>Cross entropy is used for conf, and L2 loss is used for the other targets.</p>
<h2>Architectures</h2>
<p>I separately trained different models as Model A (four models) to detect different sizes of cars by grouping cars according to their depth; 0-25, 20-50, 40-80, and 70-180. For the former two groups (closer cars), efficientnet or se-resnext is used to get x32 downsampled feature map. For the latter two groups, segmentation models (efficientnet or se-resnext + FPN) are used from segmentation_models.pytorch to get finer feature maps (x16).</p>
<h2>Augmentations</h2>
<p>Augmentation is a difficult part; it is related to how the ground truth of zoomed or flipped test images is created.</p>
<ul>
<li>flip (around principal point instead of image center: img[:, :3374] = cv2.flip(img[:, :3374], 1))</li>
<li>rotation (around principal point; img = np.array(Image.fromarray(img).rotate(theta, center=(1686.2379, 1354.9849))) (-7 to 1 degrees)</li>
<li>RandomBrightnessContrast</li>
<li>Random scaling and crop</li>
</ul>
<h2>Optimizers</h2>
<p>Trained for 160 epochs with Adam; LR = 0.0001 and decreased by 0.1 at epoch 100 and 140.</p>
<h2>Ensemble</h2>
<p>Several k-fold models are integrated at feature map level (model raw outputs are averaged), but it seems to not work (why...?)</p>
<h1>Model B</h1>
<h2>Input</h2>
<p>For Model B, a fixed area of the input images is used for training and test: img[1558:, 23:3351]. Also, relative image coordinates from the principal point are added to the input in order to exploit the context of the fixed camera position against the ground (thus, the input is a gray image plus dx, dy).</p>
<h2>Targets and Losses</h2>
<p>The second model predicts only the depth of cars. Actually, I trained Model B to predict (x, y, z) but used only the z information.
Thus, the target is the 3D car coordinate (x, y, z). The loss is calculated only from the pixels of the feature map where car centers exist.
The loss function used here is MSE normalized by the distance from the camera: ||pred - gt||_2 / ||gt||_2.
This choice comes from my assumption that the evaluation criterion for translation is relative rather than in meters.</p>
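<p>A minimal PyTorch sketch of that loss, assuming <code>pred</code> and <code>gt</code> are (N, 3) tensors of (x, y, z) gathered at the feature-map cells containing car centers (illustration only):</p>
```python
import torch

# ||pred - gt||_2 / ||gt||_2, averaged over the car-center cells.
def relative_translation_loss(pred, gt, eps=1e-6):
    return (torch.norm(pred - gt, dim=1) / (torch.norm(gt, dim=1) + eps)).mean()
```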
<h2>Architectures</h2>
<p>Segmentation models (efficientnet or se-resnext + FPN) are used to get feature maps (x16).</p>
<h2>Augmentations</h2>
<ul>
<li>flip (around principal point instead of image center: img[:, :3374] = cv2.flip(img[:, :3374], 1))</li>
<li>RandomBrightnessContrast</li>
</ul>
<h2>Optimizers</h2>
<p>Trained for 100 epochs with Adam; LR = 0.0001 and decreased by 0.1 at epoch 70.</p>
<h1>Questions</h1>
<p>After finishing the competition, several questions are still left for the participants. I would really appreciate it if the organizers could answer the following questions.</p>
<ul>
<li>How was ground-truth for flipped test image created? I guess simply done by x = -x, roll = -roll, pitch = -pitch and this is not accurate as discussed in <a href="https://www.kaggle.com/c/pku-autonomous-driving/discussion/123653">https://www.kaggle.com/c/pku-autonomous-driving/discussion/123653</a></li>
<li>How was ground-truth for the zoomed test images created? I guess it is done by z = z / scale_factor. The other option is to leave the ground truth as it is, but I think this is less appropriate as we do not know the new camera intrinsics.</li>
<li>What was evaluation metric? Discussed in <a href="https://www.kaggle.com/c/pku-autonomous-driving/discussion/124489">https://www.kaggle.com/c/pku-autonomous-driving/discussion/124489</a></li>
</ul> |
RSNA Intracranial Hemorrhage Detection | Tricks to boost from 0.66 to 0.49 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>trick 1: If you sort by [patient ID, 'imageposition2'] you will find that the labels are continuous.
trick 2: patient IDs have an overlap in stage 1 (this does not work at stage 2).</p>
<p>You can take this into feature engineering.
For example, patient ID with label encoding, groupby aggregations, and count encoding.
You can extract some time series features like lag, diff, next/last, etc.</p>
<p>All you need is LightGBM/XGBoost/CatBoost, or do some postprocessing.
I performed stacking with these features. It works well. </p>
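<p>A hypothetical sketch of the trick: sort slices within each patient by position, then derive lag/diff-style features from the first-level model's predicted probabilities (<code>df</code> and column names like <code>pred_any</code> are assumptions, not the author's exact code):</p>
```python
import pandas as pd

# Sketch: `df` is assumed to hold one row per slice with the patient id,
# the slice position and a first-level model probability `pred_any`.
df = df.sort_values(['PatientID', 'ImagePosition2']).reset_index(drop=True)
g = df.groupby('PatientID')['pred_any']
df['pred_prev'] = g.shift(1)                      # prediction for the neighbouring slice above
df['pred_next'] = g.shift(-1)                     # prediction for the neighbouring slice below
df['pred_diff'] = df['pred_any'] - df['pred_prev']
df['patient_mean'] = g.transform('mean')          # aggregation over the whole series
```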
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F983147%2Fd33b7efc54601e16ab18433c245b8c11%2Fleak.png?generation=1573702109414798&alt=media" alt=""></p>
<p>PS: I think seutao's sequence model is the best solution. All roads lead to Rome.</p> |
Understanding Clouds from Satellite Images | Finally GM & 1st time won prize money! And 3rd place solution. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>UPDATE: code is now available <a href="https://github.com/naivelamb/kaggle-cloud-organization">here</a>.</p>
<p>Thanks for Max Planck Institute for Meteorology and Kaggle for hosting such an interesting competition. Congrats to all the winners.</p>
<p>The key in my solution is training two segmentation models: <strong>seg1</strong> trained on all data with BCE loss, and <strong>seg2</strong> trained on non-empty images only with soft DICE loss. I think it works because this competition basically has two tasks: 1) detect the empty images; 2) predict accurate masks for the non-empty images. The two segmentation models address these two tasks respectively. </p>
<h2>How I come up with this.</h2>
<p>I started the competition with a resnet34-FPN using BCE loss (<strong>seg1</strong>). This model achieves ~0.608 on the LB, and the major contribution comes from capturing the empty masks: it captures ~80% of the empty masks. I tried a lot to improve the non-empty part, like using a combo loss of BCE and DICE, but it is hard to improve the neg-dice (dice score for the empty masks) and pos-dice (dice score for the non-empty masks) simultaneously.</p>
<p>To predict the non-empty mask accurately, I decided to train 4 individual segmentation models for the non-empty images and then ensemble them together. Since all the train images are non-empty, we can use soft DICE loss directly and the model would focus on predicting accurate masks. I used exactly the same network structure, resnet34-FPN (<strong>seg2</strong>). Then I simply replace all the non-empty predictions from <strong>seg1</strong> model using the predictions from ‘seg2’. Only 1 fold of this 2-stage segmentation pipeline, no TTA, no min-size remover, no classifier, no threshold adjustment (all 0.5) could achieve LB 0.652. After including a resnet34 classifier (0.5 threshold), I got LB 0.655. </p>
<p>Later on, I managed to train all 4 classes in one model by implementing pos-only soft DICE loss. The code looks like:</p>
```python
def dice_only_pos(logits, labels, labels_fc):
    # logits -> pixel level predictions
    # labels -> pixel level labels
    # labels_fc -> image/channel level labels
    pos_idx = (labels_fc > 0.5)
    neg_idx = (labels_fc < 0.5)
    loss = SoftDiceLoss()(logits[pos_idx], labels[pos_idx])
    return loss
```
<p>This loss only counts the non-empty channels and ignores all the empty channels.</p>
<p>In summary the pipeline looks like:
>1. <strong>seg1</strong>: a multi-label segmentation model trained with BCE loss
>2. <strong>seg2</strong>: a multi-label segmentation model trained with pos-only soft DICE loss
>3. <strong>cls</strong>: a multi-label classifier trained with BCE loss. </p>
<p>The final submission is achieved by the following steps:
>1. Get predictions using <strong>seg1</strong>
>2. Replacing the non-empty masks from <strong>seg1</strong> by predictions from <strong>seg2</strong>
>3. Removing more empty masks using <strong>cls</strong></p>
<p>Both pixel-level (segmentation) and image-level (classifier) thresholds are 0.5. </p>
<h2>Baseline results for the 2-stage segmentation</h2>
<p>Model summary:
>Network: Resnet34-FPN
>Image size: 384x576
>Batch size: 16
>Optimizer: Adam
>Scheduler: reduceLR for seg1, warmRestart for seg2.
>Augmentations: H/V flip, ShiftScalerRotate and GridDistortion
>TTA: raw, Horizontal Flip, Vertical Flip</p>
<p>Results:
>1-fold: 0.664
>5-fold + TTA3: 0.669
>5-fold + TTA3 + classifier: 0.670. </p>
<p><em>TTA1 means only raw images; TTA3 means raw + H/V flip.</em></p>
<p>The rest of my work is just trying different backbones to find the best one. My final models are:</p>
<p>>seg1: densenet121-FPN, TTA1
>seg2: b7-FPN, TTA3
>cls: b1, TTA1</p>
<p>Results:
>1-fold LB: 0.673
>5-fold LB: 0.6788</p>
<h2>Ensemble</h2>
<p>I ensembled multiple seg2 models using major vote. By including 4 models (b5-Unet, InceptionResnetV2-FPN, b7-FPN and b7-Unet), I achieved 0.6792 on LB. </p>
<h2>Pseudo Labeling</h2>
<p>I selected the pseudo labels based on a LB 0.6790 submission with the following rules:
>1. Empty channels with classifier prediction < 0.3
>2. Non-empty channels with classifier prediction > 0.7</p>
<p>An image is selected when all the 4 channels satisfy one of the conditions. 835 images are selected. I retrained the b7-FPN and b1-classifier including the pseudo labeling samples, and the final models are:
>seg1: densenet121-FPN, TTA1
>seg2: b5-Unet + InceptionResnetV2-FPN + b7-Unet + b7-FPN + b7-FPN-PL, TTA3
>cls: b1-PL, TTA3</p>
<p><em>PL means the model is retrained with pseudo labels</em></p>
<p>This model achieves 0.6794 LB. </p>
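<p>The pseudo-label selection rule above can be expressed compactly; this is only a sketch, assuming <code>cls_pred</code> is an (n_images, 4) array of classifier probabilities and <code>image_ids</code> the matching array of image names:</p>
```python
import numpy as np

# Keep an image only if every one of its 4 channels is confidently empty (<0.3)
# or confidently non-empty (>0.7); these become the pseudo-labeled samples.
confident = (cls_pred < 0.3) | (cls_pred > 0.7)
selected = confident.all(axis=1)
pseudo_image_ids = image_ids[selected]            # ~835 images in the author's case
```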
<p>On the last day, I decided to optimize the classifier thresholds channel-wise to achieve the best local CV, which gave me 0.6805 LB. </p>
<h2>Other things worth mentioning</h2>
<ol>
<li>My CV aligns pretty well with the LB. 1-fold CV = LB +- 0.005. 5-fold CV = LB - (0.010 ~ 0.012). This helps a lot during the model development.</li>
<li>Resizing the images before training could significantly reduce the training time. My resnet34-FPN could finish 1 epoch of training and validation in around 1 minute on a 2080Ti. </li>
<li>For <strong>seg1</strong> and <strong>cls</strong>, complicated networks do not work. This is probably due to the noisy labels. For <strong>seg2</strong>, I couldn't make seresnext50 and seresnext101 work and I have no idea why. </li>
</ol> |
2019 Data Science Bowl | Solution and some ideas | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, congrats to all winners! Was an interesting competition in the end, with not too much signal in the data and a very weirdly behaving metric. This is not a gold medal solution, maybe a few points are still interesting to some of you.</p>
<p>I joined the competition only a bit more than two weeks before end without any expectations. The main reason to jump in, was that I had a few ideas I wanted to check out on data like this using neural networks which is also what I started with.</p>
<p><strong>Neural network</strong></p>
<p>In the NFL competition I learned that you can use convolutional neural networks really well on non-temporal data by just using kernels of size 1. So my idea was to use them as a form of feature engineering / understanding, without me spending too much time on the engineering part myself. </p>
<p>I observed that a few things are important for predicting the success here. The two most important things are using information from previous assessments and previous other activities. So I generated two forms of sequences for each training data: sequence of all previous sessions, and sequence of all previous assessments. Each step in a sequence can then have multiple features, like one-hot-encoded or embedded title. So the assessment sequence could look like: assessment title 1, assessment title 2, etc.</p>
<p>After padding and reshaping, I tried to run LSTM and CNN on top, but quite quickly saw that there is little temporal information, so I just used CNN with kernel size 2 in the end for the session sequence, and kernel size 1 for the assessment sequence. I had as additional input the current assessment title.</p>
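<p>The kernel-size-1 idea can be illustrated with a tiny Keras model: a Conv1D with <code>kernel_size=1</code> applies a shared transformation to every step of the (padded) sequence, i.e. learned per-step feature engineering that assumes no temporal order, and global pooling then aggregates over the sequence. This is only an illustration of the concept, not the author's architecture:</p>
```python
from tensorflow.keras import layers, models

seq_len, n_feats = 32, 16                      # assumed padded length / per-step feature count
inp = layers.Input(shape=(seq_len, n_feats))
x = layers.Conv1D(64, kernel_size=1, activation='relu')(inp)   # per-step shared transform
x = layers.Conv1D(64, kernel_size=1, activation='relu')(x)
x = layers.GlobalAveragePooling1D()(x)         # order-agnostic aggregation
out = layers.Dense(1)(x)                       # e.g. regress the accuracy group
model = models.Model(inp, out)
model.compile(optimizer='adam', loss='mse')
```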
<p>After a bit of tuning, this NN scored around 0.520 on public LB without any threshold tuning. I always only did fast submission though and only used the training data externally. And then I probably made a mistake. </p>
<p>I decided to use my evaluation routine (more on that a bit later) on one of the public kernels to test it out. I was also fixed on the idea of using all the data available from test to train on, so I definitely wanted to train my model in the kernel. The public kernel then quickly scored 0.557 with my routine and the extra data added. So I thought: wow, I have a good setup, let me try to improve that. So I decided not to port the NN to the kernel, because my code was very memory heavy and I would have needed to work a few days on adjusting it and also check how it works with training on more data. So from this point on, I did not use this NN any longer, but I still believe it has potential if properly tuned and adjusted. What I also want to mention is that the public kernels had a lot of bugs, and it took me quite some time to find most of them. I think it is way better to start from scratch next time. It is the first time I started from some public kernels.</p>
<p><strong>CV and thresholds</strong></p>
<p>I believe I came up with a quite nice and robust CV setup including a nice way to optimize thresholds. So what I did was to use stratified group kfold, and final CV is based on the median score of a few thousand truncated samples. I believe a few evaluated their models like that. What I did with thresholds though, was to <strong>optimize the thresholds in a way that they optimize the median score of these truncated samples</strong>. I then used these thresholds to predict the test set. To improve the threshold optimization I initialized the Nelder Mead algorithm with the histogram of the target.</p>
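<p>A condensed sketch of that threshold search, assuming out-of-fold arrays <code>y_true</code>/<code>y_pred</code>; in the author's setup the objective was the median QWK over many truncated resamples rather than a single fold:</p>
```python
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import cohen_kappa_score

def apply_thresholds(pred, th):
    return np.digitize(pred, np.sort(th))          # map regression output to classes 0..3

def neg_qwk(th, y_true, y_pred):
    return -cohen_kappa_score(y_true, apply_thresholds(y_pred, th), weights='quadratic')

# Initialize the three cut points from the target histogram, then run Nelder-Mead.
init = np.percentile(y_pred, 100 * np.cumsum(np.bincount(y_true)) / len(y_true))[:-1]
res = minimize(neg_qwk, init, args=(y_true, y_pred), method='Nelder-Mead')
thresholds = np.sort(res.x)
```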
<p><strong>Features and models</strong></p>
<p>As said earlier, I don't think there is a tremendous amount of signal in the data. So I did not spend too much time on FE, even though I believe that some carefully crafted features can help quite a bit. In the end I used event codes, event ids, a few assessment related features and a handful extracted from event data. I tried other things like tfidf on json data etc. without too much success. I train on <strong>all samples</strong>, meaning also those extracted from test data.</p>
<p>I focused on LGB and Catboost in the end. With CatBoost I explicitly utilize the <code>has_time</code> parameter, which is perfect for this competition as it encodes categorical variables based on the time information it has, only using samples before that timepoint. I also explicitly added the <code>assessment_id</code> as a categorical variable, as local tests suggested that it would help me on the private LB if I added the previous records to training.</p>
<p><strong>Blending</strong></p>
<p>I tried to be as robust as possible, so I decided on the following schema. 15 times 10 fold, for each of those 15 bags fit catboost and lgb, blend them with rank average using catboost 25% and lgb 75%. Optimize truncated thresholds as above. Predict test based also on rank avg mean of all 10 folds and then finally do majority voting on all 15 bags. I am quite happy with that because I managed to pick one of my best private LB scores in the end.</p>
<p><strong>Crazy ideas that did not make it</strong></p>
<p>I had two "crazy" ideas. I think the first one is quite simple and should have worked, but I had a bug on the kernel at the last day and didn't select it even though it scored highest on private LB even with this bug. The idea is quite simple, fit a MinMaxScaler or QuantileTransformer on the test predictions, transform oof and test predictions with it. Then do the threshold optimization on oof, and then apply it to test. This brings them on a similar range and scale and has some benefits over ranking the predictions. In nearly all my local experiments with simulating test data, this improved the QWK on the test data. I think this can bring a few points, but I have to test it again.</p>
<p>The second idea involves again CNNs. The idea is to use CNNs to find the optimal thresholds for a sample. My idea was to do repeated subsampling of predictions on oof (can also be truncated) and calculate the optimal thresholds for these subsamples. The predictions of these subsamples are then the training data, and the thresholds the targets to predict. But again, you have no order on the training data but rather a set of predictions. So you can take a 1 kernel CNN and fit it on the set in order to predict the three thresholds optimizing something like MSE. Evaluation is then how close predicted thresholds are to the optimal thresholds. This actually also worked really well locally, but again I did not have time to tune it properly and port it to kernels. But might be worth a shot to test this further.</p>
<p>I tried really hard to find a good gamble the last few days using local test simulations, but I just could not find any. Ideas like sampling according to the test distribution etc. </p>
<p>This competition again is a good example of why you should not chase the public LB. I was also tempted to do it after jumping to position 50 or so, but relying on a robust CV setup simulating how test data looks like is usually the better idea.</p>
<p>Kernel: <a href="https://www.kaggle.com/philippsinger/lgb-catboost-17-10fold-15bag-3k-majority-rank-v4">https://www.kaggle.com/philippsinger/lgb-catboost-17-10fold-15bag-3k-majority-rank-v4</a>?</p> |
2019 Data Science Bowl | 4th private (7th public) place writeup, link to code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Many thanks to Booz Allen Hamilton for the great competition and to all participants for interesting ideas and discussions!</p>
<p>We jumped from 7th place on the public LB to 4th place on the private LB. The best submission we selected scored 0.561 private, 0.572 on the public LB; it was a blend of neural networks. The second one was a 3-level stack: 0.560 private LB, 0.566 public LB.</p>
<h1>Some ideas first</h1>
<ol>
<li>The test set can be labeled with the same procedure as a train one and can be used in training.</li>
<li>TfIdf on the sequence of events. Each event_id can be coded as title + event_code + correct_flag + incorrect_flag. After that we can look at an installation_id's history as a sequence of tokens and train tfidf on them. For the training procedure we apply the transformer to the actual sequence of tokens before the current assessment (see the sketch after this list).</li>
<li>Some clips and other titles are very important for estimating accuracy on the assessment. Maybe their order is not so important, but an RNN architecture can handle their presence in a user's history well enough.</li>
<li>We have a small amount of data and stability is much more important than the actual score. If changing the column order makes the score worse, then we are doing something wrong.</li>
</ol>
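<p>A sketch of idea 2 above; the <code>events</code> frame and its column names are assumptions for illustration only:</p>
```python
from sklearn.feature_extraction.text import TfidfVectorizer

# One token per event: title + event_code + correct/incorrect flags;
# one "document" per installation_id (its history before the current assessment).
def to_token(row):
    return f"{row['title']}_{row['event_code']}_{int(row['correct'])}_{int(row['incorrect'])}"

docs = (events.assign(token=events.apply(to_token, axis=1))
              .groupby('installation_id')['token']
              .apply(' '.join))

tfidf = TfidfVectorizer(max_features=300)
X_tfidf = tfidf.fit_transform(docs)
```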
<h1>Models</h1>
<h2>Neural network</h2>
<p>Tfidf features + RNN on title sequence (last 64) with some additional features:
1. Embedding of title dim=7.
2. Number of correct attempts during the title.
3. Number of incorrect attempts during the title.
4. Ratio of 2 and 3.
5. Log of time in seconds between starts of the titles.
6. Number of correct attempts in the previous title like that.
7. Number of incorrect attempts in the previous title like that.
8. Ratio of 7 and 6.
(9) I tried a lot to include counters as-is in the model. Finally, I gave up, but those models performed better on the private LB.
Most of the single networks were in the 0.56+ zone. The most interesting one for me has equal public and private scores:
<a href="https://www.kaggle.com/sergeifironov/bowl-stabilize-coefs-cntrs-all5">https://www.kaggle.com/sergeifironov/bowl-stabilize-coefs-cntrs-all5</a></p>
<h2>Tree based models</h2>
<p>LightGBM, XGBoost, CatBoost (details will be added soon).</p>
<h1>Stack</h1>
<p>0 level) NN folds in folds model (5 outer folds, 5 inner folds), lgbm, catboost.
1st level) MLP, Lightgbm.
2nd level) Ridge.
<a href="https://www.kaggle.com/c/data-science-bowl-2019/discussion/127312">https://www.kaggle.com/c/data-science-bowl-2019/discussion/127312</a></p>
<h1>Validation</h1>
<p>I wrote a lot in this topic: <a href="https://www.kaggle.com/c/data-science-bowl-2019/discussion/125001">https://www.kaggle.com/c/data-science-bowl-2019/discussion/125001</a>, but near the end of the competition I gave up trying to make it correlate with the public LB and used a very simple one without installation_id groups at all.</p>
<h1>What doesn’t work for us</h1>
<ol>
<li>Transformers, GPT-2 and BERT vectors trained to predict event_id, title, title+accuracy_group and so on. They were useless.</li>
<li>Graph NN. </li>
<li>Transformer on a sequence of events. It’s too fat for this small amount of data.</li>
</ol> |
2019 Data Science Bowl | 30th Place Write Up | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'd like to share what I learned from this competition with you.</p>
<h1>Our Approach(public 622th → private 30th)</h1>
<ul>
<li>Trust CV and LB.</li>
<li>LGB with about 500 features.</li>
<li>Some features were dropped by adversarial validation score (0.65 or so).</li>
<li>Train by regression and then optimize with nelder-mead.</li>
</ul>
<h1>Late Submission</h1>
<p>After some late submissions, I realized adversarial validation was useless and "Trust CV" was the best approach here.
Also, if I had used about 2k features, maybe I could have gotten a prize as well as a gold medal and made my teammates Grandmasters.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F317344%2Fd985dcd0bb2969058ad6d94d859c32b4%2Fimage%20(3).png?generation=1580450134030141&alt=media" alt="">
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F317344%2F310636e14f81322a1f7407b657cfe993%2Fimage%20(4).png?generation=1580450231030144&alt=media" alt=""></p>
<h1>How to get gold medal</h1>
<ul>
<li>Generate about 30k features.</li>
<li>Use same condition as evaluation for validation.</li>
<li>Trust only CV (after confirming whether we can trust the LB).
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F317344%2F378c588fc6bf87651928ffdf2c346a70%2FQWK_random_truncate.png?generation=1580452115832570&alt=media" alt=""></li>
<li>Use a lot of features until CV saturates.</li>
<li>Don't worry about Adversarial Validation after all.</li>
</ul> |
2019 Data Science Bowl | 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Booz Allen Hamilton, Kaggle and everyone for this wonderful competition. And also thanks to my teammate <a href="/oyxuan">@oyxuan</a> . Congratulations to all winners!</p>
<h1>1. Summary</h1>
<p>Our selected score is based on a single lightgbm (average on multi-seeds 5 fold). </p>
<p>The model score :
private qwk 0.568, public qwk 0.563
cv weighted qwk 0.591, cv weighted rmse 1.009</p>
<h1>2. Validation Strategy</h1>
<p>Early in the competition, we found that the LB score was unstable and had low correlation with the local CV, so we decided to focus on the local CV score only. We tried several ideas to make the local CV stable. Below are the two validation sets we used:</p>
<p><strong>2.1 GroupK CV</strong>: We use 5 times * 5-fold GroupK by installation_id, each time with a random GroupK split seed and random column order. However, QWK is still not so stable on our local CV, so we mainly look at the weighted RMSE when validating our ideas and ignore QWK. For the weighted loss, the weight is the sampling probability of each sample (we use the full data; for the test part, we calculate the expectation of the sampling probability as the weight). </p>
<p><strong>2.2 Nested CV</strong>: Usually, the GroupK CV above works well. When we think the GroupK CV's decision has low confidence (e.g. inconsistent with our common sense), we use another nested set for a double check: we simulate the train-test split on the local data, randomly selecting 1400 users with full history for the nested training and 2200 users with truncated history for the nested testing. We repeat this 50~100 times and calculate the mean score for validation (a minimal sketch follows).</p>
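<p>A minimal sketch of the nested-set construction described above; <code>rows_by_id</code> is assumed to map each installation_id to its assessment rows in chronological order (illustrative, not the exact code used):</p>
<p><code>python
import numpy as np

def nested_split(rows_by_id, n_train=1400, n_test=2200, seed=0):
    rng = np.random.RandomState(seed)
    ids = rng.permutation(list(rows_by_id))
    train_ids, test_ids = ids[:n_train], ids[n_train:n_train + n_test]
    # nested train: keep the full history of each sampled user
    train_rows = [r for i in train_ids for r in rows_by_id[i]]
    # nested test: truncate each user to one randomly chosen assessment
    test_rows = [rows_by_id[i][rng.randint(len(rows_by_id[i]))]
                 for i in test_ids if len(rows_by_id[i])]
    return train_rows, test_rows

# repeat 50-100 times with different seeds and average the validation score
</code></p>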
<h1>3. Feature Engineering</h1>
<p>Most of our time was spent on feature engineering. We generated around 20,000 features, and used the <a href="https://www.kaggle.com/ogrellier/feature-selection-with-null-importances">null importance method</a> to select the top 500 features. </p>
<ol>
<li><p>Lots of stats (mean/sum/last/std/max/slope) from true attempts ratio, correct true ratio, correct feedback ratio etc. Stats based on the same assessment or a similar game are the most important (similar game: we map each game to the corresponding similar assessment, since they are similar tasks)</p></li>
<li><p>We extract features from different parts of the child's history data: 1) the full history, 2) the last 5/12/48 hours, 3) from the last assessment to the current assessment. Since there is a shared-device phenomenon, adding information from the different parts may help the model.</p></li>
<li><p>Event interval features (next event timestamps - current event timestamps) : Stats (mean/last) of event interval groupby event_id / event_code. Several event interval features show high importance.</p></li>
<li><p>Video skip prop ratio : clip event interval / clip length provided by organizer. (Does the child skip the video? If so, when does he skip?)</p></li>
<li><p>Event data feature : Stats(mean/sum/last) of all numerical args in event data X event_id / event code combination. We get the combination and args type from the specs file.
eg. <code>event_code2030_misses_mean</code>.</p></li>
</ol>
<h1>4. Feature selection</h1>
<ol>
<li>Drop duplicate cols</li>
<li>Truncated adversarial validation to make sure there is no leak and no code errors, the mean adversarial AUC should be around 0.5.</li>
<li>Use the null importances method to select the top 500 features (a minimal sketch follows this list).</li>
</ol>
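<p>A minimal, simplified sketch of the null-importance idea from the linked notebook (model, number of shufflings and the selection rule are illustrative):</p>
<p><code>python
import numpy as np
import lightgbm as lgb

def importances(X, y, shuffle, seed=0):
    rng = np.random.RandomState(seed)
    target = rng.permutation(y) if shuffle else y
    model = lgb.LGBMRegressor(n_estimators=200, random_state=seed)
    model.fit(X, target)
    return model.feature_importances_

actual = importances(X, y, shuffle=False)
null = np.vstack([importances(X, y, shuffle=True, seed=s) for s in range(20)])
# keep features whose real importance clearly beats their null distribution
gain = actual / (null.mean(axis=0) + 1e-6)
top_features = X.columns[np.argsort(-gain)[:500]]
</code></p>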
<h1>5. Model</h1>
<ol>
<li><strong>Data augmentation</strong> : The model is trained on the full data (full train history plus the previous part of the test history; improves the score by about +0.002). </li>
<li><strong>Loss</strong> : We use rmse loss for training, and weighted rmse loss for validate. </li>
<li><strong>Threshold</strong> : Then use <a href="https://www.kaggle.com/naveenasaithambi/optimizedrounder-improved">Opitmizer Rounder</a> to optimize thresholds for weighted qwk.</li>
<li><strong>Ensemble</strong> : We just tried a simple blending method (0.8 * lightgbm + 0.2 * catboost); the private score is 0.570. Since the cv score was not improved, we did not select it for our final results.</li>
</ol>
<h1>Thanks for reading!</h1> |
2019 Data Science Bowl | Lessons learned from a 95 position drop (Public 11th place)... | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to my teammates @applehph, @fergusoci, @m7catsue, and @mahluo for all of their hard work over the past month. We had high hopes finishing the competition in gold contention with our public LB position of 11th place, however it didn't work out that way in the end. We still ended it with a silver medal, so I'm still very proud of how our team performed!</p>
<p>Like all of these competitions, you come away with something new to remember and apply to future competitions. For our team, I feel there are two distinct lessons that we learned: </p>
<p><strong>1) Never underestimate ensembling or stacking diverse models</strong>
Our final two submissions we chose were single LGB models. We made that decision mostly because we never saw good effects in our CV and LB when ensembling a variety of models (or even multiple different LGB models). However, the ensemble solutions we didn't pick actually turned out to perform best on the private dataset. This points to an important truth of simply relying on good practices. And ensembling is a good practice.</p>
<p><strong>2) A correlated CV and LB is crucial before any more work should be done</strong>
That leads me to point number 2. It is very difficult to make good decisions on features and best model parameters if you don't have a correlated LB and CV. We were unable to make a good decision on ensembling because we had trouble in this area. We don't need them to be exactly mirror images of each other, but when CV goes up we should expect to see LB go up. And if they don't, then figure it out...or rely on CV! </p>
<p>Anyways, we learned a lot and I'm excited to bring this knowledge into the next competition. Thank you Kaggle, congrats to everyone, and job well done!</p> |
2019 Data Science Bowl | 14th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, I would like to thank the host, Kaggle, and everyone for this great competition. I would like to share my deepest gratitude to my teammates <a href="/alijs1">@alijs1</a> <a href="/johnpateha">@johnpateha</a> <a href="/kazanova">@kazanova</a> . It was a really good time and joyful period for me. </p>
<p>I will briefly brief our solution, and if I miss something, my teammates can add in their perspective.</p>
<h2><strong>FEATURE ENGINEERING</strong></h2>
<p>Besides some popular features available in public kernels, we have some more custom features, such as ratio of good actions / all actions in Activity sessions, ratio of misclick or misdrag action / count actions (or session duration), count of some specific event codes since the previous Assessment session. The most interesting features, would be stats on the same assessment with regards to each data sample. This class of features helps tree models to converge quicker and reduce the importance of Assessment title.
As a separate solution, Marios (<a href="/kazanova">@kazanova</a>) crafted his own feature set which took into consideration the train/test mismatch. He can share more about this if needed.</p>
<h2><strong>DATA AUGMENTATION</strong></h2>
<p>It is surprising that we did not realize at first the possibility of augmenting the train data with a lot of test samples that have true labels. Whenever a test installation_id (hereinafter referred to as “id”) has more than 0 prior assessments, we can trim the user history and make extra training samples (a minimal sketch is shown below). This augmentation helped us a lot in the blend.</p>
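<p>A minimal sketch of this trimming idea; <code>accuracy_group_from_attempts</code> and <code>build_features</code> are placeholders for one's own label extraction and feature pipeline, not functions from the actual solution:</p>
<p><code>python
import pandas as pd

extra_samples = []
for ins_id, user in test.groupby('installation_id'):
    sessions = [s for _, s in user.groupby('game_session', sort=False)]
    for k, sess in enumerate(sessions):
        if sess['type'].iloc[0] != 'Assessment':
            continue
        label = accuracy_group_from_attempts(sess)   # None for the final, unlabeled assessment
        if label is None:
            continue
        history = pd.concat(sessions[:k])            # trim the history up to this assessment
        extra_samples.append((build_features(history, sess), label))
</code></p>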
<h2><strong>MODELLING</strong></h2>
<ul>
<li><strong>Model 1</strong>: Main Model for all assessments</li>
<li><strong>Model 2</strong>: Five separate models for each assessment, then concat result.</li>
</ul>
<p>Besides using all original data (17690 rows) as the main base model, we noticed that training 5 separate submodels for each type can give a boost if properly blended with the main base model. And since we also have the option of data augmentation, it results in 4 different training strategies in general. </p>
<ul>
<li><p><strong>Find threshold:</strong> We use a simple optimizer to find thresholds based on pure CV. People care too much about searching for the right thresholds, but we don't. Instead we care more about modelling and ensembling, so thresholding has less effect.</p></li>
<li><p><strong>Train with sample weights</strong>
We saw a significant LB boost when using appropriate sample weights in training. CV might not be boosted a lot, but LB is. We use the number of prior assessments as the criterion for assigning a weight to each sample. The problem here is that we cannot naively use histograms of this criterion between train/test to calculate weights. The issue is that samples from the same id are very similar, so the effect of each individual sample within a single id should drop. For instance, if the ratio of 0-prior-assessment samples in the train data is 1/4, and the ratio of 0-prior-assessment samples in the test data is 1/2, then we cannot simply assign weight=2 to all 0-prior-assessment samples in train, but a smaller value. In the end, we did not come up with a theoretically concrete strategy for getting the weights, and just roughly estimated them. We chose [1.65 , 1.09, 0.87, 0.77, 0.57, 0.47] as the weights for samples with 0-prior, 1-prior, 2-prior, 3-prior, 4-prior, and more-than-4-prior assessments, respectively.</p></li>
</ul>
<h2><strong>ENSEMBLING</strong></h2>
<ul>
<li><strong>Blend by Classifier Logic</strong>
We found a nice way to combine the main model result with 5-submodel result. We trained 3 simple classifiers with AUC loss: A) classify between class 0/1, B) between class 1/2, and C) between class 2/3. Then we use the following custom logic to combine 2 float predictions of model 1 and model 2 to class label: </li>
</ul>
<p>| Abs(model1_int – model2_int) | model1_int | model2_int | Classifier | Result |
| --- | --- | --- | --- | --- |
| 0 | | | | model1_int |
|1| 0 (or 1)| 1 (or 0)| A >= 0.2|1|
|1| 0 (or 1)| 1 (or 0)| A < 0.2 |0|
|1| 1 (or 2) |2 (or 1) |B >= 0.5| 2|
|1 |1 (or 2) |2 (or 1) |B < 0.5 |1|
|1| 2 (or 3)| 3 (or 2)| C >= 0.85| 3|
|1 |2 (or 3)| 3 (or 2)| C < 0.85 |2|
|> 1 | | | |(model1_int + model2_int) / 2|</p>
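<p>A minimal sketch of the table above in code (the rounding in the last row is an assumption for odd sums; not the exact implementation):</p>
<p><code>python
def combine(m1, m2, a, b, c):
    # m1, m2: integer labels from model 1 and model 2
    # a, b, c: probabilities from the 0-vs-1, 1-vs-2 and 2-vs-3 classifiers
    if m1 == m2:
        return m1
    if abs(m1 - m2) > 1:
        return round((m1 + m2) / 2)      # assumption: round the average to an integer label
    pair = {m1, m2}
    if pair == {0, 1}:
        return 1 if a >= 0.2 else 0
    if pair == {1, 2}:
        return 2 if b >= 0.5 else 1
    return 3 if c >= 0.85 else 2
</code></p>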
<ul>
<li><strong>Stack</strong>
Stacking also worked for us, both in CV and LB. As a result, we chose 1 final submission for the classifier logic, and the other 1 for stacking. For stacking, we tried 2 approaches: 4 stackers average, and extra-tree regressor. The latter performed better in CV and private LB, but we did not choose it and instead chose the blend of classifier logic + stack, which is bad in private LB.</li>
</ul>
<h2><strong>WHAT WORKED IN PRIVATE LB BUT NOT PUBLIC LB</strong></h2>
<ul>
<li>Blend by histogram matching (use the prediction histogram of the best public LB submission to rectify the private test predictions; see the sketch after this list): very bad public LB, but very good private LB. </li>
<li>Extra Trees Regressor Stacking.
We would have finished "In The Money" zone if we chose this submission. However we don't regret.</li>
</ul>
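<p>A minimal sketch of one way to do the histogram-matching blend mentioned above (rank-based; <code>target_counts</code> would come from the reference submission's label histogram and must sum to the number of samples — an illustrative reading, not necessarily the exact method):</p>
<p><code>python
import numpy as np

def match_histogram(raw_scores, target_counts):
    # assign labels 0..3 so the label histogram matches target_counts,
    # keeping the ordering given by the raw scores
    order = np.argsort(raw_scores)
    labels = np.zeros(len(raw_scores), dtype=int)
    start = 0
    for cls, n in enumerate(target_counts):
        labels[order[start:start + n]] = cls
        start += n
    return labels
</code></p>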
<h2><strong>WHAT DID NOT WORK</strong></h2>
<ul>
<li>Ranking average the predictions.</li>
<li>Pseudo labels from unused train ids. Indeed, we observed a high CV boost when using pseudo samples from unused train ids, but LB decreased. We suspect we did not do it properly due to our code’s complexity.</li>
</ul>
<h2><strong>WHAT WE DID NOT FINISH IN TIME</strong></h2>
<ul>
<li>We also developed an RNN model, which has CV 0.53x. This RNN takes as input two kinds of features: 1) the sequence of sessions as sequential data. Each session’s features are just count of different event codes. And 2) dense features which is same as those in LGB modelling. Indeed, this model can contribute in the blending, but we only finished this in the last day, so it was hard to combine into the code. We believe it would boost our score significantly.</li>
</ul>
<h2><strong>WHAT DISTILLED IN MY MEMORY</strong></h2>
<ul>
<li>My first time to work with 3 great grandmasters in a big competition. It is my pleasure and great opportunity to learn from all of my teammates. Thanks a lot guys.</li>
<li>I, personally feel happy with this result since we are one of the only 3 teams that can keep gold. Disappointment is overwhelmed by joy of lucky.</li>
<li>We sometimes felt that we hated the kernel a little bit 😊 just because it ran for 8 hours and then failed at the end due to some minor error. However, in the end I think it’s a good way to run a Kaggle competition: people cannot use black magic too much, and a concrete code base is needed, which makes room for competitors to improve their coding skills. </li>
<li>Combining solutions from team members is not a joke, especially if merging is late, like in our case (Evgeny and Marios only joined in the last week). It needs tons of efforts from all members. But in the end, if diversity of solutions is ensured one can expect a huge leap.</li>
<li>It is good that no extensive public sharing or any scandal appeared during this competition. </li>
<li>Public LB/CV correlation is a mystery, which makes the competition more interesting.</li>
<li>Diversity is important, and is the key factor to avoid shake-up. We tried to bag training a lot, with lots of feature sets and models from each member. </li>
<li>We have no concrete sign (for example, CV and public LB) to select our best private LB, so in general we don't regret the result too much. The gold position is somehow the result of our hard work and general sense of shakeup. So we will enjoy this gold medal a lot!</li>
</ul>
<p>Thanks for reading, and hope you like this write-up. Our kernel is posted here.
<a href="https://www.kaggle.com/khahuras/bowl-2201-a?scriptVersionId=27403894">https://www.kaggle.com/khahuras/bowl-2201-a?scriptVersionId=27403894</a></p> |
Understanding Clouds from Satellite Images | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello to everyone participating in the competition, congratulations to all who won and thanks to kaggle for the excellent competition.</p>
<p>Here I will give a general solution to the problem, I will talk about techniques that helped and those ideas that did not work.</p>
<p>Most recently, I participated in kaggle segmentation contests
<a href="https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation">SIIM-ACR Pneumothorax Segmentation</a> and <a href="https://www.kaggle.com/c/severstal-steel-defect-detection">Severstal: Steel Defect Detection</a>
Therefore, I have gained decent experience in solving such problems. I already had an idea of what could work and what couldn’t.</p>
<hr>
<h3>Idea #1</h3>
<p>Looking at the data, I saw that the images have a size of 1400x2100, and it was not a good idea to feed such data to the network directly. Of course, it was possible to downscale the image by a factor of 2 or 4, but then we would obviously lose some information.
I came up with a compromise: use a small network - a compressor - that extracts significant features from the data and reduces the image size.
It looks something like this:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F505196%2F953479a06dbb81adbf16320caadf44e7%2Fcompressor.png?generation=1574249239505220&alt=media" alt=""></p>
<p>To build models, I used Keras 2, Tensorflow 1.4 and the library <a href="https://github.com/qubvel/segmentation_models">https://github.com/qubvel/segmentation_models</a> (thank you very much Pavel Yakubovskiy)</p>
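<p>A minimal sketch of the compressor idea with these tools; the layer sizes, the padding of the 1400x2100 images to 1408x2176 (so the compressed 352x544 map is divisible by 32) and the ResNet34 backbone are all illustrative assumptions, not the exact architecture:</p>
<p><code>python
from keras import layers, models
import segmentation_models as sm

inp = layers.Input(shape=(1408, 2176, 3))                                   # padded original image
x = layers.Conv2D(16, 3, strides=2, padding='same', activation='relu')(inp)
x = layers.Conv2D(3, 3, strides=2, padding='same', activation='relu')(x)    # 352x544, 3 channels
unet = sm.Unet('resnet34', input_shape=(None, None, 3),
               encoder_weights='imagenet', classes=4, activation='sigmoid')
out = unet(x)                        # masks predicted at the compressed resolution
model = models.Model(inp, out)       # trained against masks resized to 352x544
</code></p>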
<hr>
<h3>Idea #2</h3>
<p>In order to build an effective ensemble, we must use models with the least possible correlation between predictions. I decided to use such combinations of model parameters:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F505196%2Fe0d9fd528dc14968070094b3bd2dd595%2Fmodels_grid.png?generation=1574249329683540&alt=media" alt=""></p>
<p>All models had a Unet decoder.</p>
<hr>
<p><strong>training parameters:</strong>
Optimizer: Adam
Loss Function: FocalLoss
Batch Size: 4</p>
<p>Hard albumentation:
Hflip, VFlip, Equalize, CLAHE, RandomBrightnessContrast, RandomGamma, Cutout
ShiftScaleRotate, GridDistortion, GaussNoise</p>
<p>30 epochs on a two-cycle learning profile. It looks something like this:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F505196%2F2f8acf3439d405a4d13cde0fa28d6378%2FLearning%20profile.png?generation=1574249719812217&alt=media" alt=""></p>
<p>For training models, I used 2xP3.2 Amazon instance</p>
<hr>
<h3>Idea #3</h3>
<p><strong>Postprocessing</strong>.
Mean average all models -> raw probability
All object segmentation tasks with a DICE metric are very sensitive to FalsePositive errors. In some cases, training a separate classifier model to detect whether a mask is present in the image helps a lot. In my case, the classifiers did not help much and I used the Triple rule method, which I first saw in the first place solution of the <a href="https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation">SIIM-ACR Pneumothorax Segmentation</a> competition.
Thanks so much to <strong>Aimoldin Anuar</strong> <a href="https://www.kaggle.com/sneddy">https://www.kaggle.com/sneddy</a> for the idea.
The description of this approach can be understood from here <a href="https://youtu.be/Wuf0wE3Mrxg">Kaggle SIIM-ACR Pneumothorax Challenge - 1st place solution - Anuar Aimoldin</a></p>
<p>The triple rule parameters (threshold1, minsize, threshold2) were searched by global optimization methods.</p>
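<p>A minimal sketch of one reading of the (threshold1, minsize, threshold2) rule, applied per class and per image (illustrative, not the exact implementation):</p>
<p><code>python
import numpy as np

def triple_rule(prob, threshold1, min_size, threshold2):
    # prob: (H, W) averaged probability map for one class of one image
    if (prob > threshold1).sum() >= min_size:
        return (prob > threshold2).astype(np.uint8)   # keep the mask, binarized at the lower threshold
    return np.zeros(prob.shape, dtype=np.uint8)       # too few confident pixels -> empty mask
</code></p>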
<p>Basically, this is all that helped in solving the task.</p>
<p>What didn't work:
- Mask classifiers
- mmdetection / FasterRCNN
- BCE-DICE, lovasz, triple_loss
- Adversarial validation
- Pseudo labeling</p>
<p>Thanks for watching</p>
2019 Data Science Bowl | 15th palce solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We( @yasufuminakama, @currypurin, @hidehisaarai1213 ) would like to thank Booz Allen Hamilton for the very interesting competition and to all the participants for giving us a lot of ideas.</p>
<h2>Features</h2>
<h3>Nakama Feature</h3>
<ul>
<li>Nunique features of ['event_id', 'game_session', ...and so on]</li>
<li>Decayed Count features & Ratio features of ['title_event_code', 'title', ...and so on] <br>
Count features decayed by elapsed time from previous assessment and their Ratio features.
Ratio features are better than Count features. <br>
Below is an example of decay function.
<code>python
def decaying_counter(x, days):
return max(0.xx, 1-days/30) * x
</code></li>
<li>Misclicking features <br>
As mentioned <a href="https://www.kaggle.com/zgzjnbzl/visualizing-distraction-and-misclicking">here</a>, event_code 4070 are clicks on invalid places on the screen.
So by using kmeans clustering of 4070 (x, y), we expect cluster as object or something on the screen, then calculating distance from it's own cluster, the distance can represent
"Operation is rougher or unfamiliar as the distance is larger?".</li>
<li>Past assessment features <br>
Statistical features of past assessment of all time & last 7 days for overall & each assessment title. </li>
<li>What didn't work <br>
TfIdf & w2v features on the sequence of titles before the assessment. I should've tried more...</li>
</ul>
<h3>Arai-san's features</h3>
<p>Feature generation code is made public at <a href="https://github.com/koukyo1994/kaggle-dsb2019/blob/master/src/features/past_summary3_decay.py">https://github.com/koukyo1994/kaggle-dsb2019/blob/master/src/features/past_summary3_decay.py</a>.</p>
<ul>
<li>Features based on public kernels <br>
Although they worked well, they can also be a cause of overfitting, since the distribution of count-based features differs between train and test. Therefore, I also applied a decaying factor when counting them, or took their average, which had already been proved effective for avoiding overfitting in Y.Nakama's experiments.
The decaying of count features in Y.Nakama's features and mine is slightly different: Y.Nakama applied decaying at every assessment, while I applied decaying at every session.
Note that some of those features which appeared not to be very useful based on LightGBM importance, or which showed high correlation with other features, were deleted from the feature generation code.</li>
<li>Past Assessment features
<ul><li>{mean, var, last} of {success_ratio, n_failure, accuracy_group} of the same assessment type in the past</li>
<li>time to get success</li>
<li>{mean, var} of interval of actions (event_code <code>4020</code>, <code>4025</code>)</li></ul></li>
<li>Past Game features
<ul><li>{mean, var, last} of {correct, incorrect} (decayed) count of each Game</li>
<li>{mean, var, last} of success ratio of each Game</li>
<li>{mean, var} of interval of actions in each Game</li></ul></li>
<li>Past Activity features <br>
A few handcrafted features for some of the activities.</li>
</ul>
<h2>data augmentation idea</h2>
<p>As we apply a decay function to the Count features, we can augment the data by using different decay functions (a minimal sketch is shown below). The idea is that the oblivion curve differs from person to person, by age or something.</p>
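<p>A minimal sketch of that idea; <code>make_features</code> stands in for the existing feature builder parameterized by the decay function, and the half-lives are illustrative:</p>
<p><code>python
import pandas as pd

def decay_30(x, days): return max(0.0, 1 - days / 30) * x   # faster forgetting
def decay_60(x, days): return max(0.0, 1 - days / 60) * x   # slower forgetting

augmented = [make_features(train_events, decay=d) for d in (decay_30, decay_60)]
train_df = pd.concat(augmented)   # same assessments, different oblivion curves
</code></p>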
<h2>Feature selection</h2>
<p>Feature selection using LightGBM / CatBoost importance was applied before training. About 80-90% of the features were deleted at this step and the resulting number of features is around 400. Feature selection was effective especially for the NN model (probably because of the high dropout rate in the NN model) and bumped up the oof score by around 0.005 for the GBDT models and 0.01 for the NN model.</p>
<h2>Model</h2>
<p>Applying certain transformation to the output of multiclass classification gives us better result
compared to regression. The transformation is as follows.
<code>python
prediction @ np.arange(4) # the format of prediction should be (n_samples, 4)
</code></p>
<h3>Tree based models</h3>
<p>My team tried several objectives. Cross entropy and multiclass worked, and we used them for the final model.</p>
<ul>
<li>Final model used three models
<ul><li>Lightgbm: cross entropy</li>
<li>Lightgbm: multiclass</li>
<li>Catboost: multiclass</li></ul></li>
<li>cross entropy
<ul><li>Divide the target by 3 to convert it to the range 0 to 1, then learn with cross entropy (objective: xentropy). In the final model, this model's weight was the largest.</li></ul></li>
<li>multiclass
<ul><li>In multiclass, after calculating the probabilities of the target classes from 0 to 3, the following calculation is performed to make continuous values: <code>preds @ np.arange(4)</code></li></ul></li>
</ul>
<p>Tree based models that didn't work:</p>
<ul>
<li>CatBoost
<ul><li>regression, CrossEntropy</li></ul></li>
<li>Lightgbm
<ul><li>regression, multiclassova(One-vs-All)</li></ul></li>
<li>Xgboost
<ul><li>regression, reg:logistic
<h3>NN model</h3></li></ul></li>
</ul>
<p>Our NN model is simple 3 layer MLP. The implementation is <a href="https://github.com/koukyo1994/kaggle-dsb2019/blob/master/src/models/neural_network/model.py">here</a> (<code>DSBOvR</code> is the model we used).
We used training of one-vs-rest fashion, so the output of the model is a (n_batch, 4) shape tensor and each column represents the probability of each class. <code>torch.nn.BCELoss</code> was used for loss function and after getting the output tensor, following transformation is applied to get (pseudo-)regression value.
<code>python
valid_preds = valid_preds / np.repeat(
valid_preds.sum(axis=1), 4).reshape(-1, 4) # normalization
valid_preds = valid_preds @ np.arange(4) / 3 # conversion to get pseudo-regression value
</code>
this pseudo-regression value can be used for threshold optimization. Note that we normalized this value to be in the range of (0.0, 1.0) while training.
Before training, feature selection using LightGBM importance (about 80-90% of the features were deleted), preprocessing (fillna, log transformation for those feature which showed high skewness, feature scaling with <code>StandardScaler</code>) was applied. When training, Adam optimizer is used with CosineAnnealing lr scheduler and for each fold we trained the model 100 epochs. At the end of each epoch we calculate QWK using threshold optimization to pseudo-regression value and saved the weights if the best score is achieved. Final oof and prediction to test data was made with the weights which achived the best QWK score at each fold.
We've also prepared NN only kernel <a href="https://www.kaggle.com/hidehisaarai1213/dsb2019-nn-ovr-reduce-90-val-60-percentile">here</a>.</p>
<h2>validation strategy</h2>
<ul>
<li>validation selected by number of Assessments
If validation is performed using all data, the model fits strongly to the data that has many previous assessments and is thus easy to predict.
Therefore, the 95% quantile of the distribution of the number of assessments in the truncated test set is used as a threshold, and the data exceeding the threshold is removed from validation. This also raised the oof CV.</li>
</ul>
<h2>Ensemble and QWK threshold</h2>
<p>Ensembling using all oof is not appropriate for maximizing the truncated CV.
Therefore, we sampled the training data at the same ratio as when truncating.
In particular, the sampling weight is 1/(assessment count) for each installation_id.
Blending is performed based on this sampled data. We also tried stacking by Ridge regression, but we don't think there is a big difference from blending.
The threshold is also determined so that the truncated cv of this sampled data is maximized.</p>
<h2>Metric used for validation</h2>
<p>Both the public LB score and the oof score were not very helpful for judging whether a change in our submission was effective. Therefore we applied truncation to the train data to mimic the generation process of the test data. This truncation is mostly the same as what was shared in the discussions (select 1 assessment from each installation_id). Since this score is a bit unstable, we repeated the sampling & scoring process 1000 times and calculated the mean of the score (a minimal sketch is shown below).</p>
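<p>A minimal sketch of that repeated truncation, assuming <code>oof</code> holds one row per training assessment with columns <code>installation_id</code>, <code>y_true</code> and the already-thresholded <code>y_pred</code> (illustrative, not the exact code used):</p>
<p><code>python
import numpy as np
from sklearn.metrics import cohen_kappa_score

scores = []
for seed in range(1000):
    # keep one random assessment per installation_id, as in the test set
    sampled = oof.groupby('installation_id', group_keys=False).apply(
        lambda g: g.sample(1, random_state=seed))
    scores.append(cohen_kappa_score(sampled['y_true'], sampled['y_pred'],
                                    weights='quadratic'))
print(np.mean(scores))
</code></p>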
<h2>Final result</h2>
<ol>
<li>truncated score: 0.5818, public score: 0.565, private score: 0.557 (private 15th)</li>
<li>truncated score: 0.5811, public score: 0.574 (public 5th), private score: 0.556</li>
</ol> |
Understanding Clouds from Satellite Images | 5th place solution(single segmentation model private lb 0.66806) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all winners in this competition! This is my first gold medal. I feel so happy.</p>
<p>My solution is ensemble of 3 segmentation models</p>
<h2>Augmentation</h2>
<p>In this competition I found image augmentation to be very important. I tried many different augmentation sets and finally found one that works well for me. I use albumentations to do image augmentation.
<code>
aug = Compose([
ShiftScaleRotate(scale_limit=0.5, rotate_limit=0, shift_limit=0.1, p=0.6, border_mode=0),
OneOf([
ElasticTransform(p=0.5, alpha=50, sigma=120 * 0.02, alpha_affine=120 * 0.02),
GridDistortion(p=0.5),
OpticalDistortion(p=0.5, distort_limit=0.4, shift_limit=0.5)
], p=0.8),
RandomRotate90(p=0.5),
Resize(352, 544),
VerticalFlip(p=0.5),
HorizontalFlip(p=0.5),
OneOf([
IAASharpen(alpha=(0.1, 0.3), p=0.5),
CLAHE(p=0.8),
GaussNoise(var_limit=(10.0, 50.0), p=0.5),
#GaussianBlur(blur_limit=3, p=0.5),
ISONoise(color_shift=(0.01, 0.05), intensity=(0.1, 0.5), p=0.3),
], p=0.8),
RandomBrightnessContrast(p=0.8),
RandomGamma(p=0.8)])
</code></p>
<h2>Models</h2>
<p><strong>Model1:</strong>
<code>
Encoder: efficientnet-b1
Decoder: unet
Image Input Size: 416x608
TTA: hflip, vflip, multi-scale: [(352, 544), (384, 576), (448, 640), (480, 672)]
Threshold: threshold label = [0.85, 0.92, 0.85, 0.85], threshold pixel = [0.21, 0.44, 0.4, 0.3]
Score: 9-fold cv = 0.66002, public lb = 0.67070, private lb = 0.66806
</code>
<strong>Model2:</strong>
<code>
Encoder: efficientnet-b3
Decoder: fpn
Image Input Size: 352x544
TTA: hflip, vflip, multi-scale: [(320, 512), (384, 576)]
Threshold: threshold label = [0.85, 0.9, 0.9, 0.85], threshold pixel = [0.35, 0.4, 0.42, 0.42]
Score: 9-fold cv = 0.65646, public lb = 0.66426, private lb = 0.66687
</code>
<strong>Model3:</strong>
<code>
Encoder: resnet50
Decoder: unet
Image Input Size: 352x544
TTA: hflip, vflip, multi-scale: [(320, 512), (384, 576)]
Threshold: threshold label = [0.9, 0.92, 0.87, 0.82], threshold pixel = [0.35, 0.51, 0.31, 0.3]
Score: 9-fold cv = 0.65715, public lb = 0.66541, private lb = 0.65973
</code></p>
<p>All models use BCE-Dice loss and the Adam optimizer. I run a threshold search to get the label threshold and pixel threshold (a rough sketch is shown below).</p>
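<p>A rough sketch of a per-class threshold search of this kind; the grid values and <code>min_size</code> are illustrative, <code>dice</code> is a standard Dice helper, and <code>probs</code>/<code>masks</code> are the validation probabilities and ground-truth masks for one class (one reading of the procedure, not the exact code used):</p>
<p><code>python
import numpy as np

def search_thresholds(probs, masks, min_size=10000):
    # probs, masks: arrays of shape (N, H, W) for a single class
    best = (0.0, None)
    for t_label in np.arange(0.80, 0.96, 0.01):        # "mask present" confidence threshold
        for t_pixel in np.arange(0.20, 0.55, 0.01):    # per-pixel binarization threshold
            scores = []
            for p, m in zip(probs, masks):
                pred = (p > t_pixel).astype(np.uint8)
                if not (p.max() > t_label and pred.sum() > min_size):
                    pred = np.zeros_like(pred)          # drop low-confidence or tiny masks
                scores.append(dice(pred, m))            # dice(): assumed Dice coefficient helper
            if np.mean(scores) > best[0]:
                best = (np.mean(scores), (t_label, t_pixel))
    return best
</code></p>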
<h2>Ensemble</h2>
<p>I use cv and public lb score to roughly set model weights, and run threshold search to get the threshold.
<code>
Model Weight: model1, model2, model3 = [4, 1, 2]
Threshold: threshold label = [0.84, 0.9, 0.85, 0.8], threshold pixel = [0.25, 0.43, 0.35, 0.35]
Score: 9-fold cv = 0.66449, public lb = 0.67601, private lb = 0.67080
</code></p>
<p>Finally thanks to <a href="/hengck23">@hengck23</a> rKeng, I learn a lot from his code and ideas.</p> |
Peking University/Baidu - Autonomous Driving | 2nd Place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Peking University/Baidu - Autonomous Driving
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>2nd Place for Kaggle_PKU_Baidu</h1>
<p>Firstly congratulations to all top teams.</p>
<p>Secondly I would like to congratulate to all my teammates for this collaborative team work, every member of the team is indispensable in this competition.</p>
<h2>Approach</h2>
<p>The overall pipeline is largely improved upon the previous method 6D-VNet [1].
(We reckon we are among the very few teams that didn't use CenterNet as the main network.)
The system pipeline is as follows (the red color denotes the modules we added for this task):</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F16463%2Fbe74039a5b29b17e735b6f209c0b7e10%2Fsystem_pipeline_kaggle.PNG?generation=1579687638026915&alt=media" alt=""></p>
<p>The three major improvements consist of
(1) better detector and conv backbone structure design;
(2) post-processing with both mask and mesh information both geometrically and via a Neural Mesh Renderer (NMR)[4] method;
(3) novel way of ensembling of multiple models and
weighted average of multiple predictions. </p>
<p>We specify the implementation details as follows:</p>
<h3>Implementation details</h3>
<p>We build our framework based upon the open source project
<a href="https://github.com/open-mmlab/mmdetection">MMDetection</a>. This is an excellent framework that helps us modularise the code.
Pixel-level transform for image augmentation is called from the <a href="https://github.com/albumentations-team/albumentations">Albumentations</a> library.</p>
<p>The detector is a 3-stage Hybrid Task Cascade (HTC) [2] and the backbone is ImageNet pretrained High-resolution networks (HRNets) [3].
We design two specific task heads for this challenge: one head taking the RoIAlign feature for car class classification + quaternion regression, and one head taking bounding box information
(centre location, height and width) for translation regression.
With this building block, we achieved private/public LB: 0.094/0.102.</p>
<p>We then incorporated the training images from the ApolloScape dataset; after cleaning the obviously wrong annotations, this leaves us with 6691 images and ~79,000 cars for training.
The kaggle dataset has around 4000 images for training;
we leave out 400 images randomly as validation. With tito's (<a href="/its7171">@its7171</a>) code for evaluation, we obtained ~0.4 mAP.
On public LB, we have only 0.110.
Such a discrepancy between local validation and test mAP is a conundrum that perplexes us until today!</p>
<h3>Postprocessing</h3>
<p>After visual examination, we find out the detector is working well (really well) for bounding box detection and mask segmentation (well, there are 100+ top conference paper doing the research in instance segmentation anyway). But the generated mesh from rotation and translation does not overlap quite well with the mask prediction.
Thus, we treat <code>z</code> as the oracle prediction and amend the values of the <code>x</code> and <code>y</code> predictions accordingly (a minimal sketch is shown below).
This gives us a generous boost to 0.122/0.128 (from 0.105/0.110). </p>
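<p>A minimal sketch of that amendment; the camera intrinsics below are the ones commonly used for this dataset, and using the mask centroid as the car's image position is an illustrative choice, not necessarily the exact implementation:</p>
<p><code>python
import numpy as np

fx, fy = 2304.5479, 2305.8757     # camera intrinsics commonly used for this dataset
cx, cy = 1686.2379, 1354.9849

def amend_xy(mask, z):
    # mask: binary (H, W) instance mask in the original image resolution; z: predicted depth
    v, u = np.nonzero(mask)
    u_c, v_c = u.mean(), v.mean()            # mask centroid as the car's image position
    x = (u_c - cx) * z / fx                  # back-project, keeping the predicted z as oracle
    y = (v_c - cy) * z / fy
    return x, y
</code></p>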
<h3>Model ensembles</h3>
<p>Model ensemble is a necessity for kaggle top solutions: we train one model that directly regresses translation and one model regresses the <code>sigmoid</code> transformed translation.
The third model is trained with 0.5 flip of the image.</p>
<p>Because of the speciality of the task: the network can output mask and mesh simultaneously, we merge the model by non-maximum suppression using the IoU between the predicted mesh and mask as the confident score.
The <code>max</code> strategy gives 3 model ensemble to 0.133/0.142.
The <code>average weighting</code> strategy generates even better result and is the final strategy we adopted.</p>
<p>The organisers also provide the masks that will be ignored during test evaluation.
We filtered out predictions whose mesh had more than 20% overlap with an ignore mask;
this gives around 0.002 mAP improvement.</p>
<p>Below is the aforementioned progress we have achieved in a tabular form:</p>
<p>|Method | private LB | public LB|
|:------------------: | :-------:|:-------------------------:|
|HTC + HRNet + quaternion + translation | 0.094 | 0.102|
|+ ApolloScape dataset | 0.105 | 0.110|
|+ z-> x,y (postprocessing) | 0.122 | 0.128|
|+ NMR | 0.127 | 0.132|
|conf (0.1 -> 0.8) | 0.130 | 0.136|
|+ 3 models ensemble (max) | 0.133 | 0.142|
|+ filter test ignore mask | 0.136 | 0.145|
|+ 6 models ensemble(weighted average)| 0.140 | 0.151|</p>
<h2>Other bolts and nuts</h2>
<h3>Visualisation using Open3D</h3>
<p>We also use <a href="http://www.open3d.org/">Open3d</a> to visualise the predicted validation images. The interactive 3d rendering technique allows us to examine the correctly predicted cars in the valid set. </p>
<h3>Neural Mesh Renderer (NMR)</h3>
<p>Neural 3D Mesh Renderer [4] is a very cool research which generates an approximate gradient for rasterization that enables the integration of rendering into neural networks.
After releasing of the final private LB, we found out using NMR actually gives a small improvement of the overall mAP. </p>
<h3>What we haven't tried but think it has decent potential</h3>
<ul>
<li><p>Almost all the top winning solutions adopted CenterNet [5]; it's very likely that a model ensemble with CenterNet would further boost the overall performance. We realised the universal adoption of CenterNet in this challenge. We might have been too comfortable sitting
in our existing framework, and migrating to fine-tune CenterNet seemed a bit of a hassle, which in return might ultimately have cost us the top prize. </p></li>
<li><p>Allocentric vs. Egocentric [6].
Allocentric representation is equivariant w.r.t. RoI image appearance, and is
better suited for learning. As also discussed <a href="https://www.kaggle.com/c/pku-autonomous-driving/discussion/127052">here</a>, the modification of orientation is done by a rotation matrix that moves the camera center to the target car center. But we were aware from the beginning that predicting
translation is far more difficult than predicting rotation, so we didn't spend much effort in perfecting rotation regression.</p></li>
</ul>
<p><code>python
yaw = 0
pitch = -np.arctan(x / z)
roll = np.arctan(y / z)
r = Rotation.from_euler("xyz", (roll, pitch, yaw))
</code></p>
<h3>References</h3>
<ul>
<li>[1] 6D-VNet: End-To-End 6-DoF Vehicle Pose Estimation From Monocular RGB Images, Di WU et et., CVPRW2019 </li>
<li>[2] Hybrid task cascade for instance segmentation, Chen et al., CVPR2019</li>
<li>[3] Deep High-Resolution Representation Learning for Human Pose Estimation, Sun et al., CVPR2019</li>
<li>[4] Neural 3D Mesh Renderer, Hiroharu Kato et al., CVPR2018</li>
<li>[5] Objects as Points, Xingyi Zhou et al. CVPR2019</li>
<li>[6] 3D-RCNN: Instance-level 3D Object Reconstruction via Render-and-Compare, Abhijit Kundu et al., CVPR 2018</li>
</ul> |
2019 Data Science Bowl | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank Booz Allen Hamilton and Kaggle for hosting such an interesting competition! I moved up from about 800th place to 50th place in 9 days before the end of the competition, and I moved up from about 80th place to 23rd place on the final day of the competition. So this week was very thrilling for me.</p>
<p>My solution was very simple. The summary is below.
1. Feature engineering (almost aggregation features)
2. Make some diverse models and stacking
3. Threshold tuning with random search</p>
<p>I introduce my solution's key points briefly.</p>
<h1>How to make train and test dataset</h1>
<p>Many kernels used a <code>get_data</code> function that deals with the user behaviour data sequentially. But I thought this function made it difficult for me to make and manage features. So I made a new approach for building the train dataset. Let me show it. At first, to each game session, I assigned the number of assessments that the user had tried up to that game session. Below is an example.
<code>
train_gs_assess_dict = {}
for ins_id, user_sample in tqdm(train.groupby('installation_id')):
    assess_count = 0
    for gs, session in user_sample.groupby('game_session', sort=False):
        if session['type'].iloc[0] == 'Assessment':
            assess_count += 1
        train_gs_assess_dict[gs] = assess_count
train['assess_count'] = train['game_session'].map(train_gs_assess_dict)
</code>
Then I calculated the aggregation features for the subset of user activities before the assessment. The duration for creating train and test dataset became longer than kernel's. But this made implementation and management of features very easy.</p>
<h1>Model and stacking</h1>
<p>I created the following 8 models at the first level.
|model|type|target|eval metrics|corr with accuracy group|kendall's tau|
|:---:|:---:|:---:|:---:|:---:|:---:|
|LightGBM|gbdt|accuracy group|rmse|0.621|0.460|
|LightGBM|goss|accuracy group|rmse|0.568|0.433|
|LightGBM|dart|accuracy group|rmse|0.619|0.459|
|LightGBM|gbdt|accuracy|rmse|0.615|0.457|
|LightGBM|gbdt|accuracy group>2| auc|0.598|0.452|
|LightGBM|gbdt|accuracy group>1| auc|0.615|0.456|
|LightGBM|gbdt|accuracy group>0| auc|0.597|0.441|
|NN|-|accuracy group|rmse|0.600|0.444|</p>
<p>And I used Ridge Regression for stacking.
|model|type|target|eval metrics|corr coef with accuracy group|kendall's tau|
|:---:|:---:|:---:|:---:|:---:|:---:|
|Ridge Regression|-|accuracy group|-|0.628|0.467|</p>
<p>Strangely, the weight of prediction which had the best correlation coefficient with accuracy group became 0. But this stacking was so effective. It pushed me up near the gold zone.</p>
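<p>A minimal sketch of this stacking step; the array shapes and the Ridge alpha are illustrative assumptions:</p>
<p><code>python
from sklearn.linear_model import Ridge

# oof_preds: (n_samples, 8) out-of-fold predictions of the 8 first-level models
# test_preds: (n_test, 8) corresponding test predictions
stacker = Ridge(alpha=1.0)
stacker.fit(oof_preds, y_accuracy_group)
oof_blend = stacker.predict(oof_preds)     # used afterwards for the threshold search
test_blend = stacker.predict(test_preds)
</code></p>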
<h1>Threshold tuning</h1>
<p>A threshold was very important in this competition. At first, I used OptimizedRounder which many kernels used. But I found this function depended on the initial value, and output was likely to fall into a local solution from my experiments. So I used a random search for deciding thresholds. This approach pushed me up about 800th place. And I thought public and private dataset was very similar because my adversarial validation's AUC was around 0.5. So I selected thresholds that maximize mean-QWK for 100 datasets which were truncated randomly from train dataset.</p> |
Peking University/Baidu - Autonomous Driving | (part of) 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Peking University/Baidu - Autonomous Driving
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the competitors!<br>
Thanks the whole team <a href="https://www.kaggle.com/erniechiew" target="_blank">@erniechiew</a> <a href="https://www.kaggle.com/css919" target="_blank">@css919</a> for the great collaboration!</p>
<p>For the other part, please see <a href="https://www.kaggle.com/c/pku-autonomous-driving/discussion/127145" target="_blank">https://www.kaggle.com/c/pku-autonomous-driving/discussion/127145</a>.</p>
<p>code:<br>
<a href="https://github.com/4uiiurz1/kaggle-pku-autonomous-driving" target="_blank">https://github.com/4uiiurz1/kaggle-pku-autonomous-driving</a></p>
<p>My approach is based on <a href="https://github.com/xingyizhou/CenterNet" target="_blank">CenterNet</a>.</p>
<h3>Heads</h3>
<ul>
<li>heatmap[1]</li>
<li>xy offset[2]</li>
<li>z (depth)[1]</li>
<li>pose[6]: cos(yaw), sin(yaw) cos(pitch), sin(pitch), cos(roll), sin(roll)</li>
<li>wh[2]: It's not used for prediction, but PublicLB was improved by learning this as an auxiliary task.</li>
</ul>
<p>Heatmap's loss is Focal Loss, and the others are L1Loss. The weight of wh loss is 0.05. Mask regions of mask images are ignored when calculating loss.</p>
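<p>A minimal sketch of decoding the 6-channel pose head back to angles (illustrative):</p>
<p><code>python
import numpy as np

def decode_pose(out):
    # out: the 6 pose channels at a detected center:
    # [cos(yaw), sin(yaw), cos(pitch), sin(pitch), cos(roll), sin(roll)]
    yaw   = np.arctan2(out[1], out[0])
    pitch = np.arctan2(out[3], out[2])
    roll  = np.arctan2(out[5], out[4])
    return yaw, pitch, roll
</code></p>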
<h3>Network Architecture</h3>
<ul>
<li><a href="https://github.com/Cadene/pretrained-models.pytorch" target="_blank">ResNet18 (pretrained ImageNet)</a> + FPN (channels: 256->128->64)</li>
<li><a href="https://github.com/xingyizhou/CenterNet/blob/master/readme/MODEL_ZOO.md" target="_blank">DLA34 (pretrained KITTI 3DOP)</a> + FPN (channels: 256->256->256)</li>
<li>Input size: 2560 x 2048 (2560 x 1024)</li>
<li>Output size: 640 x 512 (640 x 256)</li>
</ul>
<p>Increasing the input size is very effective, mAP was improved dramatically.<br>
I tried deeper networks (ResNet34, 50) but not worked.</p>
<h3>Augmentation</h3>
<ul>
<li>HFlip (p=0.5): Flip images horizontally and <code>yaw *= -1, roll *= -1</code>.</li>
<li>RandomShift (p=0.5, limit=0.1): Shift images and positions (x, y).</li>
<li>RandomScale (p=0.5, limit=0.1): Scale images and positions (x, y, z).</li>
<li>RandomHueSaturationValue (p=0.5, hue_limit=20)</li>
<li>RandomBrightness (p=0.5, limit=0.2)</li>
<li>RandomContrast (p=0.5, limit=0.2)</li>
</ul>
<h3>Training</h3>
<ul>
<li>Optimizer: RAdam</li>
<li>LR scheduler: CosineAnnealingLR (lr=1e-3 -> 1e-5)</li>
<li>50epochs</li>
<li>5-folds cv</li>
<li>Batch size: 4</li>
</ul>
<h3>Post Processing</h3>
<ul>
<li>Remove mask regions from predictions by multiplying heatmap by masks.</li>
<li>NMS (distance threshold: 0.1): I'm not sure how effective this is…</li>
<li>Find duplicate images with imagehash and ensemble them. PublicLB was slightly improved.</li>
<li>Score threshold: 0.3 (for val mAP: 0.1)</li>
</ul>
<h3>Ensemble</h3>
<p>Ensemble each fold models and two models (ResNet18, DLA34) by averaging the raw output maps.</p>
<h3>Score Summary</h3>
<table>
<thead>
<tr>
<th>model</th>
<th>val mAP (tito's script)</th>
<th>PublicLB</th>
<th>PrivateLB</th>
</tr>
</thead>
<tbody>
<tr>
<td>ResNet18 + FPN</td>
<td>0.257224305900695</td>
<td>0.118</td>
<td>0.109</td>
</tr>
<tr>
<td>DLA34 + FPN</td>
<td>0.2681900383192367</td>
<td>0.118</td>
<td>0.112</td>
</tr>
<tr>
<td>Ensemble</td>
<td>0.27363538075234406</td>
<td>0.121</td>
<td>0.115</td>
</tr>
</tbody>
</table>
<h3>What Didn't Work</h3>
<ul>
<li>Pseudo labeling</li>
<li>TTA (hflip)</li>
<li>Weight Standardization</li>
<li>Group Normalization</li>
<li>Deformable Convolution V2</li>
<li>Quaternion + L1Loss</li>
<li>Very large input size (3360 x 2688)</li>
<li>Eigen's depth prediction method used in CenterNet paper (<code>z = 1 / sigmoid(output) − 1</code>)</li>
</ul> |
Understanding Clouds from Satellite Images | Private LB 0.66758 solution: From single Network multi folds | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sorry, I took a while to write this. I was struggling with my course exam 👀 </p>
<p>Congratulations to all the participants and winners! This competition was a little bit tricky and different, especially due to the very noisy labels in the dataset. I am going to share my solution, which outperformed all of my other networks and ensembles. </p>
<h2>Score and Network Summary</h2>
<p><strong>Private score:</strong> 0.66758
<strong>Public score:</strong> 0.66654 (<em>Yes! The network gave higher score in private!</em>)
<strong>Network backbone:</strong> InceptionResNetV2
<strong>Segmentation Model:</strong> FPN</p>
<p><strong>Train height:</strong> 384
<strong>Train weight:</strong> 512
<strong>Loss:</strong> Lovasz
<strong>Optimizer:</strong> Radam
<strong>Learning rate:</strong> 1e-4
<strong>Pretrained weight:</strong> Imagenet</p>
<h3>Augmentations:</h3>
<p>I used albumentations. </p>
<p><strong>TTA:</strong>
1. HorizontalFlip, VerticalFlip
2. ShiftScaleRotate
3. Blur
4. ToGray
4. RandomBrightnessContrast, RandomGamma, RGBShift, ElasticTransform (One of them)</p>
<p><strong>Post-processing (For Prediction) Augmentation:</strong>
- HorizontalFlip, VerticalFlip, ToGray, RandomGamma, Normalize. </p>
<h3>Post-Processing confidence density:</h3>
<p>For all classes:
<strong>Upper</strong>: 0.6
<strong>Lower</strong>: 0.4
<strong>Area</strong>: 100
<strong>Min area</strong>: 200</p>
<h3>Folds and weights</h3>
<p>I have used a total of 4 folds and 4 weights from each fold. So there was a total of 16 weights from 4 folds and the prediction was the simple average of 16 folds. </p>
<p><strong>There was no other ensemble or use of any extra classifier.</strong></p>
<h3>GPU</h3>
<p>I didn't have any good GPU locally, so I used Google Cloud in the last few days. Google gave $300 of free trial credit, which was good for around 120 hours of VM usage.</p>
<p>I built a VM there with a V100 GPU. </p>
<h3>Some thoughts</h3>
<ol>
<li>I believe this network performs well due to the InceptionResNetV2 only. Because I have output from a similar setup with other backbones which wasn't anywhere near to this score. </li>
<li>I should have trusted the validation score. This model produced the highest Kaggle dice score at validation compared to other models. </li>
</ol> |
2019 Data Science Bowl | 8th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all winning teams (whether winning medals or knowledge and fun). Many thanks to the organizers for providing interesting data and an excellent platform, and also tons of thanks to the generous kagglers for their sharing in discussions and kernels. Waking up and knowing I had my first solo gold as my first medal on kaggle is just.. too good to be true😃 </p>
<h3><strong>Model</strong></h3>
<p>My model is pretty simple: a 3-layer MLP with 256-256-256 topology, BN and 0.3 dropout rate everywhere, 3 leaky relu activations + 1 linear. That’s it.
Besides accuracy_group, I use 3 times the square root of accuracy as another target, hoping it can provide more information than just the 0/1/2/3 values and reduce overfitting, but it seems it doesn't have a very large effect on the score. </p>
<h3><strong>Validation method</strong></h3>
<p>5-fold GroupKFold. I mainly watch the inversely weighted oof QWK, but also the unweighted oof QWK. Inversely weighted is like I described in the discussion. I am not very sure whether it perfectly mimics the test data like truncation does, but it runs fast. I had some misalignment between LB and CV. I guess they might just align at the ~0.01 level. So I am pretty lucky</p>
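<p>A minimal sketch of the inversely weighted oof QWK, assuming <code>oof</code> has one row per training assessment with integer predictions (illustrative, not the exact code used):</p>
<p><code>python
from sklearn.metrics import cohen_kappa_score

counts = oof.groupby('installation_id')['y_true'].transform('size')
weights = 1.0 / counts            # each installation_id contributes equally, as in truncation
score = cohen_kappa_score(oof['y_true'], oof['y_pred'],
                          weights='quadratic', sample_weight=weights)
</code></p>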
<h3><strong>Preprocess</strong></h3>
<p>Log transform and then standard scaling on numeric features.
Impute missing values with zeros and encode the missingness of features as 0/1 indicator features. </p>
<h3><strong>Training</strong></h3>
<p>The best score was obtained by using the private data. I took a bet on it, since the private data could give twice the amount of data and NNs are data hungry. I know its distribution is different, but the distribution of the training data is also different from the truncated test anyway. </p>
<p>Submission A uses both private and training data (0.559 private lb)
Submission B uses training data (0.552 private lb)</p>
<p>Training 9 models with all data with different seeds and slightly different epochs (63, 65, 68) </p>
<p>Adam optimizer, 128 batchsize, 0.0003 LR with cyclic decay:
<code>
def lr_decay(index_):
    if index_ &lt; 15:
        return 0.0003
    elif index_ &lt; 30:
        if index_ % 2 == 0:
            return 0.00008
        else:
            return 0.0002
    elif index_ &lt; 40:
        if index_ % 2 == 0:
            return 0.00008
        else:
            return 0.00003
    else:
        return 0.00003
</code></p>
<h3><strong>Postprocess</strong></h3>
<p>Simple average of the 18 predictions of the 9 models (2 outputs, acc and acc_group, per model), then use the threshold optimizer to find thresholds. I randomly initialized the thresholds for the threshold optimizer around the training target distribution, ran the threshold optimizer 25 times, and then chose the one with the best cv qwk.</p>
<p>I did a 5 fold simulation(4 folds act as oof we have, 1 fold acts as label of test data) to compare several ways of deciding thresholds. Found that using threshold Optimizer is better than deciding thresholds by simple using training target distributiion.</p>
<h3><strong>Features</strong></h3>
<p>I generated ~1100 features, and selected 216 according to null importance using the rf mode in lgbm, introduced by <a href="/ogrellier">@ogrellier</a> in his great notebook <a href="https://www.kaggle.com/ogrellier/feature-selection-with-null-importances">here</a>. I found that using ~100 features gave a better cv score (~0.563) than 216 features (0.559), but also a lower training loss and a larger valid-training loss gap, which might indicate larger overfitting. And the 216-feature version had a better score on LB. I chose to use 216 features in both final submissions. </p>
<p>Main feature list:</p>
<ul>
<li><p>Type, title, event counting,</p></li>
<li><p>event_id counting, </p></li>
<li><p>title_acc, title_acc_lasttime</p></li>
<li><p>title duration max/mean/std (I clipped title duration at 1000; I think 16 mins is already quite long for a kid to play a session. Those duration outliers might be errors in recording. Anyway I don’t think a kid can play a session for 3 hours),
title_misses mean/std, </p></li>
<li><p>title_round_misses_mean_divided_by_round_duration(reflect acc vs speed infomation),</p></li>
<li><p>nunique_title, </p></li>
<li><p>nunique_title_in_this_world(world reflect certain facet of kid’s ability, like knowledge in length, knowledge in speed, etc)</p></li>
<li><p>session_sum, event_sum, </p></li>
<li><p>game_tried_ratio(# game with try devided by # game), event_4070_ratio(# 4070 devided by # events)</p></li>
<li><p>title_distraction_mean.( basically is like what I did in my previous <a href="https://www.kaggle.com/zgzjnbzl/visualizing-distraction-and-misclicking">notebook</a>. I count the all kids’s 4070 events and their coordinates in title_heatmaps. The inverse of counting of 4070 events on heatmaps on a certain position is the distraction-score of this 4070 event. I assume that misclicks happens in small regions around target object, and distraction could happen everywhere)</p></li>
<li><p>Binning assessment_title counting and accuracy max ino 0 and 1(ever played vs never played, ever passed VS never passed).</p></li>
<li><p>If_skip(binray features indicated if skip into this assessment from title in not designed order )</p></li>
<li><p>If_repeat(binray features indicated if last session was also this assessment)</p></li>
</ul>
<h3>Some other thoughts</h3>
<p>I fixed the memory issues of preprocessing the private data at the very last moment, submitted it 8 hours before the competition deadline, and it was running in submission for 6 hours. I made my solution literally 2 hours before the deadline. Yes, people are always saying don't give up too early, and now I believe it😅</p>
<h3>code:</h3>
<p>resubmitted kernel here:
<a href="https://www.kaggle.com/zgzjnbzl/dsb-mlp-216-feature">https://www.kaggle.com/zgzjnbzl/dsb-mlp-216-feature</a></p> |
2019 Data Science Bowl | 44 place writeup(Catboost ranking with eventdata) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, thanks Kaggle and Booz Allen Hamilton for such a great competition.
It was an interesting problem with lots of challenges and I learned a lot.</p>
<p>Here is my solution and observations:</p>
<p><strong>Loss function and framework</strong>
I use ranking loss(<code>PairLogitPairwise:max_pairs=1000000</code>) with CatBoost, depth 6 or 7, training on GPU.
The other hyperparameters are default.
I train the model 5-fold and then blend all 5 models with CatBoost <code>sum_model</code> to produce the average prediction.
So I obtain a quasi single model solution.
I try to predict <code>accuracy_group</code>. My attempts to predict <code>accuracy</code>, or <code>num_correct</code> and <code>num_incorrect</code>, as targets didn't work well.</p>
<p><strong>Validation</strong>
5-fold truncated cross validation where I perform truncation 5 times for each fold and average the score.</p>
<p><strong>Threshold selection</strong>
After the model is blended I predict the whole training set and optimize the threshold to maximize the Kappa.
I do it 5 times and then take the median value for each sample.</p>
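<p>A minimal sketch of this kind of threshold search; <code>oof_preds</code>, <code>y_train</code> and <code>test_preds</code> are assumed, and the starting cut points assume predictions roughly on the 0-3 scale (for raw ranking scores one would start from quantiles instead).</p>
<pre><code>import numpy as np
import scipy.optimize as opt
from sklearn.metrics import cohen_kappa_score

def to_groups(preds, thresholds):
    return np.digitize(preds, sorted(thresholds))

def neg_qwk(thresholds, preds, y_true):
    return -cohen_kappa_score(y_true, to_groups(preds, thresholds),
                              weights='quadratic')

res = opt.minimize(neg_qwk, x0=[0.5, 1.5, 2.5], args=(oof_preds, y_train),
                   method='Nelder-Mead')
best_thresholds = sorted(res.x)
test_groups = to_groups(test_preds, best_thresholds)
</code></pre>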
<p><strong>Features generation</strong>
I've generated about 1000 features. Among them are:
1. Overall accumulated counters of event_codes and event_ids.
2. Various accumulated accuracy statistics.
3. Timestamp month and hour
4. Linear extrapolation of accuracy.
5. Features extracted from event data:
a. Overall sum and mean value for each key that has numeric value except coordinates.
b. The same statistics grouped by event title.</p>
<p>Here are a few of the top features by SHAP importance to illustrate the idea:
- <code>lastAssessmentTitle</code>
- <code>misses_mean</code>
- <code>Bird Measurer (Assessment)_stage_number_mean</code>
- <code>accuracy_mean</code>
- <code>4070_count</code>
- <code>Sandcastle Builder (Activity)_total_duration_mean</code>
- <code>IsAssessmentAttemptSuccessfull_Chest Sorter (Assessment)</code>
- <code>Clip_count</code>
- <code>6bf9e3e1_count</code></p>
<p><strong>Feature selection</strong>
Features of group 5b (like <code>Bird Measurer (Assessment)_stage_number_mean</code>) lead to a heavy overfitting for training set. To mitigate that two approaches work:
1. Select top 150-200 features by shap.
2. Drop features using truncated adversarial validation until ROC AUC becomes ~0.5. That leaves 863 features.</p>
<p><strong>Submission selection</strong>
I've submitted the most stable blend of 3 models that vary by the selected features and produce 0.555-0.56 on the public LB. That produced 0.552 private and 44th place.
I have a few single-model and blend submissions at 0.553 and 0.554, so my final submission was quite close to optimal and I got a fair Silver.</p>
Understanding Clouds from Satellite Images | 19th place solution with code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats for all the prize winner and who finished in gold zone!</p>
<p>I joined this competition relatively late, after the Severstal competition finished. (I believe the same as many people, don't you?)
My purpose was to make sure the segmentation pipeline I built in Severstal works for other competitions. It turned out that it actually does; I just modified the directories and some small parameters. That means my solution is not so special, honestly.
<br></p>
<h3>Overview</h3>
<ul>
<li>Extremely noisy annotation</li>
<li>Not so imbalanced classes (compared with Severstal)</li>
<li>Relatively small data(number of samples)</li>
<li><p>Good train/test split(cv works)
<br></p>
<h3>What works</h3></li>
<li><p>Unet & FPN</p></li>
<li>not so large encoder</li>
<li>BCE + Dice loss</li>
<li>heavy augmentation(including mixup)</li>
<li>cosine anealing</li>
<li>ensemble many models</li>
<li><p>Triplet thresholding (label threshold/mask threshold/min component)
<br></p>
<h3>What didn't work</h3></li>
<li><p>PSPNet</p></li>
<li>large image size(over 448*672)</li>
<li>plain BCE</li>
<li>pseudo labeling</li>
</ul>
<h3>Solution</h3>
<ol>
<li>Unet/efficientnet-b3/image size 320x480/5fold</li>
<li>Unet/efficientnet-b0/image size 320x480/cosineanealing/5fold</li>
<li>Unet/efficientnet-b3/image size 384x576/cosineanealing/5fold</li>
<li>FPN/resnet34/image size 384x576/mixup/5fold</li>
<li>Ensemble above 20 models</li>
<li>Triplet thresholding (label threshold/mask threshold/min component); a sketch follows below</li>
</ol>
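<p>A minimal sketch of the triplet thresholding in step 6, assuming per-class thresholds tuned on CV (names and values are illustrative).</p>
<pre><code>import numpy as np
import cv2

def post_process(prob_mask, label_prob, label_thr, mask_thr, min_size):
    # 1) label threshold: drop the whole mask if the class is unlikely
    if label_prob &lt; label_thr:
        return np.zeros(prob_mask.shape, dtype=np.uint8)
    # 2) mask (pixel) threshold
    mask = (prob_mask &gt; mask_thr).astype(np.uint8)
    # 3) minimum component size: drop small connected components
    num, comp = cv2.connectedComponents(mask)
    out = np.zeros_like(mask)
    for c in range(1, num):
        component = comp == c
        if component.sum() &gt;= min_size:
            out[component] = 1
    return out
</code></pre>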
<hr>
<p>Here is my code. <br>
If you have question, please feel free to ask:)
Thanks!</p>
<p><a href="https://github.com/bamps53/kaggle-cloud-2019">https://github.com/bamps53/kaggle-cloud-2019</a></p> |
RSNA Intracranial Hemorrhage Detection | 1st Place Solution. Sequential model wins | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The key module of our pipeline is a sequence model. It works well and there is no shakeup.
Code : <a href="https://github.com/SeuTao/RSNA2019_1st_place_solution">https://github.com/SeuTao/RSNA2019_1st_place_solution</a></p>
<h1>Overview</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1025985%2F550d2dfb85771f93e0f2c8f1dbc2f62c%2Fsequence%20model%20with%20big%20bar%20align.png?generation=1574528142297990&alt=media" alt=""></p>
<h1>2D CNN Modeling</h1>
<p><strong>Data pre-processing & augmentation</strong>
Our team has three 2D classifier pipelines. The three pipelines use different input settings (3 channels):
<code>
1. Single slice with 3 windows.
2. Spatially adjacent 3 slices with one window.
3. Combination of 1 and 2: Spatially adjacent 3 slices with three windows.
</code>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1025985%2Fd22fed8b4fba54c637627bacb41666ad%2F2019-11-24%203.51.01.png?generation=1574582336956604&alt=media" alt=""></p>
<p>The windows we use are:
<code>
Brain Window[40, 80],
Subdural Window[80, 200],
Bone Window[600, 2800]
</code>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1025985%2F93bbbae12be594c19554a35956fe85be%2F1.png?generation=1574581640882170&alt=media" alt=""></p>
<p>Augmentations:
- Random ShiftScaleRotate
- Random resize crop
- Random HFlip</p>
<p>Training strategy
- Randomly sample images from different SeriesInstanceUID
- Each epoch was trained on 4 times SeriesInstanceUIDs
- Adam optimiser with cycle learning rate (5e-4~1e-5)</p>
<h1>Sequence Model Development</h1>
<p><strong>Sequence model 1: MLP + LSTM</strong>
Input:
- Slice embeddings from multi models (num_models*feature dim)
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1025985%2Fcdc6c8d3af12a81695dd50da3d930cb8%2Fsequence%20model%201.png?generation=1574645166546352&alt=media" alt=""></p>
<p><strong>Sequence model 2: 1d CNN + LSTM</strong>
Input:
- Logits from multi 2D CNN models (num_models*6 class output)
- Logits from sequence model 1 (6 class output)
- Meta info (Position)</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1025985%2F4b20e615b4331031628e2b27f5a9ddf2%2F2.png?generation=1574582851737685&alt=media" alt=""></p> |
Understanding Clouds from Satellite Images | Krazy Klassifiers - 48th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>If you could predict an empty mask for every empty mask and predict a full mask (i.e. predict every pixel, with <code>rle = '1 183750'</code>) for every non-empty mask, then your CV would be 0.686!! Therefore a perfect classifier can win without any segmentation. The following code outputs 0.686:</p>
<pre><code>train = pd.read_csv('../input/understanding_cloud_organization/train.csv')
train['pred'] = np.where(~train.EncodedPixels.isna(),'1 183750','')
train['dice'] = train.apply(lambda x: kaggle_dice(x['EncodedPixels'],x['pred']),axis=1)
print( train.dice.mean() )
</code></pre>
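<p>The helper <code>kaggle_dice</code> is not shown above; one version consistent with this snippet might look like the following, assuming masks are decoded at the 350x525 submission resolution (183750 = 350*525 pixels) and that two empty masks score 1.</p>
<pre><code>import numpy as np

def rle_to_mask(rle, shape=(350, 525)):
    mask = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    if isinstance(rle, str) and rle != '':
        s = np.array(rle.split(), dtype=int)
        starts, lengths = s[0::2] - 1, s[1::2]
        for st, ln in zip(starts, lengths):
            mask[st:st + ln] = 1
    return mask.reshape(shape, order='F')

def kaggle_dice(true_rle, pred_rle):
    t, p = rle_to_mask(true_rle), rle_to_mask(pred_rle)
    if t.sum() == 0 and p.sum() == 0:
        return 1.0   # competition convention for two empty masks
    return 2 * (t * p).sum() / (t.sum() + p.sum())
</code></pre>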
<h1>Classification Models</h1>
<p>I focused most of my energy on building classification models and finally achieved 78% classification validation accuracy (on 33% holdout set, i.e. 3-Fold CV) by ensembling two crazy classifiers. The first has 4 backbones that extract features from 4 different resized input images (half size, quarter size, one sixth size, and one eighth size)</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2Fdef7ae2825082c95e50f22674c562124%2Fcls1.jpg?generation=1574169946410492&alt=media" alt=""></p>
<pre><code>base_model0 = Xception(weights='imagenet',include_top=False,input_shape=(None,None,3))
base_model1 = Xception(weights='imagenet',include_top=False,input_shape=(None,None,3))
base_model2 = Xception(weights='imagenet',include_top=False,input_shape=(None,None,3))
base_model3 = Xception(weights='imagenet',include_top=False,input_shape=(None,None,3))
x0 = base_model0.output
x0 = layers.GlobalAveragePooling2D()(x0)
x1 = base_model1.output
x1 = layers.GlobalAveragePooling2D()(x1)
x2 = base_model2.output
x2 = layers.GlobalAveragePooling2D()(x2)
x3 = base_model3.output
x3 = layers.GlobalAveragePooling2D()(x3)
x = layers.concatenate([x0,x1,x2,x3])
x = layers.Dense(4,activation='sigmoid')(x)
model = Model(inputs=(base_model0.input, base_model1.input, base_model2.input,
base_model3.input), outputs=x)
</code></pre>
<p>My second model uses masks in addition to labels and achieves 77% accuracy by itself. The label loss is backpropagated through the mask prediction. Then instead of using the outputted labels, we predict 1 or 0 for label based on whether mask is present or not.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F2445cef6b03266046473736d3d4a8914%2Fcls2.jpg?generation=1574170027626081&alt=media" alt=""></p>
<pre><code>model0 = Unet('resnet34', input_shape=(None,None,3), classes=4,
activation='sigmoid', encoder_freeze=True)
model0.layers[-1].name = 'out1'
x = model0.output
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(4, activation='sigmoid', name='out2')(x)
model = Model(inputs = model0.input, outputs = (model0.output,x))
model.compile(optimizer=opt, loss={'out1':loss1,'out2':loss2},
              metrics={'out1':metric1, 'out2':metric2})
</code></pre>
<h1>Segmentation Model</h1>
<p>My segmentation model is a collage of ideas from public kernels. Without post process, it achieves Public LB 0.650. Test time augmentation (TTAx6) increases this to LB 0.655. Using 3-Folds increases this to LB 0.660. Ensembling 7 copies with different choices for 3-Fold achieves LB 0.665. And finally removing false positives with my classifier increases this to LB 0.670. My final solution has CV 0.663 and Private LB 0.663. Here are specific details:</p>
<ul>
<li>Unet Architecture</li>
<li>EfficientnetB2 backbone</li>
<li>Train on 352x544 random crops from 384x576 size images</li>
<li>Train augmentation of flips and rotate</li>
<li>Adam Accumulate optimizer</li>
<li>Jaccard loss</li>
<li>Kaggle Dice metric, Kaggle accuracy metric</li>
<li>Reduce LR on plateau and early stopping</li>
<li>Remove masks less than 20000 pixels</li>
<li>TTA of flips and shifts</li>
<li>3-Fold CV and prediction</li>
<li>Remove false positive masks with classifier</li>
</ul>
<h1>Kaggle Notebook</h1>
<p>I posted a Kaggle notebook showing my segmentation model <a href="https://www.kaggle.com/cdeotte/cloud-solution-lb-0-670">here</a>. It scores LB 0.665 by itself and LB 0.670 if you ensemble it with 7 copies of itself with different initialization seeds. It loads classification predictions from my offline classifier models for false positive removal.</p>
<p>Thank you everyone for a fun and exciting competition. I learned a lot from reading everyone's discussions and posted code. Thank you Kaggle and Max-Planck-Institite for sharing cloud data and hosting. Congratulations to all the winners.</p> |
ASHRAE - Great Energy Predictor III | 5th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ASHRAE - Great Energy Predictor III
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, we thank the kaggle and ASHRAE teams for holding the competition!</p>
<p>I'm sorry my post was delayed due to my laziness.</p>
<p>Our solution is not sophisticated and I think there is much room for improvement.
(I want to apologize in advance, I don't use English in everyday life,
so I'm not good at English)</p>
<p>
Below is an overview of our solution.
</p>
<h2>Pre-processing</h2>
<p>We dropped rows such as
* Long streaks of constant values
* Zero target values (only electricity) </p>
<p>By removing these data, the score was greatly improved.</p>
<h2>Feature Engineering</h2>
<p>We tried two kinds of target encoding:</p>
<h3>1. percentile for each building_id, meter</h3>
<p>As shown in the figure, the 5th and 95th percentile of the target value was calculated for each building_id and meter, and we used these features.
</p>
<p>In our case, these features improved the score.</p>
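<p>A minimal sketch of these percentile features, assuming the training frame has <code>building_id</code>, <code>meter</code> and a log1p-transformed target column (not our exact code).</p>
<pre><code>g = train.groupby(['building_id', 'meter'])['meter_reading_log1p']
pct = g.quantile([0.05, 0.95]).unstack()   # 5th / 95th percentile per building and meter
pct.columns = ['target_p05', 'target_p95']
pct = pct.reset_index()

train = train.merge(pct, on=['building_id', 'meter'], how='left')
test = test.merge(pct, on=['building_id', 'meter'], how='left')
</code></pre>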
<h3>2. proportion</h3>
<p>For each building_id, we apply the following process:
* Calculate median of target value per day of week.
* Calculate its proportion (see figure).</p>
<p>This is an example of day of week. We also apply this technique to hour, day, and so on.</p>
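<p>A minimal sketch of the day-of-week proportion feature, assuming a <code>dayofweek</code> column derived from the timestamp; this is one plausible reading of the figure rather than our exact code.</p>
<pre><code># median of the target per (building, meter, day of week), then its share across the week
dow = (train.groupby(['building_id', 'meter', 'dayofweek'])['meter_reading_log1p']
            .median().rename('dow_median').reset_index())
dow['dow_prop'] = dow['dow_median'] / \
    dow.groupby(['building_id', 'meter'])['dow_median'].transform('sum')

train = train.merge(dow[['building_id', 'meter', 'dayofweek', 'dow_prop']],
                    on=['building_id', 'meter', 'dayofweek'], how='left')
</code></pre>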
<h2>Modeling</h2>
<ul>
<li>Using only LightGBM (train one model for each meter)</li>
<li>We apply two-step modeling</li>
</ul>
<h3>Step1: Determine num_boost_round for each building_id</h3>
<ul>
<li>Define training data(2016/01/15 ~ 2016/05/31) and validation data(2016/09/01 ~ 2016/12/31)</li>
<li>Training with LightGBM and find early stopping round for each building_id(n1 ~ n1448).</li>
</ul>
<h3>Step2: Train with all train (year 2016) data and predict test data</h3>
<p>We train on all of the 2016 train data and predict the test data.
The number of trees used for prediction was changed for each building_id (using n1~n1448 obtained in step 1).</p>
<p>This approach improved the public score, but the private score did not improve much.</p>
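<p>A minimal sketch of how the step 2 prediction can use a different number of trees per building; <code>best_rounds</code> and the feature list are assumptions for illustration.</p>
<pre><code>import numpy as np

# best_rounds: {building_id: early-stopped num_boost_round from step 1}
preds = np.zeros(len(test))
for b_id, n_rounds in best_rounds.items():
    rows = (test['building_id'] == b_id).values
    preds[rows] = model.predict(test.loc[rows, features], num_iteration=n_rounds)
</code></pre>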
<h2>Ensemble</h2>
<ul>
<li>Used leaked data(site 0,1,2,4,15).</li>
<li>Weighted average for each meter and year(2017,2018).</li>
</ul>
<p>We also used other competitor's submission files.
sub1: <a href="https://www.kaggle.com/purist1024/ashrae-simple-data-cleanup-lb-1-08-no-leaks">https://www.kaggle.com/purist1024/ashrae-simple-data-cleanup-lb-1-08-no-leaks</a>
sub2: <a href="https://www.kaggle.com/rohanrao/ashrae-half-and-half">https://www.kaggle.com/rohanrao/ashrae-half-and-half</a></p>
<h2>Submission</h2>
<p>After the ensemble, we achieved 1.047 on the public LB / 1.236 on the private LB
(1.058 on the public LB / 1.272 on the private LB with our single model).</p>
<p><br>
If you have any questions, feel free to ask.
Thank you for reading.</p> |
RSNA Intracranial Hemorrhage Detection | 4th Place Solution with code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Code: <a href="https://github.com/XUXUSSS/kaggle_rsna2019_4th_solution">https://github.com/XUXUSSS/kaggle_rsna2019_4th_solution</a>
Our code is based on Appian's repo: <a href="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage">https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage</a></p>
<h1>Overview of the proposed method</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2215248%2F29e2a5b18ea11e273e34e3fd1ffff119%2Foverview.png?generation=1574318153545078&alt=media" alt=""></p>
<p>Our solution includes two stages. We train 2D CNN models in stage 1 for feature extraction, and 1D + 3D CNN models in stage 2 for classification.</p>
<h2>Preprocess</h2>
<ol>
<li>Two window policies:
a) use Appian’s windowing policy
i. Three windows are: [40, 80], [80, 200], [40, 380][<a href="https://github.com/XUXUSSS/kaggle_rsna2019_4th_solution/blob/1e9b6a5bb46d1d329f4af04e9066a3a0b7fa7769/IFE_1/src/cnn/dataset/custom_dataset.py#L68">link</a>]
b) Stack three consecutive slices to a 3-channel image. [<a href="https://github.com/XUXUSSS/kaggle_rsna2019_4th_solution/blob/1e9b6a5bb46d1d329f4af04e9066a3a0b7fa7769/IFE_3/src/cnn/dataset/custom_dataset.py#L97">link</a>]
i. Window: [40, 80] </li>
<li>Remove corrupted images </li>
<li>Filter out blank images by
a) Obtain the difference between maximum and minimum intensity value of each image, i.e., the intensity range, after applying a custom windowing scheme (center = 40, window = 80)
b) Remove images with intensity range < 60 from both training and test sets (see the sketch after this list).
c) The removed test images will be classified as negative during post-processing.</li>
<li>Extract useful meta data from dicom files
a) Patient ID
b) StudyInstance ID
c) SeriesInstance ID
d) Position2</li>
<li>Make patient-wise stratified five folds
a) Images from one patient always belong to the same fold
b) Class distributions are roughly the same across different folds</li>
</ol>
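<p>A minimal sketch of the blank-image filter from item 3 above; the exact loading and windowing details are assumptions.</p>
<pre><code>import numpy as np
import pydicom

def is_blank(dcm_path, center=40, width=80, min_range=60):
    d = pydicom.dcmread(dcm_path)
    hu = d.pixel_array * float(d.RescaleSlope) + float(d.RescaleIntercept)
    low, high = center - width / 2, center + width / 2
    windowed = np.clip(hu, low, high)
    return (windowed.max() - windowed.min()) &lt; min_range
</code></pre>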
<h2>STAGE 1: 2D Image Feature Extraction</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2215248%2F69ccd1b230d567e1c473413541545b27%2FfeatureExtraction.png?generation=1574240192793088&alt=media" alt=""></p>
<h3>1. Training strategy:</h3>
<p>a) Randomly split the training dataset into 5 folds and train the model five times. Use 4 folds as training set and 1 fold as validation set each time.</p>
<h3>2. Models</h3>
a) EfficientNet B0
<p>i. ImageNet pretrained
ii. Input image size: 512x512
iii. Augmentation: random crop, random hflip, random rotate, random contrast
iv. 5-fold training
v. TTA5: random crop, random hflip , random rotate, random contrast</p>
b) ResNext50 32x4d swsl
<p>i. Semi-Supervised and Semi-Weakly Supervised ImageNet Models <a href="https://github.com/facebookresearch/semi-supervised-ImageNet1K-models">https://github.com/facebookresearch/semi-supervised-ImageNet1K-models</a>
ii. Input image size: 448x448
iii. Augmentation: random crop, random hflip , random rotate, random contrast, pixel and window jittering
iv. 5-fold training
v. Cosine learning rate scheduler
vi. TTA5: random crop, random hflip , random rotate, random contrast</p>
<h3>Summary of stage 1 models:</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2215248%2F9e735ab0c676de4ca8c1eb94df51404e%2FIFLmodels.png?generation=1574238657232358&alt=media" alt=""></p>
<h2>STAGE1: Meta Data Feature Engineering</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2215248%2F22895a3229aa1374353d025496988e4e%2Fmetadata.png?generation=1574245207505107&alt=media" alt=""></p>
<h2>STAGE2: Slice Sequence Model</h2>
<p>In stage2, we train 1D CNN model and 1D+3D CNN models for classification.</p>
<h3>1D CNN model:</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2215248%2Ff3f240c485b348322e17a5081252863c%2F1Dconv.png?generation=1574239313394661&alt=media" alt=""></p>
<h3>1. Pipeline:</h3>
<p>a) Extract 1D feature and metadata from stage 1
b) Stack the features that belong to one CT series together.
c) Pass the stacked feature to customized fully convolutional neuronal networks, and generate the output. [<a href="https://github.com/XUXUSSS/kaggle_rsna2019_4th_solution/blob/ed1c6f59b3077e3c8226671a5d9c38c2028aab5d/cls_2/src/cnn/models/model.py#L22">link</a>]</p>
<h3>2. Augmentation:</h3>
<p>a) No data augmentation</p>
<h3>3. Training strategy</h3>
<p>a) Follow 2D CNN’s fold split</p>
<h3>1D+3D CNN model</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2215248%2Fb5e6427293552f7563b11f71610203ea%2F1D3D.png?generation=1574239073870529&alt=media" alt=""></p>
<h3>1. Pipeline:</h3>
<p>a) Extract metatdata, 1D and 3D features from stage 1.
b) Stack the features that belong to one CT series together.
c) Pass the stacked feature to customized fully convolutional neuronal networks, and generate the final output. [<a href="https://github.com/XUXUSSS/kaggle_rsna2019_4th_solution/blob/ed1c6f59b3077e3c8226671a5d9c38c2028aab5d/cls_1/src/cnn/models/model.py#L141">link</a>]</p>
<h3>2. Augmentation:</h3>
<p>a) No data augmentation</p>
<h3>3. Training strategy</h3>
<p>a) Follow 2D CNN’s fold split</p>
<h3>Summary of Stage 2 models:</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2215248%2Fa93f38cf0b81ef5ec96ad0239c37b410%2Fclsmodel.png?generation=1574246559021560&alt=media" alt=""></p>
<h2>Ensemble Predictions</h2>
<p>4 x 5 x 5 = 100 Predictions from
1. 4 Models
a) Cls_1a trained on Fold_Set_a,
b) Cls_1b trained on Fold_Set_b,
c) Cls_2 trained on Fold_Set_a,
d) Cls_3 trained on Fold_Set_c
2. 5 Folds per Fold Set
3. 5 TTA</p>
<h2>Post-processing</h2>
<ol>
<li>Assign the minimum value over all predictions to the blank test images</li>
<li>Clip the predicted value to the range of [1e-6, 1-1e-6]</li>
<li>Convert the predictions to the required submission format</li>
</ol>
<h2>Score Growth Chart</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2215248%2Fba114abed56147359750a1da8a20c75a%2Fscorechart.png?generation=1574245503408955&alt=media" alt=""></p>
<p>Acknowledgement: Our code is based on Appian’s repo. <a href="/appian">@appian</a> Thank you very much for your great and beautiful work!</p> |
Peking University/Baidu - Autonomous Driving | 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Peking University/Baidu - Autonomous Driving
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats everyone on the excellent results!
Here I summarize and write down my part of our solution and our post-processing.
For the other parts:
<a href="https://www.kaggle.com/bamps53">camaro</a> part: <a href="https://www.kaggle.com/c/pku-autonomous-driving/discussion/127056">(part of) 7th place solution with code</a>
<a href="https://www.kaggle.com/hesene">Jhui He</a> and <a href="https://www.kaggle.com/lanjunyelan">yelan</a> part: <a href="https://www.kaggle.com/c/pku-autonomous-driving/discussion/127034#726362">https://www.kaggle.com/c/pku-autonomous-driving/discussion/127034#726362</a></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1620223%2F6062aa1f543d25daab45fa475adb223d%2Fdrive_pipeline%20(6" alt="">.jpg?generation=1579652111438826&alt=media)</p>
<h2>Model1: detection and pose estimation</h2>
<h3>common setting</h3>
<p><strong>detection model</strong>
- Mask RCNN(mask head removed)
- backbone: resnext101-32x4d
- lvis pretrained</p>
<p><strong>pose estimation model</strong>
- HRNet-w18c, efficientnetb0/b3
- imagenet pretrained</p>
<p><strong>Loss</strong>
- classification: BCE
- detectoin: Focal Loss
- pose regression: L1 Loss</p>
<p><strong>detection: Optimizer and scheduler</strong>
- optimizer: SGD(lr=0.01, momentum=0.9, weight_decay=1e-4, nesterov=True)
- scheduler: CosineAnnealingWarmRestarts</p>
<p><strong>pose estimation: Optimizer and scheduler</strong>
- optimizer: Adam(lr=0.0001)
- scheduler: None</p>
<p><strong>Augmentation</strong>
- detection: horizontal flip
- pose estimation: horizontal flip, shift, rotate, random brightness/contrast</p>
<h3>1. pretrain on the boxy-vehicle-dataset</h3>
<p>At first, I train my model on the <a href="https://boxy-dataset.com/boxy/">boxy-vehicle-dataset</a>.
This dataset includes axis-aligned bounding boxes and 3D cuboids, but I use only the 2D bboxes.<br>
<strong>training setting</strong>:
- image resolution: 1232x1028
- epochs: 10
- batch_size: 4</p>
<h3>2. finetune on competition dataset</h3>
<p><strong>Model</strong>
- add depth head on top of model</p>
<p><strong>preprocess</strong>
- split train vs val = 9 vs 1
- create 3d bbox using label, then create axis aligned bbox
- depth -> 1 / sigmoid(depth) - 1</p>
<p><strong>training setting</strong>
- image resolution: 800 x 2800, 1400x3300
- depth loss: L1 Loss
- epochs: 50
- batch_size: 4</p>
<h3>3. pose estimation(yaw_sin, yaw_cos, pitch)</h3>
<p><strong>preprocess</strong>
- crop image by bbox and resize</p>
<p><strong>training setting</strong>
- image resolution: 320x480
- epochs: 30
- batch_size: 128</p>
<p>public LB/private LB
- 800x2800, single fold: 0.119/0.106
- 1400x3300, single fold: 0.106/0.111</p>
<h1>Model2: centernet</h1>
<p><strong>model</strong>
- <a href="https://github.com/xingyizhou/CenterNet">pytorch dla centernet</a>.
- regression of yaw_sin, yaw_cos, pitch, depth, 2d bbox size(w, h), 3d bbox size(w, h, l)
- classification of object centerness(heat map)</p>
<p><strong>Loss</strong>
- regression: L1 Loss
- classification: Focal Loss</p>
<p><strong>optimizer and scheduler</strong>
- optimizer: Adam(lr=5e-4)
- scheduler: CosineAnnealingLR(lr=5e-5)</p>
<p><strong>Augmentation</strong>
- pose estimation: horizontal flip, shift, random brightness/contrast</p>
<p><strong>preprocess</strong>
- depth -> 1 / sigmoid(depth) - 1</p>
<p><strong>training setting</strong>
- split train vs val = 8 vs 2
- epochs: 30
- batch_size: 12</p>
<p>public LB/ private LB
- single fold: 0.100/0.096</p>
<h1>Ensemble: Linear assignment</h1>
<p>We used different models (Faster R-CNN and CenterNet), so it is difficult to ensemble the predictions directly.
So we decided to ensemble the nearest points between predictions.
We use the Hungarian algorithm for the linear assignment.
Please refer to the code below.</p>
<pre><code>from scipy.optimize import linear_sum_assignment
import numpy as np
import pandas as pd
from tqdm import tqdm

distance_th = 30
yaw_th = 10

sub1 = pd.read_csv('sub1.csv')
y1 = sub1['PredictionString'].str.split(' ').values
X1 = sub1['ImageId'].values

sub2 = pd.read_csv('sub2.csv')
y2 = sub2['PredictionString'].str.split(' ').values
X2 = sub2['ImageId'].values

for idx in tqdm(range(len(sub1)), position=0):
    if str(np.nan) != str(y1[idx]) and str(np.nan) != str(y2[idx]):
        label1 = np.array(y1[idx]).reshape(-1, 7).astype(float)
        label2 = np.array(y2[idx]).reshape(-1, 7).astype(float)
        # get_imgcoords (defined elsewhere) projects the 7-value predictions
        # to image-space centers
        center_points1 = get_imgcoords(label1)  # [N, 3], (img_x, img_y, img_z)
        center_points2 = get_imgcoords(label2)

        # pairwise distances between the two sets of centers
        cost_matrix = np.zeros([len(center_points1), len(center_points2)])
        for idx1, i in enumerate(center_points1):
            for idx2, j in enumerate(center_points2):
                cost_matrix[idx1, idx2] = np.linalg.norm(i - j)
        match1, match2 = linear_sum_assignment(cost_matrix)
        for i, j in zip(match1, match2):
            if cost_matrix[i, j] &lt; distance_th:
                if np.abs(label1[i][1] - label2[j][1]) &lt; yaw_th:
                    # average the two matched predictions
                    label1[i] = (label1[i] + label2[j]) / 2
        # write the merged predictions back for this image
        y1[idx] = ' '.join(label1.flatten().astype(str))

# use the updated sub1 (y1) for submission
</code></pre>
2019 Data Science Bowl | 7th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Approach</h1>
<ul>
<li>The best part of the competition for me was feature engineering. In the end I used 51 features, truncated down from 150.</li>
<li>By far strongest features were base distributions of each Assessment. However, they were used by everyone. Individual features, given small data, were not so important, but they decided winners I think.</li>
<li>Final model was an ensemble: 0.3 LGB, 0.3 CATB, 0.4 NN</li>
<li>20 fold-bagging for all models, for NN additionally averaging across 3 seeds </li>
<li>One “trick” – use assessments from test set “blindly” (as we did not see this data) as samples in training. Data was scarce – so I was looking for every way to increase the number of training samples. Especially that you add data exactly for the children that are in the private LB. </li>
</ul>
<h1>Results</h1>
<ul>
<li>Truncated CV: 0.575</li>
<li>Private LB: 0.559</li>
<li>Public LB: 0.559</li>
</ul>
<h1>Final Remarks</h1>
<ul>
<li>Congrats to winners - looking forward to your solutions</li>
<li>Thanks to the organizers for the competition with event data – love those 😊 It unleashes your creativity in feature engineering</li>
</ul>
<h1>Validation setup</h1>
<ul>
<li>Truncate - select one assessment randomly for every child to reflect test set structure</li>
</ul>
<h1>Update 1: Features</h1>
<p>Motivation:
I was impacted by the ideas presented in this paper by Francois Chollet: <a href="https://arxiv.org/abs/1911.01547">On the measure of intelligence</a>. There are tons of interesting and powerful thoughts there. I was mostly stimulated by a discussion on how to measure intelligence:
- A/ by overall-skill-level
- B/ by skill-acquisition-tempo</p>
<p>In our case, we are measured by A/, which can be broken into two drivers:
- experience, i.e. how much time/effort the child has spent on various actitivities in the game. This formed my first group of features
- accuracy - how accurate was this child in her journey. This formed my second group of features.</p>
<p>However, skill-acquisition-tempo is a very interesting way to capture how quickly children are learning (features like minutes per level, events per level, etc.). This formed my 3rd group of features.</p>
<p>I love competitions with manual feature engineering. Combination human+machine wins, which represents my view on how AI will impact the world.</p>
<h1>Update 2: Feature selection</h1>
<ul>
<li>Calculated cv score after dropping a feature - did this individually for all ~150 features</li>
<li>Dropped all features which brought an improvement of less than 0.0001 on QWK score - I treat them as noise. Found ~100 noise features in this way.</li>
<li>Recalculated CV once more to see that the overall score improved slightly after removing the 100 noise features</li>
</ul> |
TensorFlow 2.0 Question Answering | 27th solution with luck and some questions from me | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks kaggle provides this competition and this is a big improvement for me to take an competition solo and reach this place.</p>
<h2><strong>1. Weird RoBERTa Architecture</strong></h2>
<p>Based on <a href="https://github.com/bojone/bert4keras">https://github.com/bojone/bert4keras</a>, my solution uses a weird RoBERTa structure implemented in Keras (<a href="https://www.kaggle.com/httpwwwfszyc/bert4keras4nq">https://www.kaggle.com/httpwwwfszyc/bert4keras4nq</a>). It differs from the huggingface version in:
1. 2*1024 token_type_id_embedding layer (0 put pretrained weights, 1 set to np.zeros((1,1024)))
2. no padding masking
3. weird zero masking in the attention mask:</p>
<pre><code>def build(self):
    x_in = Input(shape=(512, ), name='Input-Token')
    s_in = Input(shape=(512, ), name='Input-Segment')
    x, s = x_in, s_in

    sequence_mask = Lambda(lambda x: K.cast(K.greater(x, 0), 'float32'),
name='Sequence-Mask')(x)
# Embedding
x = Embedding(input_dim=self.vocab_size,
output_dim=self.embedding_size,
embeddings_initializer=self.initializer,
name='Embedding-Token')(x)
s = Embedding(input_dim=2, #1 or 2 , 2 finally because roberta need to train it
output_dim=self.embedding_size,
embeddings_initializer=self.initializer,
name='Embedding-Segment')(s)
x = Add(name='Embedding-Token-Segment')([x, s])
if self.max_position_embeddings == 514:
x = RobertaPositionEmbeddings(input_dim=self.max_position_embeddings,
output_dim=self.embedding_size,
merge_mode='add',
embeddings_initializer=self.initializer,
name='Embedding-Position')([x,x_in])
else:
x = PositionEmbedding(input_dim=self.max_position_embeddings,
output_dim=self.embedding_size,
merge_mode='add',
embeddings_initializer=self.initializer,
name='Embedding-Position')(x)
x = LayerNormalization(name='Embedding-Norm')(x)
if self.dropout_rate &gt; 0:
x = Dropout(rate=self.dropout_rate, name='Embedding-Dropout')(x)
if self.embedding_size != self.hidden_size:
x = Dense(units=self.hidden_size,
kernel_initializer=self.initializer,
name='Embedding-Mapping')(x)
layers = None
for i in range(self.num_hidden_layers):
attention_name = 'Encoder-%d-MultiHeadSelfAttention' % (i + 1)
feed_forward_name = 'Encoder-%d-FeedForward' % (i + 1)
x, layers = self.transformer_block(
inputs=x,
sequence_mask=sequence_mask,
attention_mask=self.compute_attention_mask(i, s_in),
attention_name=attention_name,
feed_forward_name=feed_forward_name,
input_layers=layers)
x = self.post_processing(i, x)
if not self.block_sharing:
layers = None
outputs = [x]
</code></pre>
<p>I concatenate the last 4 layers and put a single linear output on top for each output head.</p>
<h2><strong>2. data distribution</strong></h2>
<p>Samples-ratio of non-zero with 256 stride vs zero with stride 128 is 1:4.</p>
<h2><strong>3. Training</strong></h2>
<ol>
<li>Use RAdam with warmup 0.05 and train for 1 epoch</li>
<li>set different weights to match the distribution of dev set (I use 2 dev set for the 135000th to 140000th and for the 302373 to end section). As a result my loss is:</li>
</ol>
<p><code>Total_loss = loss_weights1*sample_weights*start_loss + loss_weights2*sample_weights*end_loss + loss_weights3*sample_weights*answertype_loss</code></p>
<p>loss weights for [start, end, answer_type] is 1:1: (1/sampleweight.mean())</p>
<h2><strong>4. Threshold killing False Positive</strong></h2>
<p>The result of my solution is CV 0.478; there are too many false negative samples, which I only saw after fixing my metric error 10 days ago. So, searching over my 2 dev sets, I finally chose a safe threshold (one slightly smaller than the threshold that reaches the max CV, in order to lower the risk). If a short answer scores less than 0.5, or a long answer less than 0.1, the answer is left blank.</p>
<p>As I only upload 1 model, I use stride=128 for inference.
My result: CV 0.523, public LB 0.63, private LB 0.65</p>
<h2><strong>5 My question</strong></h2>
<ol>
<li>how to mask padding:
I tried to add a padding mask before the embedding layer, but that raises an error because the layers after it do not support masking... So I had to use this weird RoBERTa architecture.</li>
<li>why my attention mask never works:
Given my time limit, I realised there was a mistake in the attention mask because of this code
<code>
sequence_mask = Lambda(lambda x: K.cast(K.greater(x, 0), 'float32'),
name='Sequence-Mask')(x)
</code>
I have no time to test it. So I just replace it by:
<code>
sequence_mask = Lambda(lambda x: K.cast(K.not_equal(x, 1), 'float32'),
name='Sequence-Mask')(x)
</code>
which masks token 1, because 1 represents padding. After TPU training I got a 0.53 CV score. But when I plugged this into GPU inference, the public LB only reached 0.48, which is weird. So my question is: does my replacement of the attention mask really do what I expect (mask token 1 in the value matrix of the attention layer)?</li>
</ol> |
TensorFlow 2.0 Question Answering | 1st place solution with code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1><strong>04/01/2020: Source code is attached below.</strong></h1>
<p>Thanks to the Kaggle and Tensorflow team for holding this competition. I was new to question answering, it took me more than 5 weeks to make my first real submission, and I have learnt a lot during the journey. My initial plan before joining the competition was to learn both QA and TF2.0, but in the end I didn't have time to touch TF2.0, so my solution stays in pure pytorch. Thanks to <a href="/sakami">@sakami</a> for the great kernel <a href="https://www.kaggle.com/sakami/tfqa-pytorch-baseline">https://www.kaggle.com/sakami/tfqa-pytorch-baseline</a>. Your kernel was the starting point of my journey. And of course thanks to huggingface (<a href="https://github.com/huggingface/transformers">https://github.com/huggingface/transformers</a>), NLP finetuning is made much easier. </p>
<p>My solution is described below.</p>
<h3><strong>- Overview</strong></h3>
<p>I trained on the provided candidates instead of sampling from the original documents (examples) as done in the baseline paper (<a href="https://arxiv.org/abs/1901.08634">https://arxiv.org/abs/1901.08634</a>). Since there are a total of 40 million candidates in the training data, for each epoch, I sampled only one negative candidate from each document. For more efficient training, hard negative sampling was used to replace uniform random sampling. The final submission was an ensemble of five models. </p>
<h3><strong>- Sampling Strategy</strong></h3>
<p>Initially, I tried uniform sampling on the negative candidates, but the result was unsatisfactory. The reason might be that most of the negative candidates are "too easy", the model might only need to learn some "basic" patterns for good candidate-level classification performance. But in the testing stage our actual goal is to predict the most probable positive candidate from each document, and this document-level classification is a more difficult task. So I replaced the uniform sampling by hard negative sampling to increase the difficulty of the candidate-level training, as expected, the performance was greatly improved. To perform hard negative sampling in the following models, I firstly trained a model with uniform sampling, and predicted on the whole training data, and stored the answer probability for each negative candidate. The last step was to normalize the probabilities of negative candidates within documents to form a distribution. For the following model training the negative candidates could be sampled from the probability distribution.</p>
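<p>A minimal sketch of this hard negative sampling; the container and field names are hypothetical, not my actual code.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)

def sample_negative(neg_candidates, neg_probs):
    p = np.asarray(neg_probs, dtype=np.float64)
    p = p / p.sum()                      # normalize within the document
    return neg_candidates[rng.choice(len(neg_candidates), p=p)]

# one negative candidate per document per epoch, sampled from the previous
# model's normalized answer probabilities (harder negatives come up more often)
epoch_negatives = [sample_negative(d['neg_candidates'], d['neg_probs'])
                   for d in documents]
</code></pre>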
<h3><strong>- New Tokens</strong></h3>
<p>According to the baseline paper, I added html tags as new tokens for better model performance. All 9 tags from the Data Statistics section of <a href="https://github.com/google-research-datasets/natural-questions">https://github.com/google-research-datasets/natural-questions</a> were added. For html tags that are not among the 9 added tokens, I replaced them with a unique token from the tokenization dictionary or simply added another new token to represent them. I did not have time to try adding paragraph or table numbers similar to what the baseline paper does.</p>
<h3><strong>- Model Architecture, Training and Evaluation</strong></h3>
<p>Overall, the model architecture was the same as the baseline paper (a 5 class classification branch + 2 span classification branch). The five classes was "no_answer", "long_answer_only", "short_answer", "yes", "no". In my case there was no span prediction for answers without a short answer span because I directly used candidates. The loss update of the span prediction branch was simply ignored if no short answer span exist during training. In testing stage, for each document, I used 1.0-prob(no_answer) as the long answer score (confidence) for each candidate, and the candidate with the highest confidence was chosen to represent the document. Short answer spans were forced to be within the highest score long answer candidate (not sure if this is necessary). I used prob(short_answer)+prob(yes)+prob(no) as the short answer score. The exact class of the short answer was determined by the maximum of the three prob values. For span prediction, the output token-level probabilities were mapped to the word-level (white space tokenized) probabilities for easier ensembling of models with different tokenizers. </p>
<h3><strong>- Models and Results</strong></h3>
<p>My final submission was an ensemble of one Bert-base, two Bert-large (WWM), and two Albert-xxl (v2) models, all uncased. The Bert large and Albert models had been tuned on the SQUAD data before training. Below list their validation performance on the dev set using the code <a href="https://github.com/google-research-datasets/natural-questions/blob/master/nq_eval.py">https://github.com/google-research-datasets/natural-questions/blob/master/nq_eval.py</a>. I did not try to implement the competition metric.</p>
<table>
<tr><th>model</th><th>long-best-threshold-f1</th><th>short-best-threshold-f1</th></tr>
<tr><td>Bert-base</td><td>0.618</td><td>0.457</td></tr>
<tr><td>Bert-large</td><td>0.679</td><td>0.541</td></tr>
<tr><td>Albert-xxl</td><td>0.700</td><td>0.555</td></tr>
<tr><td>ensemble</td><td>0.731</td><td>0.582</td></tr>
</table>
<h3><strong>- Final LB results</strong></h3>
<p>My best ensemble only achieved 0.66 public LB (0.69 private) performance using the optimized thresholds. At that time I had already lost most of my hope to win. In my last 2-3 submission, I arbitrarily played with the thresholds. One of the submissions scored 0.71 (both public and private LB), and I chose it and won the competition. Unbelievable.</p> |
TensorFlow 2.0 Question Answering | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First I would like to thank Kaggle Team and TensorFlow for wonderful competition, TFRC program for TPU credits, Google Cloud for 300$ credits and <a href="/prokaj">@prokaj</a> for sharing his solution.
It was great experience to work on big real-world high quality dataset, use TensorFlow 2.0 and TPUs first time, run inference in couple of seconds, train with batch_size=128 and finally win Gold Medal.</p>
<p>My solution is single model in TF 2.1 trained on TPU. It is Bert Joint with some tweaks and postprocessing. Here are main differences from Bert Joint:
1) Pretrained model: Whole-Word-Masking Bert Large
2) Tfrecords generated with include_unknowns=0.2 (10 times more examples without an answer than in the original paper).
3) Trained 1 epoch with batch size 128, lr=5e-5 (4-5 hours on TPU).
4) Use answer type logits:
- If answer_type=1 => yes_no_answer=’NO’
- If answer_type=2 => yes_no_answer=’YES’
- If answer_type=4 => no short answers</p>
<p>5) Get some answers with top_level=False</p>
<p>I did EDA and noticed that if 2 long answer candidates contain short answer and one candidate is top_level and another candidate is not top_level and it starts with "Li" HTML token => about 70% chance that correct candidate is non top_level one.
So I implemented this idea as postprocessing.</p>
<p>6) Linear regression over 9 logits as answer verifier.
9 logits included 5 answer type logits, cls_start_logit, cls_end_logit, start_span_logit, end_span_logit.</p>
<p>P.S.
In my local metric I had long_non_null_threshold = 1, short_non_null_threshold = 1, but for some reason it didn't have a big influence on the leaderboard score (compared to long_non_null_threshold = 2, short_non_null_threshold = 2).</p>
<p>Inference kernel: <a href="https://www.kaggle.com/user189546/tfqa-bert-train-tf2">https://www.kaggle.com/user189546/tfqa-bert-train-tf2</a>
Model weights: <a href="https://www.kaggle.com/user189546/unk0201128w">https://www.kaggle.com/user189546/unk0201128w</a>
Train code: <a href="https://www.kaggle.com/user189546/tfqa-train-code">https://www.kaggle.com/user189546/tfqa-train-code</a>
Tfrecords: <a href="https://www.kaggle.com/user189546/train-tfrecords">https://www.kaggle.com/user189546/train-tfrecords</a></p>
<p>P.S. I reused code from these sources:
1. <a href="https://www.kaggle.com/prokaj/bert-joint-baseline-notebook">https://www.kaggle.com/prokaj/bert-joint-baseline-notebook</a>
2. <a href="https://github.com/google-research/language/tree/master/language/question_answering/bert_joint">https://github.com/google-research/language/tree/master/language/question_answering/bert_joint</a>
3. <a href="https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/fast-and-lean-data-science/keras_flowers_gputputpupod_tf2.1.ipynb">https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/fast-and-lean-data-science/keras_flowers_gputputpupod_tf2.1.ipynb</a></p> |
TensorFlow 2.0 Question Answering | 21th place solution, puzzlingly shaking from public LB 3th | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://github.com/mikelkl/TF2-QA">Code for Solution</a></p>
<p>Thanks Kaggle for holding this wonderful competition. Big thanks to my awesome teammates <a href="https://www.kaggle.com/ewrfcas">@ewrfcas</a> and <a href="https://www.kaggle.com/leolemon214">@leolemon214</a>. It's a little pity to drop from 3rd place on the public LB; we hope to fetch a gold medal in the future.</p>
<p><strong>We carefully read the other top solutions and are still puzzled by the 4% drop on the private LB. Can someone help figure out the reason for this shake-up?</strong></p>
<p>Below are the valid parts of our solution. All of the following experiments were mainly performed on an offline <strong>dev set containing 1600 examples</strong>, and some results have been verified on the public LB.</p>
<h2>1. Preprocessing</h2>
<p>| No | Technique | Pros | Cons | Effect |
| ---- | --------------------------------- | ------------------------------------------------------------ | --------------------------------- | ------------------------------------- |
| 1 | TF-IDF paragraph selection | Shorten doc resulting faster inference speed and better accuracy | May loss some context information | - dev f1 +1.8%,<br>- public LB f1 -1% |
| 2 | Sample negative features till 1:1 | Balance pos and neg | Cause longer training time | dev f1 +2.248% |
| 3 | Multi-process preprocessing | Accelerate preprocessing, especially on training data | Require multi-core CPU | xN faster (with N processes) |</p>
<h2>2. Modeling</h2>
<p>| No | Model Architecture | Idea | Performance |
| ---- | -------------------------------------------------- | ------------------------------------------------------------ | -------------------- |
| 1 | Roberta-Large joint with long/short span extractor | 1. Jointly model:<br>- answer type<br>- long span<br>- short span<br>2. Output topk start/end logits/index | dev f1 63.986% |
| 2 | Albert-xxlarge joint with short span extractor | Jointly model:<br>- answer type<br>- short span | def short-f1 69.364% |</p>
<p>All of above model architectures were pretrained on SQuAD dataset by ourselves.</p>
<h2>3. Trick</h2>
<p>| No | Trick | Effect |
| ---- | ------------------------------------------------------------ | ---------------------------------- |
| 1 | If answer_type is yes/no, output yes/no rather than short span | public LB f1 +6% |
| 2 | 1. If answer_type is short, output long span and short span<br>2. If answer_type is long, output long span only<br>3. If answer_type is none, output neither long span nor short span | public LB f1 +8% |
| 3 | Choose the best long/short answer pair from topk * topk kind of long/short answer combinations | dev f1 +0.435% |
| 4 | <code>long_score = summary.long_span_score - summary.long_cls_score - summary.answer_type_logits[0]</code><br><code>short_score = summary.short_span_score - summary.short_cls_score - summary.answer_type_logits[0]</code> | - dev f1 +2.12%<br>- public LB +2% |
| 5 | Increase long [CLS] logits multiplier threshold to increase null long answer | dev long-f1 +3.491% |
| 6 | Decrease short answer_type logits divisor threshold to increase null short answer | dev short-f1 ? |</p>
<h2>4. Ensemble</h2>
<p>| No | Idea | Effect |
| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| 1 | For long answer, We vote long answers of 2 <code>Roberta-Large joint with long/short span extractor</code> models | dev long-f1 +3.341% |
| 2 | For short answer, use step 1 result to locate predicted long answer candidate as input, We vote short answers of 2 <code>Roberta-Large joint with long/short span extractor</code> models and 4 <code>Albert-xxlarge joint with short span extractor</code> models | - dev short-f1 +2.842% <br>- dev f1 67.569%, +2.635% <br>- public LB 71%, +5%<br>- private LB 67% |</p>
<p><a href="https://github.com/mikelkl/TF2-QA">Code for Solution</a></p> |
TensorFlow 2.0 Question Answering | rank68_the_ simplest idea to get the first medal | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>The whole process of solving this problem in my way:</strong></p>
<p><em><strong>source</strong></em>:fork from bert joint baseline notebook</p>
<p><strong><em>problems</em></strong> (before starting this competition): </p>
<p>1.the first time to deal with QA </p>
<p>2.just have a month to go</p>
<p>3.need to understand source codes in tf2.0 ver(just familiar with pytorch)</p>
<p>4.GPU (just have a 1080ti)</p>
<p><strong>Based on current problems, I came up with some solutions:</strong></p>
<p>1.read tf2.0 source code(necessary) #done</p>
<p>2.fine-tune bert-joint-model (<strong>the simplest solution to get started and indeed understand the whole process</strong>)#done</p>
<p>3.try to convert the tf2.0 code to a pytorch version (but have not gotten a better result until now) #done</p>
<p>4.change the pipeline(Some effort)</p>
<p>In this competition, I think the first and second points are the most important for the first time runners,and the main changes of mine are as follows:</p>
<p>1.FLAGS=DummyObject(skip_nested_contexts=True,
max_position=50,
<strong>max_contexts=130</strong>,
max_query_length=64,
max_seq_length=512,
doc_stride=128,
include_unknowns=0.02,
n_best_size=50,
max_answer_length=60)</p>
<p>2.entry["short_answer_score"] < 7.5:</p>
<p>3.entry["long_answer_score"] < 1.5:</p>
<p><strong>I hope to be of some help to those who have just started a new task with limited time and resources</strong></p>
<p>kernel:<a href="https://www.kaggle.com/vanle73/rank68-the-simplest-idea-to-get-medal?scriptVersionId=26631305">rank68_the_ simplest idea to get medal</a></p> |
TensorFlow 2.0 Question Answering | 23rd place solution: ensemble, rank passage and predict span | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly, thanks Kaggle for great challenging competition, and congrats winners.
This is my first time to handle QA task, so I could learn a lot of things from you.</p>
<p>Secondly, thank you to all the kagglers who had many discussions and ideas here, especially <a href="/christofhenkel">@christofhenkel</a>, <a href="/boliu0">@boliu0</a>, <a href="/higepon">@higepon</a> and <a href="/kashnitsky">@kashnitsky</a>.</p>
<p>This is my brief solution. I welcome questions and advice.</p>
<h1>My Solution</h1>
<p>Combining 1 ranker model & 4 span prediction ensemble model.</p>
<h2>Whole Prediction</h2>
<p>1) compute passage score for all long answer candidates on test dataset
2) select top 10 score passages for each record
3) feed selected passage into span prediction models
4) get averaged score by each model</p>
<h2>Ranker Model</h2>
<p>One of the problems is that the NQ dataset has very many long answer candidates. These include obviously negative passages, and predicting on all of them takes much time.</p>
<p>I used <code>bert-base-uncased</code> pre-trained and construct binary classification model to predict the passage is including long/short answer or not.
It gets about 0.98 recall@10 on my validation dataset.
It takes about 5 minutes for public test dataset.
Other settings are same as span prediction model.
By this model, I make <code>ranker-selected</code> dataset for train and test data by selecting top 10 score candidate for each record.</p>
<h2>Span Prediction Model</h2>
<p>I used 4 models for ensemble:</p>
<ol>
<li><code>bert-large-uncased-squad1</code> pre-trained + 1 epoch on NQ dataset</li>
<li><code>bert-large-uncased-squad2</code> pre-trained + 1 epoch on NQ dataset</li>
<li><code>spanbert-large-cased-squad2</code> pre-trained + 1 epoch on NQ dataset</li>
<li><code>bert-large-uncased-squad2</code> pre-trained + 1 epoch on <code>ranker-selected</code> NQ dataset</li>
</ol>
<p>All models are bert-joint based model.</p>
<h3>Training</h3>
<p>For 1st~3rd models, I use whole NQ dataset.
In training, as reported in <a href="https://arxiv.org/abs/1909.05286">Frustratingly Easy Natural Question Answering</a>, I use 196 as stride, different down sampling rate for answerable and non-answerable question (each 0.01, 0.04).
Batch size is 32, max learning rate is 3e-5.
For 4th model, I used <code>ranker-selected</code> NQ training dataset and adjust sampling rate to 0.03 for answerable and 0.12 for unanswerable.</p>
<h3>Inference</h3>
<p>I used only <code>ranker-selected</code> test dataset. This makes slight improvement on val score than predicting all candidates, and what is more important, this makes faster prediction.
It takes about 3 minute for each model prediction.
I get 0.65 private LB score by single model, 0.67 private LB score by ensemble model.</p>
<h1>Trials which didn't work for me</h1>
<ul>
<li>using albert, xlnet didn't improve scores. maybe I need more tuning.</li>
<li>Attention over Attention didn't affect positively. But I don't have confidence for implementation.</li>
<li>BERT layer combination on last 2, 4, 8, 12 layers. It slightly improve but pre-trained by squad was better.</li>
<li>kinds of dropout on dense layer.</li>
<li>label smoothing on start and end position.</li>
<li>combine ranker model score into span prediction lead worse result.</li>
<li>dividing short and long span prediction, or only predict short spans get worse result.</li>
<li>kinds of preprocessing
<ul><li>no special token</li>
<li>partly use special token in BERT-joint</li></ul></li>
<li>kinds of postprocessing
<ul><li>use only max context position as score</li>
<li>get all logits score and obtain top k candidates</li></ul></li>
</ul>
<p>Thanks.</p> |
TensorFlow 2.0 Question Answering | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Good evening,</p>
<p>first of all, I'd like to thank Kaggle and the hosts for this awesome challenge! It was really fun to work with TF2.0 and TPUs. What would have taken days on my local 2 x 1080 TI machine just took a couple of hours. For example, the actual training time for my final model (excluding tokenization and post-processing) is just a little more than 2 hours.</p>
<p>Secondly, congrats to all the winners. I locked my submissions a week ago with +0.04 on public but everyone kept improving. Maybe I should have continued working on this challenge as well.</p>
<p>My solution is just a single TF2.0 model. It uses custom heads and a BERT transformers backbone (large version). For modeling and training I am using the great <a href="https://github.com/huggingface/transformers">transformers</a> library. I think that the <a href="https://github.com/see--/natural-question-answering/blob/master/models.py#L8-L28">following snippet</a> is useful to understand the modeling:</p>
<pre><code>class TFBertForNaturalQuestionAnswering(TFBertPreTrainedModel):
    def __init__(self, config, *inputs, **kwargs):
        super().__init__(config, *inputs, **kwargs)
        self.num_labels = config.num_labels

        self.bert = TFBertMainLayer(config, name='bert')
        self.initializer = get_initializer(config.initializer_range)
        self.qa_outputs = L.Dense(config.num_labels,
                                  kernel_initializer=self.initializer, name='qa_outputs')
        self.long_outputs = L.Dense(1, kernel_initializer=self.initializer,
                                    name='long_outputs')

    def call(self, inputs, **kwargs):
        outputs = self.bert(inputs, **kwargs)
        sequence_output = outputs[0]
        logits = self.qa_outputs(sequence_output)
        start_logits, end_logits = tf.split(logits, 2, axis=-1)
        start_logits = tf.squeeze(start_logits, -1)
        end_logits = tf.squeeze(end_logits, -1)
        long_logits = tf.squeeze(self.long_outputs(sequence_output), -1)
        return start_logits, end_logits, long_logits
</code></pre>
<p>As you can see, the natural question answering task can be treated like SQuAD 2.0 with an additional head for long answers. Note that we just need a single output: the opening tag of the HTML bounding box. I guess most competitors used a similar modeling approach, so I think what made the difference to most other solutions is the sampling.</p>
<p>I changed the empty answer ratio so that it is similar to the full dataset. I.e. roughly as many empty answers as answers with a long answer. I started with a really low empty answer ratio which I got from the <a href="https://arxiv.org/abs/1901.08634"><code>bert-joint</code> paper</a>, but I couldn't reach a good score. I tuned a few hyper parameters but overall I got good results with a wide range of parameters. Adding HTML tags as custom tokens helped a bit. I also tried different start weights and found that:</p>
<p><code>bert-large-uncased</code> (~0.70 LB) < <code>bert-large-uncased-whole-word-masking</code> (~0.72 LB) < <code>bert-large-uncased-whole-word-masking-finetuned-squad</code> (~0.73 LB).</p>
<p>That's about it. Thanks to <a href="/boliu0">@boliu0</a>, <a href="/christofhenkel">@christofhenkel</a> and <a href="/kentaronakanishi">@kentaronakanishi</a> for fixing and providing the metric!</p>
<p>Please refer to my repository for implementation details and instructions to reproduce:
* <a href="https://github.com/see--/natural-question-answering">https://github.com/see--/natural-question-answering</a></p>
<p>You can find the 2nd place kernel and pretrained weights on Kaggle:
* <a href="https://www.kaggle.com/seesee/submit-full">https://www.kaggle.com/seesee/submit-full</a>
* <a href="https://www.kaggle.com/seesee/nq-bert-uncased-68">https://www.kaggle.com/seesee/nq-bert-uncased-68</a></p>
<p>Feel free to ask questions and / or create GitHub issues.</p> |
TensorFlow 2.0 Question Answering | 8th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I'd like to thank the organizers for this interesting competition with a not-so-standard dataset. Question answering is one of the most fun tasks in NLP and it is great to finally see it on Kaggle. Also, thanks for providing GCP credits and TPU quota to the participants. I'm sure it greatly widened the range of models and ideas that were tested in this competition.</p>
<p>My work started as an attempt to reimplement BERT-joint in PyTorch using RoBERTa as the backbone. However, I added quite a few tricks and tweaks along the way and ended up with a model and pipeline significantly different from the original BERT-joint. Here I'm going to describe the main changes.</p>
<h2>Preprocessing</h2>
<p>Instead of sliding a window over the entire Wikipedia article, I processed each top level long answer candidate separately. For each candidate, I either converted it into one training example if its length permitted it or split it into several training examples if the candidate was too long. I also added some of the surrounding context to those candidates that were particularly short.</p>
<p>The above preprocessing resulted in approximately 152k positive and more than 12 million negative (not containing any answer) examples, so I decreased the number of negative examples to 160k by random sampling. I used a kind of hard negative mining strategy by sampling more of those negative examples that have high TF-IDF similarity between the question and the candidate. I also sampled several non-overlapping subsets of negative examples to use for different epochs of training thus increasing the diversity of my training data.</p>
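<p>A small sketch of TF-IDF-weighted negative sampling as described above (illustrative only, not the actual preprocessing code):</p>
<pre><code>import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def sample_hard_negatives(question, negative_texts, n_keep, seed=0):
    # Sample negatives with probability proportional to their TF-IDF similarity
    # to the question, so "harder" negatives are over-represented.
    vec = TfidfVectorizer().fit([question] + negative_texts)
    sims = cosine_similarity(vec.transform([question]),
                             vec.transform(negative_texts)).ravel() + 1e-6
    probs = sims / sims.sum()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(negative_texts), size=min(n_keep, len(negative_texts)),
                     replace=False, p=probs)
    return [negative_texts[i] for i in idx]
</code></pre>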
<h2>Model</h2>
<p>My model is just RoBERTa-large (I use the implementation from Transformers library) with a new output layer on top of it. In addition to a token-level span predictor for short answers, I use a binary classifier to determine whether a candidate is a long answer or not. The combination of an answerability classifier and a span predictor is a standard approach for SQUAD2.0 (XLNet, RoBERTa, ALBERT all use it). NQ dataset differs from SQuAD2.0 in that a question can be considered answerable even when the correct short answer span is empty (this happens when a question has a long answer, but no short answer).</p>
<p>For span predictor, I use a trick from XLNet: instead of predicting start and end tokens independently, I first predict the start token, then concatenate its representation from the final encoder layer to representations of all the tokens and pass these concatenated representations as input to the end token predictor. This means that the prediction of the end token is conditioned on the start token, which significantly improves the quality of span prediction. </p>
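<p>A sketch of such a conditional span head (shapes and names are assumptions; at training time the gold start position would typically replace the argmax):</p>
<pre><code>import torch
import torch.nn as nn

class ConditionalSpanHead(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.start_head = nn.Linear(hidden_dim, 1)
        self.end_head = nn.Linear(hidden_dim * 2, 1)

    def forward(self, hidden):                           # (batch, seq_len, hidden_dim)
        start_logits = self.start_head(hidden).squeeze(-1)
        start_idx = start_logits.argmax(dim=-1)          # predicted start position
        start_state = hidden[torch.arange(hidden.size(0)), start_idx]
        start_state = start_state.unsqueeze(1).expand_as(hidden)
        # The end prediction is conditioned on the start token's representation.
        end_logits = self.end_head(torch.cat([hidden, start_state], dim=-1)).squeeze(-1)
        return start_logits, end_logits
</code></pre>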
<p>I did not find a way to include YES/NO answers in my predictions without a decrease in the total score so I chose not to predict such answers.</p>
<p>During inference, I first find the long answer candidate that has the highest answerability score. If this score is above a certain threshold, I choose this candidate as my long answer prediction and predict a short span for this candidate. If this span's score is also above a certain threshold, I choose it as my short answer prediction. I used the official NQ dev set to find the best thresholds for both long and short answers.</p>
<h2>Training hyperparameters</h2>
<p>I used AdamW optimizer with weight decay of 0.01 and a linearly decaying learning rate with warmup for all experiments. I had neither time nor computational resources to try a wide range of hyperparameters so the ones I've chosen can be far from optimal. I got the best results on the dev set with a model trained for 5 epochs with a batch size of 48 and a maximum learning rate of 2e-5. I used this model for one of my final submissions. I also had two other models with good results that I later used for ensembling: one was trained for 3 epochs with a batch size of 24 and a maximum learning rate of 3e-5 and the other was trained for 2 epochs with a batch size of 15 and a maximum learning rate of 3e-5.</p>
<p>Training RoBERTa-large for 1 epoch (312k training examples) takes approximately 4 hours on a single V100 GPU using mixed precision.</p>
<h2>Ensembling</h2>
<p>For my second final submission, I ensembled three models by simply summing their output layer logits. This approach led to a significant improvement on the dev set, but it could not fit in the submission time limit. In order to fix it, I decided to limit the number of long answer candidates per question by taking only the first N candidates (most answers are found in the first few paragraphs anyway). However, when my final models and ensembling code were ready, I only had five hours before the deadline and two submissions left so I did not have a chance to select the maximum value of N that will allow my submission to fit within the time limit. I ended up choosing too small of a value for N which probably harmed the performance of my ensemble. In hindsight, it seems a better approach could be to score all candidates with just one model and then use the other two models only for several candidates that got the highest answerability scores from the first model.</p>
<p>In the end, all three of my main models, as well as the ensemble, got a score of 0.68 on the private test set. Well, at least I got stable results.</p>
<h2>Some ideas that did not quite work</h2>
<ul>
<li><p>While SQuAD2.0 pretraining seemed beneficial in my early experiments, it harmed the performance of my final models so I ended up not using it. I suspect that while changing the output layer architecture I might have introduced some bugs in the SQuAD pretraining code. It explains why many other participants, as well as several papers about the NQ dataset, report improvements from SQuAD pretraining.</p></li>
<li><p>I tried adding one more binary classifier to determine whether a candidate contains a short answer or not, but it did not lead to an improvement on the dev set. Now I can see that my early submission with this additional classifier got a slightly higher score on the private test set than a similar submission without it, so it might have been a useful idea after all. </p></li>
</ul> |
TensorFlow 2.0 Question Answering | Brief summary of 13th place solution (hide the pain Harold) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>At first, congrats to every team that scored higher than we did - since we finished on 13th place, that means all of you finished in the gold zone.</p>
<p>Our best solution consists of 4 bert-based models: 1 distilbert, 2 albert-large, and 1 bert-large WWM. </p>
<p><a href="/kashnitsky">@kashnitsky</a> has trained our best single model that scored around 0.64 on the local dev set (we used official NQ dev set for validation). He used original bert-joint repo and added hacks described in the <a href="https://arxiv.org/abs/1909.05286">paper</a>. He also spent a lot of time trying to make ALBERT models work, but none of them (even xxlarge version) wasn't better than bert-large. </p>
<p><a href="/yaroshevskiy">@yaroshevskiy</a> implemented his own version of bert-joint in pytorch, including all the pre and post processing stuff. His best model is based on ALBERT large pretrained on squad 2.0. Oleg also came up with a trick that one might call "window smoothing". The trick addressed bad predictions of start/end probabilities on the window edges. The idea is that for those start/end logits that are close to the edge of the window we use a linear combination of the current window logits and logits from the neighboring window. This improved the score by around 0.01-0.02.</p>
<p>I implemented my own pytorch model that is different from bert-joint in two aspects:
- Instead of working on arbitrary chunked texts, I work on top of long answer candidates
- start/end logits are predicted jointly by an attention-like layer, and the unrealistic start/end positions (like padding or question tokens) are filled with -inf</p>
<p>Implementation for the start/end module is the following:</p>
<pre><code>class StartEndModule(nn.Module):

    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.start = nn.Conv1d(input_dim, hidden_dim, kernel_size=1)
        self.end = nn.Conv1d(input_dim, hidden_dim, kernel_size=1)

    def forward(self, hidden, text_mask):
        start = self.start(hidden).unsqueeze(3)
        end = self.end(hidden).unsqueeze(2)
        logits = (start * end).sum(dim=1)
        triu_mask = torch.triu(logits, diagonal=1) == 0
        text_mask = ((text_mask.unsqueeze(2) * text_mask.unsqueeze(1)) &lt; 0.5)
        mask = text_mask | triu_mask
        mask[:, 0, 0] = False
        logits.masked_fill_(mask, float("-inf"))
        return logits, mask.float()
</code></pre>
<p>Thus, logits is a square matrix where each entry is a score for a particular start/end pair. In order to find the best-scoring span, one just needs to compute an argmax over those scores.</p>
<p>I also applied <a href="https://arxiv.org/abs/1803.05407">SWA</a> to both of my models and got a nice boost in score (around 0.015), while Oleg and Yury reported none to minor improvements from SWA.</p>
<p>In order to speed up inference, we used the distilbert model for candidate prescoring. The idea is that for all models other than distilbert, we ignore those windows/candidates that received low scores from the distilbert model.</p>
<p>In order to blend our models together, we used a lightgbm boosting tree. For each candidate, we collect the corresponding scores from all the models as well as some meta-features (such as answer length or relative position of this candidate in the document) and the target is to predict if this candidate contains an answer. </p>
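<p>The blender can be pictured roughly like this (feature and column names are illustrative, not the actual ones):</p>
<pre><code>import lightgbm as lgb

def fit_blender(train_df):
    # One row per long-answer candidate: per-model scores plus simple meta-features,
    # target = whether the candidate is the true answer.
    features = ["distilbert_score", "albert_score", "bert_wwm_score",
                "answer_length", "relative_position"]
    model = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05)
    model.fit(train_df[features], train_df["is_answer"])
    return model
</code></pre>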
<p>Our best blend achieved around 0.67 on the local dev set and 0.68 on the LB.</p> |
Understanding Clouds from Satellite Images | 4th Place Solution: Stabilizing Convergence in Understanding Clouds | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><img src="https://storage.googleapis.com/kaggle-media/competitions/MaxPlanck/Teaser_AnimationwLabels.gif"></p>
<p>First of all, I would like to express my gratitude and appreciation to the following parties for organizing such a great competition:
- <a href="https://www.kaggle.com">Kaggle</a>
- <a href="https://www.kaggle.com/MaxPlanckInstitute">Max Planck Institute for Meteorology</a></p>
<p>Besides, I would like to use this opportunity to thank my fellow kagglers for all the insightful posts in the discussion forum of various competitions. I have also learned a lot of stuffs and gained knowledge by reading from past solutions. There is a good thriving culture of idea sharing and contributions which I have found in every corner of Kaggle and I loved to be part of it.</p>
<h2>The Main Challenge</h2>
<p>The main challenge I faced initially in this competition is that many models of different architectures tend to overfit easily in the early training stage, especially the larger and deeper models such as SE-ResNext-101 and EfficientNet B5-B7. I suspected the culprit might be that the given labels are too noisy, which increases the tendency of the model to overfit to the noise in the training data, as the labels were determined by the union of the areas marked by all annotators. Also, the shape of the provided labels is rectangular instead of fitted exactly to the boundary of the cloud patterns. I understand the reasons behind <a href="https://arxiv.org/pdf/1906.01906.pdf">these decisions made by the competition host</a>, and here goes my whole journey in this competition, which revolved around stabilizing the convergence of model training.</p>
<h2>Solution Overview</h2>
<p>My solution for this competition is mainly comprised of the followings:</p>
<ul>
<li><p><strong>Pure segmentation models without false positive classifier</strong>
After reaching public LB 0.6752 with segmentation model, I've trained a few classifiers using Resnet34, SE-ResNext-50 and EfficientNet-B4 but the performance is pretty unstable (+/- 0.003 ~ 0.010) in local cross validations of 10 folds. Thus, I discarded the classifiers and decided to stick with segmentation models.</p></li>
<li><p><strong>Network Architectures</strong>
I've used the awesome implementations of various models from <a href="https://github.com/qubvel/segmentation_models.pytorch">segmentation_models.pytorch</a>, <a href="https://github.com/Cadene/pretrained-models.pytorch">pretrained-models.pytorch
</a>, <a href="https://github.com/lukemelas/EfficientNet-PyTorch">EfficientNet-PyTorch</a> and <a href="https://www.kaggle.com/c/understanding_cloud_organization/discussion/115787#671393">Resnet34-ASPP</a> from <a href="/hengck23">@hengck23</a>. My final ensemble used 7 folds of EfficientNet-B4-FPN and 3 folds of Resnet34-ASPP as they have better performance and more stable in error convergence in my case after running rounds of experiments using various network architectures.</p></li>
<li><strong>RAdam Optimizer</strong>
RAdam helped to stabilize training error convergence as it is less sensitive to learning rate change in my case, thus minimizing the variance.</li>
<li><strong>Flat threshold of 0.4 for all classes</strong>
A threshold of 0.4 yielded the highest cross-validation DICE score among [0.4, 0.5, 0.6]; no further fine-tuning of the threshold was done.</li>
<li><strong>Minimum segmentation mask size of 5000 pixels for all classes</strong>
The mask size threshold is set just high enough to filter out noise; no other post-processing method is used (a small sketch of both rules follows after this list).</li>
<li><strong>Input Size</strong>
Downsized from the raw size of 1400 x 2100 to 700 x 1050. After applying augmentations, it is downsized again from 700 x 1050 to 384 x 576.</li>
<li><strong>Augmentations used in training</strong>
<ul><li>horizontal flip</li>
<li>vertical flip</li>
<li>random shift, scale and rotate</li></ul></li>
<li><p><strong>Test-time Augmentations (TTA)</strong>:</p>
<ul><li>horizontal flip</li>
<li>vertical flip</li>
<li>180 degree flip (horizontal + vertical flip)</li></ul></li>
<li><p><strong>Pseudo-labeling</strong>
I've used two approach for pseudo-labeling, one in which only the confident pseudo-labels are selected and use in training, another in which pseudo-labels are generated from all the test data. In my case, the model training performance of using pseudo-labels from all test data is more robust and stable in terms of error convergence and achieve higher DICE score.</p></li>
<li><strong>Ensemble with equal weight averaging</strong> </li>
<li><strong>Trained initially with BCE Loss, fine-tuned with Symmetric Lovasz Loss originated from this <a href="https://arxiv.org/abs/1705.08790">paper</a> and modified by <a href="/tugstugi">@tugstugi</a></strong>
Below is the PyTorch implementation code of Symmetric Lovasz Loss:
<pre><code>def symmetric_lovasz_loss(outputs, targets):
    batch_size, num_class, H, W = outputs.shape
    outputs = outputs.contiguous().view(-1, H, W)
    targets = targets.contiguous().view(-1, H, W)
    return (lovasz_hinge(outputs, targets)
            + lovasz_hinge(-outputs, 1 - targets)) / 2
</code></pre></li>
<li><strong>GPU used</strong>
<ul><li>2 x RTX2080Ti</li></ul></li>
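<p>A minimal sketch of the two post-processing rules above (0.4 probability threshold, 5000-pixel minimum mask size); the names and the per-mask rather than per-component size check are simplifications:</p>
<pre><code>import numpy as np

def postprocess(prob_mask, threshold=0.4, min_size=5000):
    # prob_mask: (H, W) predicted probabilities for one class of one image.
    mask = (prob_mask > threshold).astype(np.uint8)
    if mask.sum() &lt; min_size:   # too small to be a real cloud pattern
        mask[:] = 0
    return mask
</code></pre>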
<h2>Conclusion</h2>
<p>I think local cross validation is very important and we should always trust it, even though the score shown on the Public LB might be lower or higher, as it is computed on only a small subset of the test dataset. Besides, the <strong>combination of RAdam optimizer, Symmetric Lovasz Loss, pseudo-labeling and ensembling</strong> helped significantly in stabilizing the convergence and improving the score.
<br><br>
Thanks for reading! See you again in upcoming competitions.</p> |
TensorFlow 2.0 Question Answering | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The final submission was a single BERT-based model. It gave 0.71 on the public data and 0.69 on the private leaderboard. Looking at other solutions, it was a little bit overcomplicated.</p>
<h2>Preprocessing</h2>
<p>I left out the special tokens introduced in the baseline script (<code>[ContextId=..][Paragraph=0]</code> etc.). Instead I kept the simplified HTML tags (table tags, e.g., contained <code>colspan</code> info, which I removed). I also added <code>&lt;*&gt;</code>, <code>&lt;/*&gt;</code> at the beginning and the end of each segment. I kept 4% of the negative examples, and also kept the very long answers that were not contained within one segment. I also processed the entire document text, so the <code>max_contexts</code> argument of the original script was ignored.</p>
<h2>Model output</h2>
<p>Similarly to the baseline, I used the classification head and one head for the span start and end logits. With masking, this is used to get both the long answer and short answer logits.
I also added a ''cross'' head, which is a bilinear function of pairs of the sequence outputs of the BERT model. Short span logits are then obtained as the sum of the start and end logits and the corresponding output of the cross head.
Impossible spans were masked out and a <code>softmax</code> gave the span probabilities. For the long span, a cross-entropy criterion was used for both start and end logits. For the short spans, the error was the negative log of the total probability of the positive short spans. These error terms were computed only for examples having long/short answers, so the aim here is to learn the position given that there is an answer; the probability of having an answer came from the <code>answer_type</code> output.</p>
<h2>Postprocessing</h2>
<p>For each segment, the long and short spans with maximal probability were computed. From the answer type head, the probabilities of having a short or long answer in the segment were computed and these probabilities were assigned to the most likely spans within the segment. These votes were maximized over all segments containing the given span. Then the spans with the highest overall scores were considered for the answer. Thresholds were computed using the development data of the NQ dataset.</p>
<h2>Training</h2>
<p>I trained on TPU for 2 epochs using a learning rate of 2.5e-5 and a batch size of 64. Before training on NQ data, I fine-tuned the BERT model on the SQuAD 2.0 dataset with the same settings and preprocessing.</p>
<h2>Code</h2>
<p>The final submission was produced with <br>
<a href="https://www.kaggle.com/prokaj/fork-of-baseline-html-tokens-v5">https://www.kaggle.com/prokaj/fork-of-baseline-html-tokens-v5</a></p>
<p>Pre and post processing code <br>
<a href="https://www.kaggle.com/prokaj/bert-baseline-pre-and-post-process">https://www.kaggle.com/prokaj/bert-baseline-pre-and-post-process</a></p>
<p>final model in saved model format <br>
<a href="https://www.kaggle.com/prokaj/tpu-2020-01-22">https://www.kaggle.com/prokaj/tpu-2020-01-22</a></p>
<p>model code (used on tpu) <br>
<a href="https://www.kaggle.com/prokaj/tpu-code">https://www.kaggle.com/prokaj/tpu-code</a></p>
<p>BERT implementation from official tensorflow models (preinstalled on TPU)
<a href="https://github.com/tensorflow/models/tree/master/official">https://github.com/tensorflow/models/tree/master/official</a></p> |
TensorFlow 2.0 Question Answering | 47th Place Solution Write-Up (Ensembling) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you Kaggle and the Kaggle community for this awesome competition. I learned a lot.
We tried a lot of new things with PyTorch in the last week but weren't able to get them working; still, it has been really fun.</p>
<h2><strong>Our Solution</strong></h2>
<p>Our current solution is a TF 2.0 solution based on this great kernel <a href="https://www.kaggle.com/yihdarshieh/inference-use-hugging-face-models">https://www.kaggle.com/yihdarshieh/inference-use-hugging-face-models</a> by <a href="/yihdarshieh">@yihdarshieh</a>. Initially I started off by finetuning the official bert-joint baseline, but it didn't give much improvement. In the next few weeks I completed the TPU setup for the contest and had my pipeline ready for training and validation on the dev set.</p>
<p>All the experiments were done on Google Colab's free-tier TPUs. This was my first time seriously using TPUs and I have to say, it feels so good, because they are so fast. Using TPUs dramatically reduced experimentation time in my case.</p>
<p>Our solution is a simple ensemble of 2 of the following models:</p>
<ol>
<li>BERT-joint-large public lb score 0.6</li>
<li>BERT-joint-base public lb score 0.58</li>
<li>DistillBERT-joint public lb score 0.54</li>
<li>Our final solution is an ensemble of 1 and 2 with some additions to postprocessing which scores 0.64 on public lb (also scored 0.64 on private LB but we didn't choose our best solution for final submission as it scored less on public LB with the new postprocessing)</li>
</ol>
<p>Here's our ensembling code, we use simple weighted average ensembling.</p>
<pre><code>nq_logits = bert_nq(nq_inputs, training=False)
base_nq_logits = base_bert_nq(nq_inputs, training=False)

(start_pos_logits, end_pos_logits, answer_type_logits) = nq_logits
(base_start_pos_logits, base_end_pos_logits, baseanswer_type_logits) = base_nq_logits

start_pos_logits = (0.2 * start_pos_logits + 0.8 * base_start_pos_logits)
end_pos_logits = (0.2 * end_pos_logits + 0.8 * base_end_pos_logits)
answer_type_logits = (0.2 * answer_type_logits + 0.8 * baseanswer_type_logits)
</code></pre>
<h2><strong>Postprocessing</strong></h2>
<p>After experimenting with multiple single models I started focusing on postprocessing to improve model performance. Initially I used the postprocessing provided by this great kernel <a href="https://www.kaggle.com/prokaj/bert-joint-baseline-notebook">https://www.kaggle.com/prokaj/bert-joint-baseline-notebook</a> by @prvi, which helped my single models score in the range 0.56-0.58. To improve further, I had an in-depth look at the predictions of the model and the ground truths. Here I found that our model was predicting duplicate answer spans, so I added duplicate-removal logic to the postprocessing, which gave a score increase of 0.01 on the public LB and 0.02 on the dev set. I also observed a score improvement if my model didn't predict any "YES/NO" answers, so essentially my model only output answer spans and null answers in my final solution.</p>
<p>Deciding on thresholds was one of the important things for predicting valid answers, but I didn't play too much with answer thresholds. Initially I ran inference on the validation set with 5 different answer thresholds [1.5, 3.0, 4.5, 6.0, 7.5] and saw the best validation score with a combination of 1.5 for the long answer and 3.0 for the short answer, which I kept the same for the final solution.</p>
<h2><strong>New Addition to postprocessing</strong></h2>
<p>Postprocessing from @prvi's kernel only looks at the current <code>512</code>-token sequence to choose start and end indexes, but as we are using strides of length <code>128</code> there's an overlap between every 2 consecutive sequences. So I decided to append the top <code>k</code> start and end indexes from every 2 consecutive sequences to choose a pair of start and end indexes. In this case, for overlapping start and end indexes I got 4 possible score values, out of which I kept only the max score value and discarded the other 3. This postprocessing didn't give us an improvement on the public LB for our current best model, but it helped by an increment of 0.17 for a weak model.</p>
<p>Our final solution didn't use the new postprocessing but our best solution on private set scored 0.64 (a 0.01 increment) with new postprocessing.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F887695%2Fae6bdb038215e5da57e9b68cc6de5242%2Fscore.png?generation=1579758032774744&alt=media" alt=""></p>
<h2><strong>Last Week of Competition</strong></h2>
<p>In the last week of the competition I got an opportunity to team up with <a href="/abhishek">@abhishek</a> and <a href="/rinnqd">@rinnqd</a>. After working on BERT-large and BERT-base we wanted to try out ALBERT and RoBERTa in the last few days, so we started working on PyTorch for these two models. Our GPU training pipeline was completed by <a href="/abhishek">@abhishek</a> in just a few hours. We tried to port it to TPU over the next few days but weren't able to get it working in time. It was a great experience teaming up with these guys; I learned a lot about how to easily prototype a training pipeline, how to start off with a validation pipeline and how important that is, and how to write TPU code for PyTorch.</p>
<p>Thanks guys for teaming up. And thanks Kaggle for such a great contest.</p>
<p>Lastly, here's our final solution kernel <a href="https://www.kaggle.com/axel81/inference-use-hugging-face-postprocess">https://www.kaggle.com/axel81/inference-use-hugging-face-postprocess</a>.</p>
<p>Happy Kaggling :)</p> |
ASHRAE - Great Energy Predictor III | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ASHRAE - Great Energy Predictor III
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Finally, a nice competition! Not easy due to the really noisy data, but new lessons learned again. Here are some insights into my solution, which reached place #9 and gold.</p>
<p>One of my objectives was to survive the shake-up. As soon as the leaked data was discovered I decided to use it mainly for hold-out validation. I started training without any leak data, playing with feature engineering, different models and different time CV folds, and drove my work only with <strong>correlated CV + hold-out + LB results</strong>. It was a bit frustrating to have a very low ranking with this strategy, but I knew it should help in the end. In the last 2 weeks I included the 2017 leak data in training and kept the 2018 leak data for hold-out.</p>
<p>My solution is an <strong>ensemble (ridge regression) of several models</strong>:
- LightGBM (x7)
- CatBoost (x4)
- Neural Network with <a href="https://www.kaggle.com/isaienkov/keras-nn-with-embeddings-for-cat-features-1-15#685556">categories embedding</a> and features standard normalization (x4)
- LiteMORT (x1)</p>
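<p>Conceptually, the ensembling step looks like this (a sketch under assumed names and shapes, not the actual code):</p>
<pre><code>from sklearn.linear_model import Ridge

def fit_stacker(oof_preds, y_log):
    # oof_preds: (n_rows, n_models) out-of-fold predictions in log1p space.
    # y_log: log1p(meter_reading) targets aligned with oof_preds.
    stacker = Ridge(alpha=1.0)
    stacker.fit(oof_preds, y_log)
    return stacker  # stacker.coef_ holds each model's blending weight
</code></pre>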
<p><strong>Features are quite simple</strong>: no model uses more than 15 features, 12 on average:
- <code>building_id</code>, <code>meter</code>, <code>site_id</code>, <code>primary_use</code>, <code>week_day</code>, <code>is_holiday</code>
- <code>square_feet</code>, <code>cloud coverage</code>, <code>precip_depth_1_h</code>
- <a href="https://github.com/malexer/meteocalc">feels like</a> temperature, building age, <code>square_feet</code> * <code>floor_count</code>
- <code>air_temperature</code> roll mean 24h, <code>sea_level_pressure</code> trend roll mean 24h
- <code>air_temperature</code> <a href="https://www.investopedia.com/terms/c/colddegreeday.asp">cooling degree</a> per day, <code>meter_reading</code> median per building per meter per year</p>
<p>Notice that some of my models do not use building_id, to try to generalize better.
A few models are per meter, others not.
For each model's CV I applied a different time split: x3, x4 and x6.</p>
<p><strong>Cleansing and imputation were important too</strong>. For weather data, I tried to find different external sources to fill gaps, but it did not give any boost, mainly because we're not 100% sure of the location of each <code>site_id</code> and some data such as cloud coverage was not consistent with the training data. Finally, I trained an additional simple LightGBM model to impute missing data based on the provided data (similar to this <a href="https://www.kaggle.com/frednavruzov/nan-restoration-techniques-for-weather-data">kernel</a>). For meter readings, cleansing was not obvious: removing zero patterns (electricity, hot water in summer ...) looked like a good idea, but one can notice that such patterns also appear in the 2017/2018 leaked data. Some buildings could also have renovation periods that would explain zero patterns. So each of my models had a different zero-pattern drop strategy (<code>site_id</code> = 0 before May 2016 only, full drop, partial drop based on duration and/or season).</p>
<p><strong>Post-processing</strong>: none, and that was a mistake now that I see the top solutions.</p>
<p>For the final submission I selected the ones with the best CV/Hold-Out/LB correlation, and it was a good choice as they had the best score on the Private LB!</p>
<p><strong>What did not work:</strong>
- Too many features led to overfitting, especially with target encoding.
- Tree-based second-level model for ensembling (overfit again)
- External data for weather (not in line with the provided data)
- Non time-split CV (auto-correlation)</p>
<p>Thanks to organizers, Kaggle and competitors for this challenge! </p> |
TensorFlow 2.0 Question Answering | 4th place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and the hosts for this competition. It was my first time participating in a question answering competition. I'm happy I learned a lot by doing research in this area.</p>
<p>Here is my solution:</p>
<h1>preprocessing</h1>
<ul>
<li>No preprocessing for Text.</li>
<li>Different Negative sampling rate. Tried 0.02, 0.04 and 0.06.</li>
</ul>
<h1>Data Aug</h1>
<ul>
<li>TTA: did not work</li>
<li>Changing the answer by replacing it with a similar question's answer: did not work</li>
<li>Transforming data from other question answering datasets like SQuAD and HotpotQA: did not work</li>
</ul>
<h1>Models</h1>
<ul>
<li>Tried XLNet, Bert Large Uncased/Cased, SpanBert Cased, Bert Large WWM.</li>
<li>Same loss function and prediction as Bert-joint script.</li>
</ul>
<p>All cased models performed worse than their uncased versions. Maybe there is something wrong in my data preparation script. WWM BERT Large Uncased performed best in my experiments.</p>
<h1>Knowledge Distillation</h1>
<p>I believe knowledge distillation is the key part of my solution.
* Trained a combined BERT-large model by adding the bert-large and wwm-bert-large weights, like 0.8 * wwm-bert-large + 0.2 * bert-large. 1 step, 3e-5 lr.
* Froze the BERT layers and fine-tuned only the classifier weights. 2 step, 1e-5 lr.
* Treated the first model as a teacher model and did knowledge distillation to get a student model.
* Fine-tuned the student model with only the classifier weights. 3 step, 1e-5 lr.</p>
<h1>Validation</h1>
<p>The student model achieves 0.7117955439056357 on the dev set and 0.7 on both the public LB and private LB.</p>
<h1>Other works</h1>
<p>I failed to implement Adversarial Training using the Estimator framework in TF 1.0, but it is worth trying if you are using PyTorch.</p>
TensorFlow 2.0 Question Answering | 45th solution, my journey and learnings, feeling grateful :) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I woke up at 4am today, as most days in the last weeks of this competition. The last time I checked yesterday I was 54th on the public LB, and I really counted on a medal and progressing to Kaggle Expert with this competition. I did my daily quick workout and finally opened up Kaggle to see that I moved a few places up and stayed in the silver zone. I smiled, relaxed, and decided to write up my learnings immediately. </p>
<p>My solution is very simple:
1. Started with the bert-joint kernel by prvi (thank you @prokaj)
2. Trained a new bert-joint model, starting with bert-large-uncased (I believe the whole-word-masking gave it a 1-2 points boost over the available bert-joint checkpoint), finetuning 1 epoch on SQUAD, then finetuning 1 epoch (lr=3e-5) on the NQ dataset. I tried several other settings, with lr between 1e5 and 3e5, 1-3 epochs, but the initial setting worked best.
3. I did extensive validation on the NQ dev set, compared the model outputs with ground truth (using a modified version of NQ browser), and used those insights to set the post-processing thresholds. </p>
<p>Things I wished I had done - I spent quite some time trying to put another layer (bi-LSTM) over the output features of bert-joint, to learn the post-processing rules rather than setting them by hand. In the end, this turned out to take more time than I could afford between work and family, so I dropped the idea. I'm looking forward to seeing the winning solutions, to see if they implemented this and to learn from them.</p>
<p>Before I share my learnings, some context. I’ve been working in IT for 15 years, doing various roles across project and product management, operations and consulting, but no coding / data science work. I initially got interested in ML 2 years ago with Andrew Ng courses (signed up to Kaggle for the first time then), but haven’t really done anything practical until a few months ago when I discovered fast.ai. I did the part 1 of fast.ai Deep Learning course and came to Kaggle to practice the skills. Thank you @jhoward for the learnings and the motivation!</p>
<p>My learnings:
1. It’s ok to be overwhelmed. I said on many nights to my wife that this thing is too difficult for me… then I woke up in the morning, reviewed each line of code to understand the inputs/outputs, analyzed the errors, and came up with a solution.
2. A little time every day is better than nothing. I have a full time job, wife and 2.5 year old daughter… I started doing Kaggle in the evenings, once my girls went to sleep, but after trying to get my daughter to sleep for 1-2 hours I had no energy left for coding… then I switched to going to sleep early and waking up early, and with 1-2 hours per day I felt like I can learn and make progress.
3. The ML/Kaggle community is amazing! My go-to places for learning are the fast.ai forums, Kaggle discussion and ML Twitter. It’s amazing how open this community is, how much learning and sharing is going on. Thank you!!!</p>
<p>With this, I’d like to express my gratitude to Kaggle and Google for organizing this competition and providing the TPU credits. Thank you to the Kaggle community (especially @prokaj, @kashnitsky, @christofhenkel, @yihdarshieh) for sharing your code and insights; it’s amazing to be able to learn from so many talented people. And congratulations to the winners, medalists, and everyone who learned something during this competition!</p>
<p>Last thing - I’ve done only solo competitions so far, but I’m looking forward to find partners for future competitions. If you’d like to team up in the future, please connect with me at [email protected] :) </p> |
ASHRAE - Great Energy Predictor III | 15th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ASHRAE - Great Energy Predictor III
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Main difficulties in this competition:
1) Fewer submissions
2) Large amount of data (not memory friendly)
3) The data leak distorted the results of the competition.</p>
<p>Step 1: Self-examination
1) Dividing the training set
According to the time-series relationship between the test set and the training set in this competition (train set: 2016, test set: 2017 and 2018), I divided the training set in two ways:
train: first 6 months, val: last 6 months
train: first 8 months, val: last 4 months</p>
<p>Step 2: Fast data import and data size reduction
1) In this competition, participants often used the 'feather' format to read data, getting noticeably faster imports.
2) The effort to reduce memory exists throughout the code. The two main ideas are:
a) Allocate memory according to the data size requirements. Do not use 16 bits if you can use 8 bits.
b) Try to ensure that only the data required for the current calculation is retained in memory. Do not load unnecessary data into memory, and release memory immediately after the calculation is completed.</p>
<p>Step 3: Using leaked data
The leaked data is essentially equivalent to a part of the test set, so it can be used as such:
1) Direct training: increasing the amount of training data
2) Statistics: discovering uncontrollable situations such as power outages and abnormal weather (because it is time-series data)
3) Used as a test set to evaluate the current model
4) Used to determine some hyperparameters for model fusion
5) Submitted as part of the result</p>
<p>Technique Summary:
We used 19 features in this competition:</p>
<p>Categorical features: 'weekend', 'hour', 'meter', "site_id"</p>
<p>Numerical features: "building_id", "primary_use","square_feet", "year_built", "air_temperature", "cloud_coverage",
"dew_temperature", 'building_mean', 'site_mean', "wind_direction",
"wind_speed", "precip_depth_1_hr", 'building_square_mean', 'floor_count', 'site_hour'</p>
<p>building_mean: the number of rows in each group after grouping by building_id (a count, not a sum).</p>
<p>site_mean: concatenate site_id, meter and primary_use, then count the rows in each group after grouping.</p>
<p>wind_direction: convert the wind direction to its cosine, otherwise it will have an adverse effect on the result, because 0 and 360 coexist.</p>
<p>building_square_mean: after grouping by building_id, we take the log of each building's energy consumption (the raw number is too large), then divide it by square_feet to get an index ranking similar buildings' energy-consumption capability.</p>
<p>site_hour: concatenate site_id and hour; with this grouping we take the log of the summed energy consumption (the raw number is too large), giving an index ranking regional temperature's impact on energy consumption.</p>
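<p>One plausible pandas reading of the grouped features above (the exact aggregations are as described; the dataframe and an <code>hour</code> column are assumed to exist):</p>
<pre><code>import numpy as np
import pandas as pd

def add_group_features(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["building_mean"] = df.groupby("building_id")["meter_reading"].transform("count")
    df["site_mean"] = df.groupby(["site_id", "meter", "primary_use"])["meter_reading"].transform("count")
    df["building_square_mean"] = (
        np.log1p(df.groupby("building_id")["meter_reading"].transform("sum")) / df["square_feet"]
    )
    df["site_hour"] = np.log1p(df.groupby(["site_id", "hour"])["meter_reading"].transform("sum"))
    return df
</code></pre>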
<p>From all the given features, we removed sea_level_pressure. Based on experimental results, we applied np.log1p transformations to some features, mainly square_feet.</p>
<p>Model:lightGBM</p>
<p>Parameters:</p>
<pre><code>params = {
    'boosting_type': 'gbdt',
    'objective': 'regression',
    'metric': 'rmse',
    'learning_rate': 0.1,
    'num_leaves': 2**8,
    'feature_fraction': 0.9,
    'bagging_fraction': 0.9,
    'subsample_freq': 5,
    'reg_lambda': 2,
    'reg_alpha': 1,
    'seed': 0,
    'early_stopping_rounds': 20
}
</code></pre>
<p>K-fold: grouped by first / second half of the year. Explanation: the temperature distribution is similar and relatively balanced. Later, someone tested stratified k-fold and got good results.
The key to using k-fold is to ensure the similarity of the data: if the folds are too similar it tells you little, and if they are too different it does not work at all, so try to strike a balance.</p>
<p>Data:
The training set used all the leaked data.
1) Discard site 15 for its low data accuracy
2) Selectively discard some data from site 4 based on the calculated distance from 2016</p>
<p>Discard anomalies:
1) We delete all data of site 0 with meter = 0 before 2016-05-20 (from a data-analysis perspective, these are all noise)
2) We delete all cases where the meter reads 0, because we don't believe a building's meter would not move at all.
3) We delete the cases where the meter reading is huge for building_id = 1099. In the end, we still feel that there are a lot of noise points in the data: removing many zero-value points gives very good results <0.8, but overfitting becomes more serious because there are more zero-value points.</p>
<p>Note:
Due to the unit inconsistency problem at site 0, we converted units. Its effect is extremely limited (difficult to observe), but it does have a limited effect when using k-fold, so we kept it.</p>
2019 Data Science Bowl | 10th place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, thanks to the Kaggle team for an exciting competition, and congratulations to all the winning teams and the teams with good results.
I joined this competition solo, so it was hard, but a very interesting competition.</p>
<p>1 year ago, I experienced a very big shake-down (2nd => about 1500th) in the Microsoft Malware competition.
<a href="https://www.kaggle.com/c/microsoft-malware-prediction/discussion/83950">https://www.kaggle.com/c/microsoft-malware-prediction/discussion/83950</a></p>
<p>From this experience, I put effort into my validation strategy and public/private analysis.
As a result I got my first Gold Medal thanks to the shake-up.</p>
<p>Here is my solution. (I am sorry for my poor English.)</p>
<h2>Results</h2>
<p>10th(solo Gold) / 3523</p>
<h2>Model</h2>
<ul>
<li>LightGBM x 6 average
<ul><li>The CV seed and some hyperparameters are changed per model</li>
<li>After averaging the regression values, transform to the integer accuracy_group using thresholds.</li></ul></li>
</ul>
<h2>Validation</h2>
<ul>
<li>Stratified Group KFold, 10 folds</li>
<li>All validation scores are calculated by truncated validation.
<ul><li>randomly sample one assessment per installation_id (see the sketch just after this list)</li></ul></li>
<li>For each fold, I use 51 truncated validation sets.
<ul><li>1 set is used for early stopping</li>
<li>50 sets are used for the validation score by averaging QWK.</li></ul></li>
</ul>
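<p>A rough sketch of this truncation (column names follow the competition data; the exact implementation may differ):</p>
<pre><code>import pandas as pd

def truncate(val_df: pd.DataFrame, seed: int) -> pd.DataFrame:
    # Keep exactly one randomly chosen assessment per installation_id,
    # mimicking how the hidden test set is constructed.
    return (val_df.sample(frac=1.0, random_state=seed)
                  .groupby("installation_id", sort=False)
                  .head(1))
</code></pre>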
<h2>Public vs Private</h2>
<p>I think the public dataset does not have a good distribution for validation because there are only 1000 records.
I computed, 1000 times, the average of the truncated train OOF predictions over 1000 randomly sampled rows.
The histogram is as follows.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1305588%2F03a952505303120659eb4ec830ad0ab0%2Ftruncated_oof_prediction_average.png?generation=1579774962577581&alt=media" alt=""></p>
<p>From this histogram, it seems that the public dataset is a rare case.</p>
<p>Therefore I trusted CV (ignoring the LB) and used the CV-best thresholds (explained in detail next).</p>
<h2>QWK threshold</h2>
<p>Finally I used the constant thresholds [1.04, 1.76, 2.18].
These thresholds were calculated by averaging OptimizedRounder's thresholds over 500 truncated OOF validation sets.</p>
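<p>Applying fixed thresholds to the averaged regression output is then a one-liner (sketch):</p>
<pre><code>import numpy as np

def to_accuracy_group(preds, thresholds=(1.04, 1.76, 2.18)):
    # Map continuous regression predictions to the integer accuracy_group in {0, 1, 2, 3}.
    return np.digitize(preds, thresholds).astype(int)
</code></pre>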
<p>I tried many methods, but I believed the thresholds that maximize the score on local CV predictions are the most reliable.</p>
<p>Some public kernels decided the thresholds from the target distribution.
In my experiments, that method is better for the public LB than other methods, but I think it overfits the public LB because that distribution is not equal to the truncated target distribution and is not the best distribution for QWK.</p>
<h2>Feature</h2>
<p>I made 3000~5000 features overall, but I think there are no magic features.
(Finally I used about 300 features.)</p>
<p>Good features for me were as follows.</p>
<ul>
<li>Normalized Accuracy feature
<ul><li>I normalized accuracy features because the difficulty of assessments and games differs per title.</li>
<li>(Accuracy - Accuracy_mean_per_title) / Feature_std_per_title</li>
<li>accuracy features mean accuracy_group, n_true_attempts/all_attempts, correct/event_num and correct/(correct+false), etc...</li></ul></li>
<li>Feature per title
<ul><li>I made features per title because the levels within a game differ, which makes it difficult to find common columns in the event data.</li>
<li>Ex: target_distances length in Air Show</li>
<li>However it takes a lot of time, so I only made these for about 10 titles (games, assessments) and gave up on the rest...</li></ul></li>
<li>Relative feature
<ul><li>Ex: event_code: 4020_count / 4070_count, last_accuracy / all_accuracy_mean</li></ul></li>
</ul>
<h2>Feature Selection</h2>
<p>To evaluate the effect of features under truncated validation, I use LGB feature importance computed on truncated training data.
In each fold, I make 50 truncated datasets and change the dataset every 5 iterations by using LightGBM's init_model parameter.</p>
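<p>A minimal sketch of swapping the truncated dataset every few boosting iterations via <code>init_model</code> (parameters and dataset construction are placeholders):</p>
<pre><code>import lightgbm as lgb

def train_on_truncated_sets(truncated_sets, params, rounds_per_set=5):
    # truncated_sets: list of (X, y) pairs, each a different truncated sample.
    booster = None
    for X, y in truncated_sets:
        booster = lgb.train(params,
                            lgb.Dataset(X, label=y),
                            num_boost_round=rounds_per_set,
                            init_model=booster)  # continue from the previous trees
    return booster
</code></pre>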
<p>I use the top 300 features (the number is a gut feeling).</p>
<h2>Others</h2>
<ul>
<li>LightGBM parameter feature_fraction => 1.0
<ul><li>Changing feature_fraction (0.8 => 1.0) improved my CV by about 0.005</li>
<li>I think the model should use assessment_title in every tree because the title has a big effect on the target value, and the role of the other features changes with the title feature.
(This is a hypothesis; I don't know if it is correct)</li></ul></li>
<li>Model per game session
<ul><li>In addition to the Transformer model, I made a LightGBM model per game session (predicting the next assessment result).</li>
<li>This model is not used in the main model, but it is useful for quickly finding good features in the game event data.</li></ul></li>
<li>Use test dataset for training
<ul><li>I don't know if it made an improvement.</li></ul></li>
</ul>
<h2>Not Work</h2>
<ul>
<li>NN regressor (MLP)
<ul><li>Though the NN sometimes had a good score, it was not stable.</li>
<li>I had no time for tuning.</li></ul></li>
<li>NN EventCode Transformer
<ul><li>I regard one session as one sentence, and event codes as words.</li>
<li>Predict the next assessment per session, and use it as a feature</li>
<li>A little improvement, but it takes a long time, so I did not use it.</li></ul></li>
<li>Word2Vec Feature
<ul><li>Similar to the Transformer, I regard one session as one sentence.</li>
<li>No improvement.</li></ul></li>
<li>Predicting normalized accuracy group
<ul><li>No improvement.</li></ul></li>
<li>Training residual error per title
<ul><li>No improvement</li></ul></li>
</ul> |
2019 Data Science Bowl | 20th place solution 😂 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all winners!
Here is a brief summary of our solution.</p>
<h3>Feature Engineering</h3>
<ul>
<li>worked
<ul><li>is 1st assessment or not</li>
<li>Normalized counting feature: count for each event codes and ids / game session duration</li>
<li>Whether assessment is solved in order of the game design, or not</li>
<li>aggregations
<ul><li>durations for each type (mean, std, min)</li>
<li>the number of records for each type (mean, std, min)</li></ul></li></ul></li>
<li>not worked
<ul><li>last activity, last game statistics
<ul><li>corrects, incorrects, misses, rounds, levels..</li></ul></li>
<li>last type, title history sequence</li></ul></li>
</ul>
<h3>Models</h3>
<p>Using QWK for tuning models was too difficult, so we decided to evaluate only RMSE for model performance.
Grouped 5-fold is used as the validation method. We applied truncation to the validation set.
- 1st level
- lgbm: CV 1.0395 +/- 0.031
- objective rmse
- 3 random seed averaging
- xgb: CV 1.0457 +/- 0.028
- objective rmse
- 3 random seed averaging
- catboost: CV 1.0430 +/- 0.028
- objective rmse
- NN: CV 1.0423 +/- 0.029
- rmse + smooth l1 loss
- RNN-layer: GRU + Attention
- sequence of last 6 histories as input
- Dense-layer
- 3 random seed blending
- The NN model has almost the same performance as the boosting tree models, but a low correlation with them.
- 2nd level: PublicLB 0.538 PrivateLB 0.556
- ElasticNet: CV 1.0361 +/- 0.028
In addition, although we could not include it, LGBM with accuracy classification was the best model in our experiments.</p>
<h3>Thresholding</h3>
<p>The most time was spent on how to determine the thresholds. We prepared some ideas and experimented.
We sampled with replacement 10 times from each fold's OOF for each installation_id (we called this OTV).
Then, we applied the following thresholding methods with 5-fold CV on the OOF.
- 1. Match the validation set's label distribution to the OOF/OTV true label distribution (as many kernels did)
- 2. Apply OptimizedRounder to the OOF/OTV and get thresholds from them, then match the validation label distribution to the optimized OOF/OTV label distribution.
- 3. Apply OptimizedRounder to the OOF/OTV and get thresholds from them, then apply those thresholds directly to the validation set.
In our experiments, method No. 2 with OTV almost always got the top score, but sometimes No. 3 with OOF did.
So we chose these two thresholding methods as our final submissions.
The results of the above experiments were very close to the private LB.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F347731%2F71393d4b9490bd66d06d71e03ea90aa1%2Fimage.png?generation=1579958134153508&alt=media" alt="results"></p>
<h3>Train dataset augmentation</h3>
<p>The extension of the train set with the test set was used only for training each of the 5 folds; it was not used for determining the thresholds or as a validation set.</p>
ASHRAE - Great Energy Predictor III | i-th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ASHRAE - Great Energy Predictor III
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>How many <strong><em>i-th</em></strong> place solutions are written and waiting for <strong><em>i</em></strong> calculation? :)</p> |
2019 Data Science Bowl | Single Model - 1.5 Transformers - 31st place | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This was a very challenging and fun competition. I want to thank the sponsor, Kaggle, and of course all the amazing competitors!</p>
<p>I also want to give a shout out to the Kaggle/Google engineers. I've been incredibly impressed with the website/kernels. I am currently a full stack engineer at Amazon so I know how hard it is to pull that off, so thanks for making such an amazing product!</p>
<h2>Data Preparation</h2>
<p>I did some minor edits to event codes. I broke up 4020, 4010, 4025 into whether correct was true or false, so I ended up with 40201, 40200, 41001, 41000, 40251, 40250.</p>
<p>I transformed the data into "histories". Basically a history is all of the data leading up to the target assessment. I then processed these histories into a large numpy array.</p>
<p>Because some histories were enormous due to shared devices, I decided to take the last X game sessions per target assessment and also the last Y events per game session. This made sense to me as the recent data should be more important, and it is an easy way to deal with shared devices.</p>
<p>I found that x = 80 and y = 100 gave the best results, so I ended up with a sparse np array:
(# histories, 80, 100, # features)</p>
<p>I added "blank" categories to the title, event_code, and accuracy group embeddings. This informed the model that these did not exist. (History was shorter than 100 events or shorter than 80 game sessions.) I tried masking the input to the transformer but it killed the performance and the score decreased.</p>
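<p>A rough sketch of how such a fixed-size history tensor can be packed, with index 0 reserved as the "blank"/padding value (illustrative names, not the original preprocessing code):</p>
<pre><code>import numpy as np

MAX_SESSIONS, MAX_EVENTS = 80, 100

def pack_history(history, n_features):
    """history: list of game sessions (oldest first), each a list of per-event feature vectors."""
    out = np.zeros((MAX_SESSIONS, MAX_EVENTS, n_features), dtype=np.float32)
    for s, session in enumerate(history[-MAX_SESSIONS:]):      # keep only the last 80 sessions
        for e, event in enumerate(session[-MAX_EVENTS:]):      # keep only the last 100 events
            out[s, e] = event
    return out  # zeros act as the "blank" category / padding

# stacking all target assessments gives the (# histories, 80, 100, # features) array
</code></pre>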
<h2>Features</h2>
<p>You can see the features below in the model diagram. The Assess Target Title and Assess Target Time are fed into every event. I did try inputting these once at the end of the model but the performance got slightly worse. I also tried inserting game session features into the game session embedding but none of the features I tried helped.</p>
<p>Near the end of the competition I played around with adding in the OOF models' predictions and the models' prediction groups. This seemed to help a lot on Local CV but not as much on Test. I think perhaps I was doing something wrong with how I was then creating these values for the Test Assessments.</p>
<h2>Model</h2>
<p>My original idea was to use a double transformer network. One transformer for the events of a game session and then use those outputs to have one transformer take in each game session embedding. This did work but I discovered it was better (and much faster) with the first transformer being a "Zero Head", which means I just removed the attention part and left in the shared FC layers:</p>
<blockquote>
<p>events2 = self.linear2(self.dropout1(F.relu(self.linear1(events))))
events = events + self.dropout2(events2)
events = self.norm2(events)</p>
</blockquote>
<p>One key idea came from NLP where they tie the word embeddings in the input and in the output to improve generalization. I did this with the accuracy group and saw a nice bump in QWK.</p>
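<p>In PyTorch the tying itself is a one-liner; a minimal sketch with illustrative names:</p>
<pre><code>import torch.nn as nn

class TiedAccuracyHead(nn.Module):
    def __init__(self, hidden_size, n_groups=4):
        super().__init__()
        self.group_emb = nn.Embedding(n_groups + 1, hidden_size)   # +1 for a "blank" group
        self.out = nn.Linear(hidden_size, n_groups + 1, bias=False)
        self.out.weight = self.group_emb.weight                    # tie input and output weights

    def forward(self, hidden):
        return self.out(hidden)                                     # logits over accuracy groups
</code></pre>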
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1156376%2F5bc8e2e8b2d7374b2027ca4f8d8b2879%2Fdsb%20(1).png?generation=1579876333416350&alt=media" alt=""></p>
<h2>Noisy Labels</h2>
<p>This was one of the more interesting ideas that I tried. If you consider the fact that 3-5 year olds are incredibly noisy in general, you can view the labels as being fairly noisy. I read a bunch of papers on dealing with noisy labels and they all basically dealt with the model (or a second model) learning the noise. I decided to save the OOF predictions and then blend these with the actual targets when training new models, hoping that the single model would be able to learn about some of the noise patterns. Turned out this was very difficult to tune correctly as it seemed to leak into my 5 fold CV and also made the train loss hard to reason about. Basically it made it very easy to overfit to the train data.</p>
<p>I'm also not sure if the way I did the OOF predictions was the best. I would save it after every 5 fold run and then would just average the predictions from all of the past. I think now that this may have increased the confirmation bias and I would have better off with just getting some initial predictions from models that were not blended with new targets and sticking with those.</p>
<p>In the end it did boost my private test score from .550 to .554. My best blend was 85 epochs starting from all actual targets to 50/50 at epoch 50 and then increasing back in the actual targets. The blend was linear changing by 1% every epoch so old/new: 1/0 -> .5/.5 -> .85/.15</p>
<p>Here was the main paper that I got this idea from. It's on pseudo labeling but I think it applies just as well to dealing with noisy labels. I had mixup augmentation on my todo list as I think it would have greatly enhanced the Noisy label technique, but I never got to try it.</p>
<blockquote>
<p>Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning
Eric Arazo, Diego Ortego, Paul Albert, Noel E. O’Connor & Kevin McGuinness
<a href="https://arxiv.org/pdf/1908.02983.pdf">https://arxiv.org/pdf/1908.02983.pdf</a></p>
</blockquote>
<h2>Code</h2>
<p>I'm working on cleaning up the code and hope to make it public soon. Thanks for reading!</p> |
TensorFlow 2.0 Question Answering | 17th Place solution [bert-disjoint] kernel + all utility scripts | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all winners!</p>
<p>I've made available my solution (public 0.65 private 0.67).
<a href="https://www.kaggle.com/siriuself/tf-qa-wwm-verifier-forked">https://www.kaggle.com/siriuself/tf-qa-wwm-verifier-forked</a></p>
<p><a href="https://www.kaggle.com/siriuself/bert-disjoint-fn-builder">https://www.kaggle.com/siriuself/bert-disjoint-fn-builder</a>
<a href="https://www.kaggle.com/siriuself/bert-disjoint-modeling">https://www.kaggle.com/siriuself/bert-disjoint-modeling</a>
<a href="https://www.kaggle.com/siriuself/bert-disjoint-utils">https://www.kaggle.com/siriuself/bert-disjoint-utils</a>
<a href="https://www.kaggle.com/siriuself/albert-yes-no-fn-builder">https://www.kaggle.com/siriuself/albert-yes-no-fn-builder</a>
<a href="https://www.kaggle.com/siriuself/albert-yes-no-modeling">https://www.kaggle.com/siriuself/albert-yes-no-modeling</a>
<a href="https://www.kaggle.com/siriuself/albert-yes-no-utils">https://www.kaggle.com/siriuself/albert-yes-no-utils</a>
<a href="https://www.kaggle.com/siriuself/tokenization">https://www.kaggle.com/siriuself/tokenization</a>
<a href="https://www.kaggle.com/siriuself/create-submission">https://www.kaggle.com/siriuself/create-submission</a></p>
<p>My model is simple, BERT large whole-word-masking uncased, retrained using the start/end logit loss only without the answer type loss. Using provided nq train tf record with the following setting:
<strong>batch_size</strong>: 32
<strong>epoch</strong>: 2
<strong>alpha</strong>: 2e-5, but use ckpt-15000 (so stop at around 1 epoch)
Note that because of learning rate warmup/decay, this is different from training with 2e-5 for 1 epoch</p>
<p>Then totally disregard answer type classification (since I don't have it), and rely on threshold setting for long and short questions, tuned on the dev set with nq_eval. And yes I let go all the YES/NO questions.</p>
<p>I did try adding an ALBERT xxlarge yes/no verifier after the BERT stage, which seemed to improve for like 1 pt on dev, but apparently not on the LBs somehow. My kernel includes the ALBERT part too. There isn't much insight in the utility scripts, except the modifications I made in order to restore the checkpoint into some contrib layers forced to be re-written in keras (e.g. LayerNormalization). It was a nightmare..</p>
<p><strong>Some reflections/insights</strong></p>
<p>I started the competition way too late and was on the wrong ALBERT-taking-too-long direction for quite a while, ending up with little time for tuning. I still have a very strong feeling that the "joint" part of bert-joint might be of little use, since we already know that:
1) BERT-like structures are poor at passage ranking, and to make them better we need at least as many passages as in MS-MARCO
2) We only have like 1-3% YES/NO in our training data. Very unbalanced.
Based on my inspection and verification experiment this might well be the case, which means the answer type classifier might just be reduced to a question type classifier (a much easier task for BERT to see), which would be a bad indicator of what type of answer the passage contains. It might not be a good idea to include it in training in the first place, and it'd be a disaster if you over-rely on the answer type logits for post-processing.</p>
<p>Let me know if you have similar/opposite findings. It's just my feeling anyway, with a bit of confirmation of my score, obtained by doing nothing other than removing the answer type loss in training. This is almost higher than my ALBERT-xxlarge-joint model too.. I almost felt like doing more experiments and writing a paper on the classification power reduction, but guess it's too trivial.</p>
<p>Lastly, we are hiring intermediate/senior NLP engineer/researcher/scientist, with possibility of sponsorship to come to our Canadian headquarter in Waterloo, Ontario for strong candidates. Bilingualism in English and Mandarin is a plus.</p> |
TensorFlow 2.0 Question Answering | 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Framework and hardware</h1>
<p>Initially we set out to try both TF 1.15 and TF 2.0. Unfortunately TF 2.0 pipeline's scores are significantly lower (we probably didn't figure out how to correctly use the TF 2.0 API), so all our top submissions and what is described below were done in TF 1.15. </p>
<p>All the experiments were done in Google cloud TPUs. This is my first time seriously using TPUs and I have to say, it feels so good. Because they are so fast and Google is generous enough to give us 5 TPUs, the experiment cycle is dramatically reduced.</p>
<h1>Validation scheme and experiment setup</h1>
<p>We noticed that the evaluation metric is not very stable on smaller validation set, for example better models on dev00 (1600 examples) may not be better on dev01. So we rely solely on the whole dev set (7830 examples) for validation (i.e. selecting checkpoints, selecting models, tuning thresholds, tuning ensemble weights). Same thing goes for the public LB. It only has 346 examples, even less stable than dev00.</p>
<p>Most of our models are based on the official implementation of <a href="https://github.com/google-research/language/tree/master/language/question_answering/bert_joint">joint-bert</a>, which is surprisingly hard to beat. For each variation to joint-bert, we ran 5 training sessions on TPU simultaneously, with different batch size and learning rate. We save checkpoints every 500 or 1000 steps. Usually bs=64 and lr=4e-5 for 1 epoch gives the best scores.</p>
<h1>Variations to official joint-bert</h1>
<p>We tried the following variations to joint-bert. The ideas for many of them are drawn from this IBM paper: <a href="https://arxiv.org/pdf/1909.05286.pdf">https://arxiv.org/pdf/1909.05286.pdf</a>
1) and 2) are the most important. Adding them always help. They contribute to the best single model. 3) to 6) sometimes help, sometimes don't. They depend on each other. Nevertheless, training with these variations did produce many diverse models, which are good for ensembling. </p>
<h2>1) pre-trained weights</h2>
<p>Official joint-bert was trained from "BERT-Large, Uncased", but training from "BERT-Large, Uncased (Whole Word Masking)" will see a big boost.</p>
<p>We also tried fine-tuning joint-bert on Squad 2.0 before fine-tuning it on NQ.</p>
<h2>2) negative sampling</h2>
<p>Official joint-bert samples 2% negative examples in both answerable questions (i.e. the sliding windows that don't contain an answer) and unanswerable questions. As explained in the IBM paper, joint-bert tends to be overconfident for unanswerable questions, so 1% for answerable and 4% unanswerable seem to be better. We saw about 1 point increase in F1 doing negative sampling.</p>
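<p>Schematically the sampling rule is the following (rates as above; the function and field names are assumptions, not the actual preprocessing code):</p>
<pre><code>import random

def keep_window(window_has_answer, question_is_answerable,
                neg_rate_answerable=0.01, neg_rate_unanswerable=0.04):
    if window_has_answer:
        return True                                   # always keep windows containing the answer
    rate = neg_rate_answerable if question_is_answerable else neg_rate_unanswerable
    return random.random() < rate                     # subsample the "empty" windows
</code></pre>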
<h2>3) <code>max_seq_length</code>, <code>doc_stride</code></h2>
<p>Defaults are 512 and 128 respectively. This means that for answers not at the beginning of the article, they appear about 4 times after pre-processing (in training the example is seen 4 times per epoch; in inference it's predicted 4 times, then the one with max logits is selected), which seems like overkill.</p>
<p>So we changed doc_stride to 256 during inference, which doesn't affect score much but reduced inference time in half. </p>
<p>For training, we used default 128 as well 192 and 256. IBM paper claims 192 gives best results, but we didn't see much difference.</p>
<h2>4) max_contexts</h2>
<p>Default is 48. There are some very, very long Wikipedia articles, so joint-bert only takes the first 48 paragraphs/tables/lists of each article. We tried different values like 100 and 200. Using a bigger value is a tradeoff between more answer coverage vs. more "empty" windows.</p>
<h2>5) sentence order shuffling</h2>
<p>Also proposed in the IBM paper: shuffling all the sentences in the paragraph containing short answers. This is an augmentation method. </p>
<h2>6) cased</h2>
<p><code>do_lower_case=False</code>
For this, we generated a new vocab file by adding all the NQ special tokens into the cased BERT vocab.</p>
<h2>7) Attention-over-attention</h2>
<p>Mentioned in the IBM paper as the most important change, but it didn't work for us.</p>
<h1>Ensemble</h1>
<p>In 3 hours, we can do inference for 3 models with doc_stride=256. Luckily, n=3 happen to be the number of our best ensemble: adding a fourth model does not help anymore. The ensemble strategy is simply averaging the probability of each candidate span.</p>
<p>Our 2 submissions consist of the following 5 single models:
a. wwm, stride=256, dev 62.4
b. wwm, neg sampling, pre-tuned on squad, <strong>dev 64.7</strong> (long 69.5, short 57.8) - best single model
c. wwm, neg sampling, max_contexts=200, dev 64.5
d. wwm, neg sampling, stride=192, dev 63.8
e. wwm, neg sampling, cased, dev 63.3</p>
<p>sub1: ensemble of a,b,c, dev 66.8 (long 71.6, short 59.8), private LB 0.69
sub2: ensemble of c,d,e, <strong>dev 67.0</strong> (long 71.6, short 59.9), private LB 0.69
(note: these scores are after post-processing)</p>
<h1>Post-process</h1>
<h3>yes/no thresholds</h3>
<p>These are tuned on dev set as well. If the yes/no logits in the <code>answer_type_logit</code> are over the thresholds, predict "YES"/"NO" regardless of the short span predictions. This gives 0.5 boost to dev F1.</p>
<h3>max_contexts</h3>
<p>Increase max_contexts from the default 48 to 100 or 80 can squeeze out another 0.3 F1 points, taking advantage of the leftover inference time within the 3 hour limit. For sub1 we did 100; for sub2 we only did 80 because generating features for the cased model <code>e</code> took a little more time.</p>
<h1>Code</h1>
<p>repo: <a href="https://github.com/boliu61/tf2qa">https://github.com/boliu61/tf2qa</a>
inference notebook and model weights: <a href="https://www.kaggle.com/boliu0/7th-place-submission">https://www.kaggle.com/boliu0/7th-place-submission</a></p> |
2019 Data Science Bowl | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to Booz Allen Hamilton and Kaggle team for such an interesting competition. And congratulations to all the winning teams and all the Kagglers who have worked hard and learned a lot throughout this competition. </p>
<p>We ranked 38th in Public and 2nd in Private. These final results excited us and one of our teammates, <a href="/tiginkgo">@tiginkgo</a>, has become a new Kaggle master :)</p>
<h2>Results</h2>
<p>The best model we chose achieved 0.563 for Public and also 0.563 for Private. </p>
<h2>Feature Engineering</h2>
<p><strong>Word2Vec features of title series</strong>
- Considering the series of course titles up to the target assessment as a document, processed them with word2vec and calculated the stats (mean/std/max/min) of the obtained vector.</p>
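<p>A simplified sketch of this idea with gensim (here each installation_id's full title sequence is treated as one document for brevity; in practice we used the sequence up to each target assessment):</p>
<pre><code>import numpy as np
from gensim.models import Word2Vec

def fit_title_w2v(train_df, vec_size=32):
    # one "document" per installation_id: the ordered titles seen so far
    docs = (train_df.sort_values('timestamp')
                    .groupby('installation_id')['title']
                    .apply(list).tolist())
    return Word2Vec(docs, vector_size=vec_size, window=5, min_count=1, seed=42)

def title_series_stats(w2v, titles):
    vecs = np.array([w2v.wv[t] for t in titles if t in w2v.wv])
    if len(vecs) == 0:
        return np.zeros(4 * w2v.vector_size, dtype=np.float32)
    return np.concatenate([vecs.mean(0), vecs.std(0), vecs.max(0), vecs.min(0)])
</code></pre>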
<p><strong>Historical feature</strong>
- Count of (session, world, types, title, event_id, event_code) as historical data, grouped by (all, treetop, magma, crystal).
- Count, mean, max of (event_round, game_time, event_count).</p>
<p><strong>Decayed historical feature</strong>
- Historical data decayed for (title, type, world, event_id, event_code).
- Decrease accumulation by half for each session.</p>
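<p>A sketch of these decayed counters (illustrative, shown for title counts; the same pattern applies to type/world/event_id/event_code):</p>
<pre><code>from collections import defaultdict

decayed_title_count = defaultdict(float)

def update_decayed_counts(session_titles):
    for key in list(decayed_title_count):
        decayed_title_count[key] *= 0.5       # halve the accumulation at every new game_session
    for title in session_titles:
        decayed_title_count[title] += 1.0
</code></pre>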
<p><strong>Density of historical feature</strong>
- The density of historical data for (title, type, world, event_id, event_code).
- Density = (count) / (elapsed days from a first activated day).</p>
<p><strong>Lagged Assessment</strong>
- Lots of stats (mean/std/...) of num_correct, num_incorrect, accuracy, accuracy_group.
- The difference of hours from the past assessment.
- Per full assessments, and per title assessments.</p>
<p><strong>Meta Features</strong>
- In order to denote “How having a game_session in advance can lead to an assessment result”, we created “meta target features” for each assessment title. We used oof for train data and KFold averages for the other data such as records without test or meta target.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1846168%2Ff8052d17308d8f21c32d010382ae1150%2F2020-01-24%200.07.44.png?generation=1579796947915005&alt=media" alt=""></p>
<h2>Feature Selection</h2>
<ul>
<li>Delete duplicate columns.</li>
<li>Delete high-correlated columns (over 0.99).</li>
<li>Finally, fetch top 300 features scored by null importance.</li>
</ul>
<h2>Modeling</h2>
<ul>
<li>For the validation set, we resampled to ensure one sample per one user.</li>
<li>StratifiedGroupKFold, 5-fold.</li>
<li>RSA (5 random seed) of LGB, CB, and NN.</li>
</ul>
<h2>Post Processing</h2>
<ul>
<li>Ensemble = 0.5 * LGB + 0.2 * CB + 0.3 * NN.</li>
<li>Set the threshold to optimize cv qwk.</li>
</ul>
<h2>Special thanks</h2>
<p>The 7th place solution of Elo Merchant Category Recommendation Competition gave us great inspiration, especially for our word2vec and meta features, which were very important parts of our solution.</p>
<p>We are deeply grateful to <a href="/senkin13">@senkin13</a> and his excellent explanations are here:
<a href="https://www.kaggle.com/c/elo-merchant-category-recommendation/discussion/82055">https://www.kaggle.com/c/elo-merchant-category-recommendation/discussion/82055</a>
<a href="https://www.slideshare.net/JinZhan/kaggle-days-tokyo-jin-zhan-204409794">https://www.slideshare.net/JinZhan/kaggle-days-tokyo-jin-zhan-204409794</a> </p> |
TensorFlow 2.0 Question Answering | 31st solution with custom loss | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you Kaggle and Kaggle community for this awesome competition. I learned a lot.
I wasn’t able to do almost anything the last two weeks due to my personal reason, but it has been really fun.</p>
<h2>My model</h2>
<ul>
<li>Public 0.68 Public 0.65</li>
<li>Single PyTorch Bert model</li>
<li>fine-tune bert-large-uncased-whole-word-masking-finetuned-squad for 1 epoch.
<ul><li>2 epochs got better Private 0.68 Public 0.65 but I didn't choose it :(</li></ul></li>
<li>learning rate 3e-5 instead of 5e-5</li>
<li>Down sampled null instance training data.</li>
<li>Penalize training data with answer in stride in loss function.</li>
<li>Simply removed HTML tags</li>
<li>Parameters search using short/long score.</li>
</ul>
<h2>down sampling</h2>
<pre><code>flattened_examples = list(itertools.chain.from_iterable(examples))
null_instances = []
annotated_instances = []
for e in flattened_examples:
    if e.class_label == 'unknown':
        null_instances.append(e)
    else:
        annotated_instances.append(e)
len_null = len(null_instances)
# keep roughly 2% of the null (no-answer) instances
len_downsampled = int(len_null / 50) if len_null > 50 else 0
downsampled = random.sample(null_instances, len_downsampled)
logging.info(' down sampling nonnull(%d) null(%d) to null(%d)', len(annotated_instances), len_null, len(downsampled))
self.examples = downsampled + annotated_instances
</code></pre>
<h2>loss function</h2>
<pre><code>def loss_fn(preds, labels, no_answers):
    start_preds, end_preds, class_preds = preds
    start_labels, end_labels, class_labels = labels
    has_answers = [not x for x in no_answers]
    start_preds_no_answer = start_preds[no_answers]
    start_preds_has_answer = start_preds[has_answers]
    end_preds_no_answer = end_preds[no_answers]
    end_preds_has_answer = end_preds[has_answers]
    class_preds_no_answer = class_preds[no_answers]
    class_preds_has_answer = class_preds[has_answers]
    start_labels_no_answer = start_labels[no_answers]
    start_labels_has_answer = start_labels[has_answers]
    end_labels_no_answer = end_labels[no_answers]
    end_labels_has_answer = end_labels[has_answers]
    class_labels_no_answer = class_labels[no_answers]
    class_labels_has_answer = class_labels[has_answers]
    loss_no_answer = 0
    loss_has_answer = 0
    # has answer
    if len(start_preds_has_answer) > 0:
        start_loss = nn.CrossEntropyLoss(ignore_index=-1)(start_preds_has_answer, start_labels_has_answer)
        end_loss = nn.CrossEntropyLoss(ignore_index=-1)(end_preds_has_answer, end_labels_has_answer)
        class_loss = nn.CrossEntropyLoss()(class_preds_has_answer, class_labels_has_answer)
        loss_has_answer = start_loss + end_loss + class_loss
    if len(start_preds_no_answer) > 0:
        start_loss = nn.CrossEntropyLoss(ignore_index=-1)(start_preds_no_answer, start_labels_no_answer)
        end_loss = nn.CrossEntropyLoss(ignore_index=-1)(end_preds_no_answer, end_labels_no_answer)
        class_loss = nn.CrossEntropyLoss()(class_preds_no_answer, class_labels_no_answer)
        loss_no_answer = start_loss + end_loss + class_loss
    return loss_has_answer * 2 + loss_no_answer
</code></pre>
<h2>What I didn't try</h2>
<ul>
<li>p/table tag annotations</li>
<li>TPU</li>
<li>more post processing</li>
</ul> |
Understanding Clouds from Satellite Images | 240 place with simple model, no kfold, no combining networks | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi guys, </p>
<p>For this competition I created and tested single models without any kfolding or combining multiple network architectures.
The parameters which I tested were:</p>
<p><strong>Convolutional Network Architectures:</strong>
- Unet
- FPN</p>
<p><strong>Pretrained Networks:</strong>
- resnet18
- resnet34
- resnet50
- resnet101
- resnet152
- se_resnext50_32x4d
- se_resnext101_32x4d
- efficientnet-b0
- efficientnet-b1
- efficientnet-b2
- efficientnet-b7</p>
<p><strong>Batch sizes</strong>
- Starting from 1 to 9
- Also used accumulation_steps(steps=2 and 3)</p>
<p><strong>Preprocessing</strong>
- Resize to (640, 320) for segmentation input data. Then resized the masks to (525, 350)
- HorizontalFlip(p=0.25),
- VerticalFlip(p=0.25),
- ShiftScaleRotate(scale_limit=0.5, rotate_limit=0, shift_limit=0.1, p=0.5, border_mode=0),
- GridDistortion(p=0.25)</p>
<p><strong>Optimizers</strong>
- Adam
- RAdam
- SGD</p>
<p><strong>Losses</strong>
- BCEDiceLoss
- IoULoss
- FocalLossBinary
- Custom loss(BCEDiceLoss*0.4 + IoULoss*0.2 + FocalLossBinary*0.4)</p>
<p><strong>Post processing</strong>
- Finding the optimum threshold in the interval 0.3-1 with a 0.005 step for each category
- Finding minimum pixels value for considering the mask a non-Zero one (9000-25000) with a 1000 step for each category</p>
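<p>A sketch of that per-class search (the Dice helper and array shapes are assumptions, not my exact post-processing code):</p>
<pre><code>import numpy as np

def dice(pred, gt, eps=1e-7):
    inter = (pred * gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def search_postprocess(probs, gts):
    """probs, gts: arrays of shape (N, H, W) for a single cloud class."""
    best_score, best_thr, best_pix = 0.0, None, None
    for thr in np.arange(0.3, 1.0, 0.005):
        for min_pix in range(9000, 25001, 1000):
            scores = []
            for p, g in zip(probs, gts):
                mask = (p > thr).astype(np.uint8)
                if mask.sum() < min_pix:
                    mask[:] = 0                 # zero out masks that are too small
                scores.append(dice(mask, g))
            if np.mean(scores) > best_score:
                best_score, best_thr, best_pix = np.mean(scores), thr, min_pix
    return best_thr, best_pix, best_score
</code></pre>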
<p><strong>BEST MODEL FOUND</strong>
The best model obtained 0.65803 on the public leaderboard and <strong>0.65038 on the private leaderboard</strong></p>
<p>The configuration for this single model without any k-folds or combination with another arhitecture was:</p>
<p><strong>FPN</strong>+
<strong>se_resnext101_32x4d</strong>+
<strong>batch size 6(accumulate gradient=2)</strong>+
<strong>RAdam</strong>+
<strong>BCEDiceLoss</strong>+</p>
<p><strong>Preprocessing</strong>:
Resize to (640, 320) for segmentation input data. Then resized the masks to (525, 350)
HorizontalFlip(p=0.25)+
VerticalFlip(p=0.25)+
ShiftScaleRotate(scale_limit=0.5, rotate_limit=0, shift_limit=0.1, p=0.5, border_mode=0)+
GridDistortion(p=0.25)</p>
<p><strong>Post processing thresholds</strong>(cat1: thres=0.335, min_pixels=21000, cat2: thres=0.605, min_pixels=15000, cat3:thres=0.640, min_pixels=20000 , cat4: thres=0.565, min_pixels=16000)</p>
<p><strong>Other things that I wish I had tried but did not have time for:</strong> </p>
<ul>
<li>Instead of initially resizing to (640, 320) as input for the segmentation network and then resizing the resulting masks to (525, 350), I would have resized to (525, 350) first and then used padding to create a (640, 320) image. When submitting the mask, I would have removed the pixel offsets introduced by the padding, thereby avoiding the loss caused by resizing results from (640, 320) to (525, 350).</li>
<li>Insist with more custom weights on my custom loss (BCEDiceLoss*0.4 + IoULoss*0.2 + FocalLossBinary*0.4)</li>
<li>use AdamW (<a href="https://towardsdatascience.com/why-adamw-matters-736223f31b5d">https://towardsdatascience.com/why-adamw-matters-736223f31b5d</a>)</li>
<li>TTA</li>
<li>Mixed precision training (to see how much increased batch size will help compared with accumulate gradient methodology) and also evaluate fp16</li>
<li>Use and evaluate lovasz loss</li>
</ul> |
2019 Data Science Bowl | 3rd solution - single TRANSFORMER model, link to kernel | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'm sorry. It was a traditional Korean holiday period until today, so I didn't have time to write this. Thank you for your patience.</p>
<p>First of all, I would like to thank Booz Allen Hamilton for hosting this interesting competition. And congratulates to the all participants and especially the winners! </p>
<p>I like deep neural networks so I prefer to solve all the problems with a deep neural networks. 😃 </p>
<p>I focus on the structure of the input data rather than on understanding the data itself, and concentrate on building the model's input while losing as little information as possible, hoping that the model will do more than I expect. 😊 </p>
<p>In other words, I focus <code>less</code> on feature engineering and <code>more</code> on finding a neural net model architecture that fits the data.</p>
<h1>Interesting point</h1>
<ul>
<li>What's interesting is that using position-related information (especially position embedding) decreases the local CV score.
<ul><li>The performance of the BERT, ALBERT and GPT2 models was not good. (Because these models use position embedding)</li>
<li>So I used the TRANSFORMER model without position embedding.</li></ul></li>
</ul>
<h1>Pre-processing</h1>
<h3>Aggregation by game_session</h3>
<p>The event sequence of an installation_id is too long to be used as it is, so I aggregated the log data (train_df) by game_session. Please see the example below.
<code>
df = train_df
event_code = pd.crosstab(df['game_session'], df['event_code'])
event_id = pd.crosstab(df['game_session'], df['event_id'])
...
agged_df = pd.concat([event_code, event_id, game_accuracy, max_round], axis=1).reset_index()  # column-wise concat, keeping game_session as a column
session_df = df.drop_duplicates('game_session', keep='last').reset_index(drop=True)
session_df = session_df.merge(agged_df, how='left', on='game_session')
</code></p>
<p>The LSTM and TRANSFORMER models in NLP receive sequence of words (or sentence) as input. Similarly, I will use the sequence of game_sessions (or installation_id) as input here. </p>
<h1>Model</h1>
<p>Best private score: 0.564
Single transformer model used.</p>
<h3>TRANSFORMER MODEL BLOCK</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1658108%2F6a4d5317015ad5235b526b46d1327fee%2Fdsb20192.png?generation=1582589525333424&alt=media" alt=""></p>
<p>Prediction from game_sessions of an installation_id</p>
<h3>The key here is how to create embedding from the game_session.</h3>
<p><code>Categorical columns</code> (such as event_code, title, world, etc...) were embedded respectively. Then, the categorical_vector were obtained by concatenating the embeddings. Next the nn.linear layer is applied for the dimension reduction of the categorical vector.
<code>
self.categorical_proj = nn.Sequential(
nn.Linear(cfg.emb_size*num_categorical_columns, cfg.hidden_size//2),
nn.LayerNorm(cfg.hidden_size//2),
) <br>
</code></p>
<p><code>Continuous columns</code> were embedded directly using a linear layer.
<code>
self.continuous_emb = nn.Sequential( <br>
nn.Linear(num_continuous_columns, cfg.hidden_size//2),
nn.LayerNorm(cfg.hidden_size//2),
)
</code>
* I used np.log1p for normalization of continuous columns.</p>
<h3>hyper parameters</h3>
<ul>
<li>optimizer: AdamW</li>
<li>schedular: WarmupLinearSchedule</li>
<li>learning_rate: 1e-04</li>
<li>dropout: 0.2</li>
<li>number of layers : 2</li>
<li>embedding_size: 100</li>
<li>hidden_size: 500</li>
</ul>
<h1>Modified loss function</h1>
<p><a href="https://www.kaggle.com/c/data-science-bowl-2019/discussion/124836">https://www.kaggle.com/c/data-science-bowl-2019/discussion/124836</a>
As mentioned in this link, the 0 and 3 classes of the accuracy_group may be very close.
num_correct can be 0 or 1; if num_correct is 1, then the accuracy_group increases by 3 points.
On the other hand, the accuracy_group decreases by 1 point when num_incorrect is 1 and by 2 points when num_incorrect is 2 or more.</p>
<p>This could be expressed as</p>
<pre><code>num_incorrect[num_incorrect > 2] = 2  # Constrained not to exceed 2.
new_accuracy_group = 3 * num_correct - num_incorrect
</code></pre>
<p>Using the above equation, we can calculate the real values of 0 to 3 from num_correct, num_incorrect.
Therefore, the prediction of the model is set to [num_correct_pred, num_incorrect_pred]</p>
<pre><code>prediction = model(x)  # prediction = [num_correct_pred, num_incorrect_pred]
# target = [num_correct; num_incorrect]
</code></pre>
<p><strong>Then train the model with the modified_loss below.</strong></p>
<pre><code>modified_loss = mse_loss(prediction, target)
</code></pre>
<p>After the training is done, we can use the new_accuracy_group calculated from "num_correct_pred, num_incorrect_pred".
<code>
num_correct_pred, num_incorrect_pred = prediction
new_accuracy_group = 3 * num_correct_pred - num_incorrect_pred
</code></p>
<p>We can also use the original accuracy_group to slightly improve performance.</p>
<pre><code>prediction = model(x)  # prediction = [accuracy_group_pred, num_correct_pred, num_incorrect_pred]
# target = [accuracy_group; num_correct; num_incorrect]
</code></pre>
<p><strong>The final_accuracy_group is calculated as below.</strong>
<code>
new_accuracy_group = 3 * num_correct_pred - num_incorrect_pred
final_accuracy_group = (accuracy_group_pred + new_accuracy_group) / 2
</code></p>
<h1>Additional training data generation</h1>
<p>I generated an additional label for game_sessions, where the type is <strong>Game</strong>. From the "correct":true, “correct”:false of event_data, I was able to create num_correct and num_incorrect, and likewise I was able to create an accuracy_group.
The number of additional training samples generated is 41,194.</p>
<p><strong>Pre-training and fine-tuning steps</strong></p>
<ul>
<li>Pre-training step - up to 3 epoch, the model was trained with the original labels + additional labels.</li>
<li>Fine-tuning step - from 4 epoch, the model was trained with the original labels.</li>
</ul>
<h1>Data Augmentation</h1>
<ul>
<li>training time augmentation - For installation_id with more than 30 game_sessions, up to 50% were randomly removed in the old order.</li>
<li>test time augmentation - For installation_id with more than 30 game_sessions, up to 60% were randomly removed in the old order.</li>
</ul>
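<p>In code, this augmentation can be sketched as follows (an assumed implementation; sessions are ordered oldest to newest):</p>
<pre><code>import random

def drop_old_sessions(sessions, max_drop_ratio=0.5, min_len=30):
    if len(sessions) <= min_len:
        return sessions
    n_drop = random.randint(0, int(len(sessions) * max_drop_ratio))
    return sessions[n_drop:]      # drop up to max_drop_ratio of the oldest game_sessions
</code></pre>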
<h2>Link to kernel</h2>
<p><a href="https://www.kaggle.com/limerobot/dsb2019-v77-tr-dt-aug0-5-3tta?scriptVersionId=27448615">https://www.kaggle.com/limerobot/dsb2019-v77-tr-dt-aug0-5-3tta?scriptVersionId=27448615</a></p>
<p>It is a shame for me to share such unclean code. But I decided to share the kernel first and clean up the code later. Maybe in two weeks? ;)</p>
<ul>
<li>The training code is also released. I'm sorry it's still unclean code.
<a href="https://github.com/lime-robot/dsb2019">https://github.com/lime-robot/dsb2019</a></li>
</ul> |
ASHRAE - Great Energy Predictor III | [46th Pvt LB] 18th Public LB - Very simple solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ASHRAE - Great Energy Predictor III
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><ul>
<li>Kernel: <a href="https://www.kaggle.com/wittmannf/0-939-lb-public-blend-leak-valid-ridgecv/">https://www.kaggle.com/wittmannf/0-939-lb-public-blend-leak-valid-ridgecv/</a></li>
</ul>
<p>Most of my time was actually spent on models that turned out to not score as good as the public kernels (NNs and bldg based models). My 18th is thanks to some side experimentation when trying to maximize the tips that have been provided in public Kernels and discussion. </p>
<p>The solution consists of a blend of non-leaked public submissions (I enhanced some of them) using RidgeCV's coefficients when fitting against the leak validation data. For the enhanced models, I discovered that it was more effective to combine them using RidgeCV's coeffs from the leak validation data than just averaging them. However, I only had time to do that in the <code>simple-data-cleanup-3-models</code>. My public score was 0.939 with leak replacement and 1.04ish without leak replacement. </p>
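<p>A simplified sketch of that RidgeCV fit (variable names here are illustrative; see the kernel above for the real code):</p>
<pre><code>import numpy as np
from sklearn.linear_model import RidgeCV

def fit_blend_weights(sub_preds, leak_values):
    """sub_preds: list of 1-D arrays (one per public submission) restricted to leaked rows;
    leak_values: the corresponding leaked meter readings."""
    X = np.log1p(np.column_stack(sub_preds))      # fit in log1p space, as the metric is RMSLE
    y = np.log1p(leak_values)
    return RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X, y)

def blend(ridge, all_sub_preds):
    X_all = np.log1p(np.column_stack(all_sub_preds))
    return np.clip(np.expm1(ridge.predict(X_all)), 0, None)
</code></pre>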
<p>And here's something interesting: RidgeCV figured out negative coefficient for some submissions, although the sum of all of them was close to 1 (as expected):</p>
<pre><code>## Ridge Coefficients
Sum of coefficients: 0.9994466580815099
half-half-drop-rows-stratify-weekday has weight 0.14
simple-data-cleanup-3-models has weight 0.26
ashrae-kfold-lightgbm-without-leak-1-08 has weight -0.23
another-1-08-lb-no-leak has weight -0.46
ashrae-kfold-lightgbm-without-building-id has weight 0.19
ashrae-energy-prediction-using-stratified-kfold has weight 0.52
ashrae-lightgbm-without-leak has weight -0.15
ashrae-stratified-kfold-lightgbm has weight 0.23
ashrae-2-lightgbm-without-leak-data has weight 0.50
</code></pre>
<p>All credits go to the authors of the public kernels and <a href="/yamsam">@yamsam</a> 's idea of leak validation. My contribution was the idea of combining them using RidgeCV. </p>
<h3>UPDATE</h3>
<p>As mentioned in <a href="https://www.kaggle.com/c/ashrae-energy-prediction/discussion/125843">this discussion</a>, the following kernel would've ranked between 13th and 16th position of private leaderboard:
- <a href="https://www.kaggle.com/wittmannf/blender-leak-validation-inception?scriptVersionId=25306647">https://www.kaggle.com/wittmannf/blender-leak-validation-inception?scriptVersionId=25306647</a></p>
<p>The only difference is that I performed a second level in the blending process (<code>submission.csv</code> is the output from the first blend):
<code>
submission_paths = [
'/kaggle/input/ashrae-half-and-half-w-drop-rows-stratify/submission.csv', #1.106
'/kaggle/input/ashrae-simple-data-cleanup-lb-1-08-no-leaks/submission.csv',
'/kaggle/input/ashrae-kfold-lightgbm-without-leak-1-08/submission.csv',
'/kaggle/input/another-1-08-lb-no-leak/fe2_lgbm.csv',
'/kaggle/input/ashrae-kfold-lightgbm-without-building-id/submission.csv', #1.098
'/kaggle/input/ashrae-energy-prediction-using-stratified-kfold/fe2_lgbm.csv', #1.074
'/kaggle/input/ashrae-lightgbm-without-leak/submission.csv', #1.082
'/kaggle/input/ashrae-stratified-kfold-lightgbm/submission.csv',#1.075
'./submission.csv'#1.04
]
</code></p> |
TensorFlow 2.0 Question Answering | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the sponsors and kaggle for hosting such an interesting and challenging competition. Also a big thank you to <a href="/cpmpml">@cpmpml</a> for being my teammate.</p>
<h2>Brief Summary</h2>
<p>As our team name suggests, we did everything with pytorch. In summary, we used 3 roberta-large models which were ensembled by voting. In general, the input features of our models are very close to the bertjoint baseline. We used a learning rate of 1e-5, a batch size of 16 and a simple Adam optimizer with no schedule. All models were trained for 1 epoch.</p>
<p>Roberta 1:
- initialized with roberta-large weights
- stride 128
- prediction of span & 5 answer types (unknown, yes, no, short , long)</p>
<p>Roberta 2:
- initialized with roberta-large weights, then pretrained on Squad2.0
- stride 192
- prediction of span & 2 answer types (short , long)</p>
<p>Roberta 3:
- initialized with roberta-large weights, then pretrained on Squad2.0
- additional linear layer (768→768 + relu) before predicting start, respectively end token
- stride 192
- prediction of span & 2 answer types (short , long)</p>
<p>We optimized thresholds for each of the models and set predictions below the threshold to blank. Then we used majority voting to ensemble the 3 models. Besides some smaller tricks, we predicted the test set with a stride of 224 to fit the inference of 3 models into the kernel.</p>
<h2>Longer Summary</h2>
<h3>Validation scheme</h3>
<p>As always, I start with setting up a solid validation scheme, which ideally has a high correlation to the leaderboard. It turned out harder than anticipated, since the organisers did not share enough information on the intended metric and also implemented it incorrectly. This first phase was very frustrating and I spent quite some time reverse engineering their mistake in order to reconcile leaderboard scores. After I figured out the metric and shared it in the forum, the organizers changed the metric. Imagine my face in that moment… and believe it or not, it took me another 6 weeks to figure out the new one. In the end we used the dev set of the original NQ dataset as our validation set and had a very high LB correlation.</p>
<h3>Software</h3>
<p>I reused a lot of preprocessing scripts from bertjoint baseline shared by organisers and did all training with pytorch relying on huggingface for transformer weights and code + pytorch-lightning for writing training pipeline.</p>
<h3>Hardware</h3>
<p>I did all training on my home desktop pc (3 GTX1080Ti) and <a href="/cpmpml">@cpmpml</a> on his pc (2 GTX 1080Ti). Training one epoch took quite a while, hence we did not spent much time on hyper-parameter tuning. The training time for Roberta1 was 35h. Finetuning roberta-large on SQuAD2.0 took 30h and finetuning the resulting model to the data of this competition took about 24h when using a stride of 192.</p>
<h3>Architectures and pretrained models</h3>
<p>I fully agree with <a href="/boliu0">@boliu0</a> that it was frustratingly hard to beat the bertjoint baseline. I did a lot of experiments on different preprocessing as well as using different (in my opinion more suited) targets. But 99% of what I did was worse than the baseline. So in the end we kept the preprocessing and only adjusted the answer type targets slightly. I used distilbert for a lot of those experiments because, due to its size, it helps to iterate fast while giving a reasonable indication of whether an idea works or not. <br>
In general the bertjoint baseline suggests striding over the full answer with a windowing approach and concatenating those windows with the question in order to find whether the short answer is contained in the window. One major interesting question is how to aggregate the resulting predictions. That's where we spent some time because we saw a lot of room for improvement. So what we did is to map the start and end token predictions of each window back to the original answer and create an answer-length x answer-length heatmap. We then apply some restrictions, e.g. the short span length should be less than 30 tokens, and get the following result.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1424766%2F15f5fac1de79c9e7d6d67adb3b45b1de%2FScreenshot%20from%202019-12-21%2009-26-43.png?generation=1579778810842946&alt=media" alt=""></p>
<p>The argmax of this matrix then gives start and end token (here 957:973). Nice thing of this approach is that you can easily blend these matrices over different models. So after we figured that out we tried different model architectures, including all popular ones from the huggingface repo (albert, gpt2, bert, roberta, xlnet) as well as less popular ones like Spanbert. For us roberta-large worked best with some distance to the second best which was spanbert. Considering the time of preprocessing we thought that ensembling 2 or 3 versions of the same model type will be better than ensembling different model types as you need to do preprocessing only once. So we continued training slightly different versions of roberta-large, including pretraining it on SquAD2.0 first, while working on probably the most important part of this competition, namely thresholding of when to set a blank prediction.</p>
<h3>Thresholding:</h3>
<p>Thresholding when using f1 is challenging. It's super important for your overall score but at the same time has high variance, and might not transfer to the test set. We used different schemes and in the end a 4-way thresholding worked best. We built thresholds for long and short answer type as well as logits of start + end tokens. We determined the thresholds by simple 4d grid search, which was improved by <a href="/cpmpml">@cpmpml</a> using scipy.optimize.minimize. Instead of using the thresholds found by fitting on the dev set directly, we also experimented with using the corresponding quantiles. Our best submission uses that approach.</p>
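<p>As a toy illustration of fitting such thresholds with scipy (shown for a single threshold on synthetic scores; the real search was over the four thresholds described above and used the competition F1):</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def f1_at_threshold(th, scores, is_correct, has_answer):
    """Toy stand-in for the NQ F1: a prediction is kept only if its score clears the threshold."""
    keep = scores > th[0]
    tp = np.sum(keep & is_correct)
    fp = np.sum(keep & ~is_correct)
    fn = np.sum(~keep & has_answer)
    return 2 * tp / max(2 * tp + fp + fn, 1)

# synthetic dev-set arrays, purely for illustration
scores = np.random.rand(1000)
is_correct = np.random.rand(1000) > 0.5
has_answer = np.random.rand(1000) > 0.3

res = minimize(lambda th: -f1_at_threshold(th, scores, is_correct, has_answer),
               x0=np.array([0.5]), method='Nelder-Mead')
print(res.x)   # fitted threshold
</code></pre>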
<h3>Ensembling:</h3>
<p>We elaborated different ensemble methods and chose 2 different ones for our final sub:</p>
<ol>
<li>Apply postprocessing and threshold to model prediction and majority vote between the results</li>
<li>blend model predictions and apply thresholding</li>
</ol>
<p>While 2. preformed better on our val set, 1. performed better on public and private LB</p>
<h3>Wrapping things up and putting into kernel:</h3>
<p>We used several things to speed up the final kernel in order to fit the inference of 3 models in.
- use stride of 224 for test data
- convert model to fp16 for predictions
- use multiprocessing for preprocessing and postprocessing</p>
<p>Thanks for reading. </p> |
Peking University/Baidu - Autonomous Driving | 1st place solution (1/26 details updated) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Peking University/Baidu - Autonomous Driving
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to everyone and congratulations to all the top teams. I'm on vacation, so I will post the details after the Chinese New Year.</p>
<h1>In brief</h1>
<ul>
<li>data augmentation: h-flip, 3 axis rotate, color, noise, blur.</li>
<li><a href="https://github.com/see--/keras-centernet">keras hourglass centernet</a></li>
<li>perspective transform for efficiency</li>
<li>regression of yaw, cos(pitch), sin(pitch), rot_pi(roll), x, y, z, r</li>
<li>blend 6 results (2 types of head * 3 types of transform)</li>
<li>post process by fitting LB</li>
</ul>
<h1>Especially thanks to these kernels:</h1>
<ul>
<li><a href="https://www.kaggle.com/hocop1/centernet-baseline">CenterNet Baseline</a> <a href="/hocop1">@hocop1</a></li>
<li><a href="https://www.kaggle.com/ebouteillon/augmented-reality">Augmented Reality</a> <a href="/ebouteillon">@ebouteillon</a></li>
<li><a href="https://www.kaggle.com/its7171/metrics-evaluation-script">metrics evaluation script</a> <a href="/its7171">@its7171</a></li>
</ul>
<p>Happy Chinese New Year</p>
<h1>1/26 details updated</h1>
<h1>Network</h1>
<p><img src="https://i.imgur.com/uFNfCFB.jpg" alt="network"></p>
<p>My approach is based on <a href="https://github.com/see--/keras-centernet">keras hourglass centernet</a>.
Some notes:
- <strong><em>6 Dof</em></strong>: regression of yaw, cos(pitch), sin(pitch), rot_pi(roll), x, y, z, distance
- discard <strong><em>XY bias</em></strong> result finally.
- remove <strong><em>Car types</em></strong> and <strong><em>XY bias</em></strong> in my second model.</p>
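<p>The angle targets listed above can be decoded back roughly as follows (a sketch; my assumption here is that rot_pi shifts roll by π so that most values sit near zero):</p>
<pre><code>import numpy as np

def decode_angles(cos_pitch, sin_pitch, roll_rot):
    pitch = np.arctan2(sin_pitch, cos_pitch)            # recover pitch from its sin/cos pair
    roll = roll_rot - np.pi                              # undo the pi shift applied to roll
    roll = (roll + np.pi) % (2.0 * np.pi) - np.pi        # wrap back into [-pi, pi)
    return pitch, roll
</code></pre>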
<p><strong># Perspective transform</strong>
Two purposes:
- reduce the size gap between small (far) and large (near) cars.
- cover more outliers without extending the image.</p>
<p>I found the model didn't predict well on large cars when I increased the input size, so I made them smaller. The extra benefit is that more outliers are enclosed.</p>
<p><strong>Original image:</strong>
<img src="https://i.imgur.com/bNABgvS.jpg" alt="original">
<strong>Transformed:</strong>
<img src="https://i.imgur.com/Xe396jz.jpg" alt="transformed">
<strong>Coverage comparison:</strong> (dots denote the GT location)
<img src="https://i.imgur.com/JJYfmNn.jpg" alt="coverage">
<strong>Validation result with outlier:</strong> (red dot: GT, green: predict heat map)
<img src="https://i.imgur.com/s6UAHIO.png" alt="transformed"></p>
<p><strong># Coordinate reference</strong>
I think the same feature in different locations should get different results. So I join this layer to get better predictions, and apply random crop when training.</p>
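<p>A CoordConv-style sketch of what such a coordinate-reference layer can look like (illustrative; the exact normalization and placement may differ):</p>
<pre><code>import tensorflow as tf

def add_coord_channels(feat):
    """feat: (batch, H, W, C) feature map with static H and W."""
    h, w = feat.shape[1], feat.shape[2]
    ys = tf.linspace(-1.0, 1.0, h)
    xs = tf.linspace(-1.0, 1.0, w)
    yy, xx = tf.meshgrid(ys, xs, indexing='ij')
    grid = tf.stack([yy, xx], axis=-1)[tf.newaxis]               # (1, H, W, 2)
    grid = tf.tile(grid, [tf.shape(feat)[0], 1, 1, 1])
    return tf.concat([feat, grid], axis=-1)                      # append x/y reference channels
</code></pre>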
<h1>Data augmentation</h1>
<p>I use: h-flip, camera rotate, color, noise, blur.</p>
<p><strong># Camera rotation</strong>
This is the most important part of my approach, since I only have 4001 training images (5 bad, and 256 held out for validation). It is easy to overfit without rotation augmentation. </p>
<p><strong>The augmentation looks like:</strong> (center is original image)
<img src="https://i.imgur.com/kONFCcy.jpg" alt="rotation augmentation">
Please refer to <a href="https://www.kaggle.com/outrunner/rotation-augmentation">this kernel</a> for details.</p>
<h1>Training</h1>
<ul>
<li>Focal loss for heat map</li>
<li>Huber for regression</li>
<li>Adam optimizer</li>
<li>Manually adjust learning rate from 10^-3.5 to 10^-5.5</li>
<li>About 0.4M iterations</li>
<li>Train: full network -> part of -> head only -> full ...</li>
<li>Change input size (random corp) and batch size every iteration</li>
<li>One 2080Ti per training. (I have 2)</li>
<li>Total 6 models, 2 heads * 3 transforms (different parameter and input size)</li>
</ul>
<h1>Inference</h1>
<p>Please refer to <a href="https://www.kaggle.com/outrunner/autonomous-driving-1st-place-solution-inference">this kernel</a> for details.</p>
<p><strong># Test time augmentation</strong>
Flip and multiple transformations, weighted average the predictions.</p>
<p><strong># Blending</strong>
Transform predictions to one model's transformation, weighted average 6 results.</p>
<p><strong># Weighted average neighborhood</strong>
When decoding, not only use the local maximum point but also take into account
the prediction around it.</p>
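<p>In code, that decoding step can be sketched like this (window size and array layout are illustrative):</p>
<pre><code>import numpy as np

def decode_at_peak(heatmap, regs, cy, cx, k=1):
    """Average the regression maps in a (2k+1)^2 window around a peak, weighted by the heatmap."""
    ys = slice(max(cy - k, 0), cy + k + 1)
    xs = slice(max(cx - k, 0), cx + k + 1)
    w = heatmap[ys, xs]
    w = w / w.sum()
    return (regs[ys, xs] * w[..., None]).sum(axis=(0, 1))   # regs: (H, W, n_targets)
</code></pre>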
<h1>Metric probing and Post processing</h1>
<p>This is the first time I have joined a competition without knowing the evaluation metric. The probing was interesting, but there are some weird characteristics in the metric.</p>
<p><strong>Probing procedure:</strong>
<img src="https://i.imgur.com/ESMF5qS.jpg" alt="Probing process">
<strong># Image wise</strong>
Split test images to two sets A and B, then: <strong>score(A) + score(B) = score(A+B)</strong></p>
<p><strong># Confidence independent</strong>
So the metric is something like <strong>F1</strong> or <strong>TP/(TP+FN+FP)</strong></p>
<p><strong># Rotation</strong>
The metric treats θ and θ+2π differently, so the score is directly impacted by the roll prediction. Therefore, I trained a model to predict a global roll and got a score improvement.</p>
<p><strong># Translation</strong>
The weirdest thing is that when I shift X by a few pixels, the LB score changes significantly. So I guess the metric is:
<code>sh
(abs(x-xp)/abs(x) + abs(y-yp)/abs(y) + abs(z-zp)/abs(z))/3
</code>
<strong>And increase the threshold when abs(x) is small:</strong>
<img src="https://i.imgur.com/BmYJ9CO.jpg" alt="Confidence threshold"></p>
<p><strong>The overall post-processing:</strong></p>
<ul>
<li>[opt] replace x, y by X, Y, z, r (just like everybody do)</li>
<li>[roll] replace instance roll by global roll</li>
<li>[xs] shift X 2 pixels (don't know why)</li>
<li>[rx] drop some cars whose x are near by zero, and keep a least car number per image</li>
<li>[dz] drop duplicate cars</li>
</ul>
<p><strong># table of results:</strong>
<img src="https://i.imgur.com/WSMGpUH.gif" alt="table of results">
<em># parameters are the same as final submission, and some procedures are dependent</em></p> |
NFL Big Data Bowl | Private LB 37th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First off, congratulations to the winners and thanks to NFL & Kaggle for hosting the competition!</p>
<p>This was one of my first major Kaggle competitions and was quite surprised to end up in the top 50 so thought I’d share a few details of my solution (I waited until the very end for fear of a disastrous code break in the final weeks). I started 52nd on the public LB and ended up in 37th.</p>
<p>I focussed mainly on feature engineering with the spatial variables and ended up passing these into a fully connected 128 (dense) x 64 (dense) x 199 (softmax) neural network.</p>
<p>One of my regrets was spending so much time on an initial model which had fundamental issues with the features and expecting small tweaks to yield a breakthrough.</p>
<p>An entire rethink of my feature engineering yielded a breakthrough when I finally got a grip on the spatial and temporal features.</p>
<p>The first portion of my feature vector as input was fairly standard rusher specific info which you would expect, but the majority of the vector was filled with spatial counts (both boxed and radial) of defenders relative to the rusher. This was done using box and radial counts at T = 0, T = 0.5, T = 1.0, using the speed and direction field to ‘move’ the play forward in time. </p>
<p>Pictures always make more sense:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3982690%2Fa639de3527be947752f674c1245ec9d0%2Frusher_pic.png?generation=1578518825464910&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3982690%2Fe1304d0eebd13d4cf37b27b2c6117140%2Frusher_pic_radius.PNG?generation=1578519155871188&alt=media" alt=""></p>
<p>Adding a few aggregate stats (min, mean) on the distance of defenders relative to rusher at each time gave me around 80 features in the final model.</p>
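<p>A rough sketch of those radial counts at the propagated times (the radii and the velocity handling here are illustrative, not my exact feature code):</p>
<pre><code>import numpy as np

def radial_features(rusher_xy, rusher_v, def_xy, def_v,
                    radii=(1, 3, 5), times=(0.0, 0.5, 1.0)):
    feats = []
    for t in times:
        r_pos = rusher_xy + rusher_v * t                 # move the play forward in time
        d_pos = def_xy + def_v * t
        dist = np.linalg.norm(d_pos - r_pos, axis=1)     # defender distances to the rusher
        feats += [np.sum(dist <= r) for r in radii]      # radial counts
        feats += [dist.min(), dist.mean()]               # aggregate stats
    return np.array(feats, dtype=np.float32)
</code></pre>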
<p>I had wanted to incorporate the offensive players, but simple counts didn’t yield me any significant improvement. More nuanced treatment was needed, given their role as blockers.</p>
<p>Beyond this, I spent most of the time making my code as robust as possible (rather than trying any particular tweaks – data augmentation, post processing etc. ) which seemed to turn out ok!</p> |
ASHRAE - Great Energy Predictor III | Sharing what helped | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ASHRAE - Great Energy Predictor III
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It was unfortunate that this competition was plagued by leaks, but it was pretty much a straightforward energy forecasting problem. I learned a bit about time-series CV, which was my reason for joining. With that said, here are things that I've added to my approach. In retrospect, I could have cleaned the data better for the non-electricity meters, but live and learn.</p>
<p><em>weather</em>
Weather is known to be one of the strongest drivers of energy consumption. With that said, typical approaches have been to lag and/or shift weather. Recent literature on energy forecasting has explored exponentially weighted moving averages; this helped my model tremendously. </p>
<p><em>trend</em>
Adding a trend feature in forecasting tends to help more often than not. In this case, I added one that mimicked the winning solutions from GEFCOM. Had there been more than one year of data, I would have included an intra-year trend as well.</p>
<p><em>2-phase CV</em>
I replicated CPMP’s 2-phase approach for training and predicting.</p>
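<p>Since the 2-phase scheme isn't spelled out here, the following is only my hedged reading of it (not necessarily CPMP's exact recipe): phase 1 holds out the later months of 2016 to pick the number of boosting rounds, and phase 2 refits on the full year with that setting. The column names, <code>params</code>, <code>features</code> and the split date are placeholders.</p>
```
import lightgbm as lgb

# Phase 1: validate on a later slice of 2016 to tune num_boost_round
tr = train[train['timestamp'] < '2016-07-01']
va = train[train['timestamp'] >= '2016-07-01']
phase1 = lgb.train(params,
                   lgb.Dataset(tr[features], tr['target']),
                   num_boost_round=5000,
                   valid_sets=[lgb.Dataset(va[features], va['target'])],
                   early_stopping_rounds=100)

# Phase 2: refit on all of 2016 with the tuned number of rounds, then forecast 2017-2018
phase2 = lgb.train(params,
                   lgb.Dataset(train[features], train['target']),
                   num_boost_round=phase1.best_iteration)
preds = phase2.predict(test[features])
```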
<p>Some things that I wished I tried:
- partially-escaping seasonality by developing models at different subsets of time; in my work, I develop models by hour. This helps to escape auto-correlation issues.
- data-clean: it was obvious the distributions for non-electricity were skewed due to 0 readings. I saw on the last night of competition that certain meters were 0 imputed while the corresponding day/time were missing in the electricity profile.
- hedge the leaks better. I was torn about leveraging leaks because it took away the task of forecasting energy in a real-world setting; imagine trying to explain this method to a public utilities commission. My intent was to learn something to bring back to my job. </p>
<p>With that said, I was still happy with my approach in that most of the methods I used would have passed the PUC sniff test, and it placed quite high on the leaderboard. Given the close spread of the RMSE towards the top, it was good to know that I didn't have to rely much on gimmicky methods to obtain a decent forecast. In this case, domain knowledge came in handy.</p>
<p>I also want to give a huge shout-out to <a href="/cpmpml">@cpmpml</a>. I learned a lot from him over the years in regards to forecasting, and my successes in my job stemmed from him. Also, <a href="/mmotoki">@mmotoki</a> for pushing me to do better. I never appreciated what it meant to have a rival, but my successes on Kaggle wouldn't have happened if it weren't for him. He deserves number 1 given his crazy work ethic; thanks for teaching me to push myself.</p>
<p>UPDATE (for DietHard): nothing special:</p>
```
# holidays
from datetime import datetime, date, timedelta
from pandas.tseries.holiday import Holiday, AbstractHolidayCalendar, nearest_workday, MO, TU, WE, TH, FR
from pandas.tseries.holiday import USMartinLutherKingJr, USPresidentsDay, GoodFriday, USMemorialDay, USLaborDay, USThanksgivingDay
from pandas import DateOffset, Series, Timestamp, date_range

def daterange(date1, date2):
    for n in range(int((date2 - date1).days) + 1):
        yield date1 + timedelta(n)

class Holiday(AbstractHolidayCalendar):
    rules = [
        Holiday("New Year's Day -1", month=12, day=31),
        Holiday("New Year's Day", month=1, day=1),
        # USMartinLutherKingJr,
        # USPresidentsDay,
        # USMemorialDay,
        Holiday('Independence Day', month=7, day=4, observance=nearest_workday),
        USLaborDay,
        # Holiday("Veteran's Day", month=11, day=11),
        USThanksgivingDay,
        Holiday("Christmas Eve", month=12, day=24),
        Holiday('Christmas', month=12, day=25, observance=nearest_workday),
        Holiday("Boxing Day", month=12, day=26),
    ]

def create_holidays(year_start, year_end):
    cal = Holiday()
    holidays = []
    for i, year in enumerate(range(year_start, year_end + 1)):
        # default holidays
        holiday_dates = cal.holidays(start=date(year, 1, 1), end=date(year, 12, 31))
        for holiday in holiday_dates:
            holidays.append(holiday.date())
    return holidays

holidays = create_holidays(2016, 2018)
df['is_holiday'] = df['timestamp'].isin(holidays) * 1

# exponential weighted moving average
ewma = pd.Series.ewm
weather['air_temperature_ewma_day'] = ewma(weather['air_temperature'], span=24, adjust=True).mean()
weather['air_temperature_ewma_week'] = ewma(weather['air_temperature'], span=24*7, adjust=True).mean()
weather['air_temperature_ewma_month'] = ewma(weather['air_temperature'], span=24*30, adjust=True).mean()

# time trend
time_trend = pd.DataFrame({'dt': pd.date_range(config['timeframe']['start'], config['timeframe']['end'], freq='H')})
time_trend['time_trend'] = range(0, len(time_trend))
time_trend_dict = create_dictionary(time_trend['dt'], time_trend['time_trend'])
df['time_trend'] = df['timestamp'].map(time_trend_dict)
df['time_trend'] = df['time_trend'].astype(np.float16)
```
ASHRAE - Great Energy Predictor III | 20 Private LB rank solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ASHRAE - Great Energy Predictor III
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It was a long run, and now that it is finally over we can share some of the tricks and ideas that we used. According to this official <a href="https://www.kaggle.com/c/ashrae-energy-prediction/discussion/123462">discussion</a> and this <a href="https://www.kaggle.com/robikscube/ashrae-leaderboard-and-shake">Kernel</a>, [ods.ai] PowerRangers took 20th place on the Private LB. So let's start :)</p>
<p>Sorry for all the misspellings and the not-so-great code style in the Kernels and here too (LGBT -> LGBM, Bland -> Blend, ...). We did not have enough time to write a really good code base :( </p>
<h1>Our team had 2 main obstacles:</h1>
<ul>
<li>Time (the exams were coming)</li>
<li>Computing resources</li>
</ul>
<p>As for computing resources - we carried out all our experiments and development in Kaggle Kernels, so we had to deal with RAM and execution-time limitations.</p>
<h1>Our main scheme:</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1690820%2F17ea6a8a8d308c1f47886461830cc5e1%2Fwin_scheme.dio(1).png?generation=1577617664090195&alt=media" alt="Our main scheme"></p>
<h1>First of all, we created some Baseline models:</h1>
<ol>
<li>We started with statistics - a simple mean or median over each <code>meter</code>, then the same statistics over several categorical features ( <code>meter</code>, <code>day_of_week</code>, <code>building_id</code>, <code>month</code> ). You can find the <code>NaiveMeanModel</code> <a href="https://www.kaggle.com/vladimirsydor/naivemeanpredictor">here</a>. Of course, these performed really poorly (1.39-1.4 on the public LB), but we used their outputs as features for more sophisticated models.</li>
<li>Then we tried <a href="https://www.kaggle.com/vladimirsydor/randomforestbaseline">RandomForestRegressor</a>, but it was pretty slow and did not perform very well.</li>
<li>Finally we realized that <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1690820%2F8a17e2376bd2e09f3d756e37077c1aae%2F2x3fpr.jpg?generation=1577608619775546&alt=media" alt=""></li>
</ol>
<h2>So we made our first experiments with LGBM and moved into the silver zone</h2>
<h1>And now it was time for PREPROCESSING!</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1690820%2Fe958aa1905d44ec6a5dd083a2b4e95e5%2FData%20Science%20Tom%20and%20Jerry.jpg?generation=1577608845429010&alt=media" alt="Data Science Tom and Jerry"></p>
<h2>Preprocessing:</h2>
<ul>
<li>First of all - detecting outliers in nearly the same way as in this <a href="https://www.kaggle.com/c/ashrae-energy-prediction/discussion/122471">discussion</a>, but less sophisticated.</li>
<li>Weather data preprocessing - interpolation of NaNs and creating <code>is_nan</code> features for these columns </li>
<li>Adding features: <code>day</code>, <code>day_of_week</code>, <code>month</code>, rolling features ( <code>{feature}_mean_lag{window}</code> ), max/min features by categorical features ( <code>air_temperature_max</code>)</li>
<li>Finally, we tried to add leak data from 2017-2018, but it was not a good idea for us. However, adding leak data from 2016 gave us some more data for training</li>
</ul>
<h2>You can have more detailed look at our Data Preprocessing and Feature engineering:</h2>
<ul>
<li><a href="https://www.kaggle.com/vladimirsydor/baseline-preprocessing">Preprocessing</a></li>
<li><a href="https://www.kaggle.com/vladimirsydor/baseline-preprocessing-leaks-train-fe">Preprocessing + Feature Engineering + Leaks from 2016</a></li>
<li><a href="https://www.kaggle.com/vladimirsydor/baseline-preprocessing-leaks">Preprocessing + Leaks from 2016-2018 )</a> . Here we created several datasets, taking leaks from different sites in order to train uncorrelated models.</li>
</ul>
<p>Also most ideas ( and code :) ) were taken from these kernels:
- <a href="https://www.kaggle.com/purist1024/ashrae-simple-data-cleanup-lb-1-08-no-leaks">first</a>
- The second one was deleted</p>
<h1>Now was time for modelling!</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1690820%2F0a78a2dcebaf5a9d74f185f741a6db17%2F1x7P7gqjo8k2_bj2rTQWAfg.jpeg?generation=1577610244398959&alt=media" alt=""></p>
<h1>Models:</h1>
<h2>1. Of course, LGBM:</h2>
<ul>
<li>We are not really professionals in LGBM hyperparameter optimization, plus training took a really long time, so we did not spend a lot of time on it. What we did find out is that the model was not overfitting much, and increasing the number of leaves mostly helped it (we tried 145 and 82)</li>
<li>Even with a high LR, our boosting had not converged even after 7k iterations. However, leak scores and public LB scores did not really improve for the model trained for 7k iterations compared to 5k, while there was a big difference compared to the model trained for 3k. We could not try more iterations because of the Kernel time limit :(</li>
<li>Also, we used 5 kernels for one boosting model in order to train it in CV style and then blend the results.</li>
<li>One more interesting fact: all DataFrames passed to the LGBM model are converted to <code>float64</code>, and if you have really BIG DATA you will RUN OUT OF MEMORY, so you need to convert them into <code>np.array</code> first. A small tip, but it helped us a lot :)
You can take a look at our Boost models here:</li>
<li><a href="https://www.kaggle.com/vladimirsydor/lgbt-on-pp-leaks-train-fe-fold-1">with leaked data from train(2016)</a></li>
<li><a href="https://www.kaggle.com/vladimirsydor/lgbt-on-leaks-fold-1">with leaked data from train(2016) and test(2017-2018)</a></li>
<li><a href="https://www.kaggle.com/vladimirsydor/lgbt-fold-1">without leaked data</a></li>
</ul>
<h2>2. Neural Net ( going out from forest!!! )</h2>
<p><a href="https://www.kaggle.com/vladimirsydor/nn-on-pp-leaks-train-fe-fold-1">Kernel</a>
Inspired by <a href="https://www.kaggle.com/abazdyrev">abazdyrev</a> and his <a href="https://www.kaggle.com/abazdyrev/energy-consumption-keras-approach">Kernel</a>
- First of all, the usual preprocessing for an NN: scaling and label encoding for embeddings. You can find it in <code>PreprocessingUnit</code>
- We chose embeddings for the categorical features because we have a lot of data, so they could actually be trained (I hope)
- Also, we tried several optimizers (Nadam, Adam, Adamax) and several final activations (no activation and softplus). The best results were with softplus and Adamax. We did not try submitting the NN predictions alone, but on leak validation the NN even outperformed LGBM (WOW!!!).</p>
<h1>And finally we are ready for the most interesting part - BLEEEEEEEEEND</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1690820%2F93bdaffad2efe430a77c96dec638e842%2Findex.jpeg?generation=1577612207103871&alt=media" alt=""></p>
<p>We were inspired by this <a href="https://www.kaggle.com/khoongweihao/ashrae-leak-validation-bruteforce-heuristic-search">kernel</a> , but it was too simple for us :)</p>
<p><a href="https://www.kaggle.com/vladimirsydor/bland-by-leak">Kernel</a>
1. Firstly, we gathered a lot of submissions (ours and some public ones); in the end we had 32 .csv files
2. Some public kernels and some of our submissions were created with models trained on leak data, so we could not use them for leak validation and they were excluded. We also excluded some bad submissions. You can find them in <code>EXCLUDE_LIST</code>
3. Then we had to choose several submissions for BLENDING (a rough sketch of the idea follows this list):
- Firstly, we tried Hyperopt on submission indexes for median blending. We took 10 submissions and penalized Hyperopt for picking the same files repeatedly
- Secondly, we created some kind of Genetic Algorithm for the same purpose, mostly adapted from our practical work at university.
- Thirdly, we tried to stack submissions with a one-layer Perceptron in CV style, but it overfit heavily, so we did not pursue it further.</p>
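<p>For illustration, here is a greedy stand-in for the Hyperopt / genetic index search described above: it scores median blends of candidate submissions against the leaked rows. The with-replacement picks and the RMSE-on-log1p scoring are my assumptions, not the team's exact setup.</p>
```
import numpy as np

def leak_rmse(pred, leak):
    # both arrays are assumed to hold log1p meter readings on rows covered by the leak
    return np.sqrt(np.mean((pred - leak) ** 2))

def greedy_median_blend(subs, leak, n_picks=10):
    """subs: (n_submissions, n_leak_rows) matrix; returns indexes of the picked submissions."""
    chosen = []
    for _ in range(n_picks):
        best_i, best_score = None, np.inf
        for i in range(len(subs)):
            score = leak_rmse(np.median(subs[chosen + [i]], axis=0), leak)
            if score < best_score:
                best_i, best_score = i, score
        chosen.append(best_i)
        print(f'picked submission {best_i}, leak RMSE {best_score:.4f}')
    return chosen
```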
<h1>Finally all that Leaked data was added to our final submission.</h1>
<p><a href="https://www.kaggle.com/vladimirsydor/add-leak">Kernel</a> </p>
<h1>Also add some less valuable Kernels to build the complete scheme:</h1>
<ul>
<li><a href="https://www.kaggle.com/vladimirsydor/bland-nn-on-pp-leaks-train-fe">Blending NN trained on different Folds</a></li>
<li><a href="https://www.kaggle.com/vladimirsydor/bland-lgbt-on-pp-leaks-train-fe">Blend LGBM (PP + Leaks Train + FE) on different folds </a></li>
<li><a href="https://www.kaggle.com/vladimirsydor/bland-lgbt-folds">Blend LGBM on different folds </a></li>
<li><a href="https://www.kaggle.com/vladimirsydor/bland-lgbt-on-leaks">Blend LGBM (trained on train + leaked data) on different folds </a></li>
<li><a href="https://www.kaggle.com/vladimirsydor/leakaggregator">Leak Aggregator </a></li>
</ul>
<h1>Great thanks to my team members:</h1>
<p><a href="https://www.kaggle.com/zekamrozek">Evgeniy</a> and <a href="https://www.kaggle.com/vladyelisieiev">Vladislav</a>. They made a real good job!!!</p>
<h2>Good Kaggling and happy Holidays !!!</h2> |
Google QUEST Q&A Labeling | 18th place solution (hypotheses on the datasets) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Google and Kaggle for hosting this exciting competition. Also, congratulations to all teams! </p>
<p>I'm really happy to see many great solutions in discussion threads right after the end of the competition. It made me want to share my own solution and insights with others. </p>
<h2>Hypotheses</h2>
<p>Before describing modeling and post-processing, let me start by explaining about my hypotheses on this competition's datasets. </p>
<h3>Meaning of Target Values</h3>
<p>We see that each label has a strange set of discrete target values (such as {3/9, 4/9, 1/2, 5/9, 6/9, 7/9, 5/6, 8/9, 9/9}). Although this has already been discussed in some threads like <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/120772">https://www.kaggle.com/c/google-quest-challenge/discussion/120772</a>, I have a different hypothesis on how these target values are generated. My hypothesis mainly consists of the following points:
1. Each target value is obtained from three raters.
2. Each rater chooses one of multiple answer values when annotating a label. Each label has its own set of answer values. For example, question_well_written has {1, 2, 3}, question_type_choice has {0, 1}, and answer_satisfaction has {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}. I assume answer values 1, 2, and 3 for question_well_written correspond to "No", "Maybe", and "Yes", respectively. <br>
3. Each target value corresponds to (the sum of the three annotators' answer values) / (the maximum possible sum of answer values). For example, if three annotators give answer values 2, 2, and 3, respectively, for question_well_written, its target value becomes (2 + 2 + 3) / (3 + 3 + 3) = 7 / 9.
4. With 1., 2., and 3. we can obtain the target values {3/9, 4/9, 5/9, 6/9, 7/9, 8/9, 9/9} for question_well_written. However, this doesn't cover the actual values 1/2 and 5/6. I guess these values come from cases where there were only two raters for some targets. </p>
<p>Although 4. makes me less confident on my hypothesis, I still believe having three raters for each target is more realistic in comparison with having more than 10 raters for each target. </p>
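<p>A small sketch of this hypothesis, assuming the answer-value sets above: enumerating sums over raters reproduces the observed fractions, and the two leftover values fall out of the 2-rater case.</p>
```
from itertools import combinations_with_replacement

def achievable_targets(answer_values, n_raters=3):
    """Targets reachable as sum(answers) / (max answer * n_raters)."""
    max_sum = max(answer_values) * n_raters
    sums = {sum(c) for c in combinations_with_replacement(answer_values, n_raters)}
    return sorted(s / max_sum for s in sums)

print(achievable_targets([1, 2, 3], 3))  # 3/9 ... 9/9, matching most observed values
print(achievable_targets([1, 2, 3], 2))  # includes 3/6 = 1/2 and 5/6, the leftover values
```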
<h3>Train/Test split</h3>
<p>We can obtain the following information:
- There are multiple question-answer pairs sharing common questions in the training dataset.
- There are no such pairs in the test dataset (both public and private).
- The training and test datasets don't share any common question. </p>
<p>This list makes me assume the following dataset split:
- Training: (questions with frequency = 1) + (questions with frequency > 1).
- Test (public): questions with frequency = 1
- Test (private): questions with frequency = 1.
- These three datasets don't share common questions. </p>
<p>I started using only questions with frequency = 1 for validation based on this assumption. It changed my single model's CV score from 0.420 to 0.382, which is much closer to my score in test datasets. </p>
<h2>Modeling</h2>
<h3>Loss function</h3>
<p>I used binary cross entropy for each target after applying rank-based min-max scaling. Min-max scaling is same with the one mentioned in 3rd place solution (<a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129927">https://www.kaggle.com/c/google-quest-challenge/discussion/129927</a>). The use of min-max scaling is based on my hypothesis on target values. For example, I assume 3/9 for question_well_written means very strong "No". </p>
<p>In addition, I replaced each target value with its rank before min-max scaling. This boosts the models' performance on labels with skewed target value distributions. The following figure is the distribution of answer_well_written's target values. In this case, separating questions with target values 8/9 and 1 has more impact on Spearman's rho than separating questions with target values 0.5 and 8/9. Rank-based min-max scaling converts the set of target values from {3/9, 4/9, 1/2, 5/9, 6/9, 7/9, 5/6, 8/9, 9/9} to {0, 0.001, 0.004, 0.008, 0.042, 0.137, 0.233, 0.482, 1.000} and encourages models to detect the differences between questions that have a larger impact on Spearman's rho. I got +0.01 from rank-based min-max scaling in CV.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F145393%2Fe3c21971901fdc4cec33b6691f0835b6%2FScreenshot%20from%202020-02-12%2000-50-39.png?generation=1581436276754519&alt=media" alt="answer_well_written"> </p>
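<p>A minimal sketch of rank-based min-max scaling; the toy column below is made up, and the mapped values depend on how often each target value occurs in the column:</p>
```
import numpy as np
from scipy.stats import rankdata

def rank_minmax(y):
    """Replace each target with its (tie-averaged) rank, then min-max scale to [0, 1]."""
    r = rankdata(y, method='average')
    return (r - r.min()) / (r.max() - r.min())

# toy skewed column: rare low values get squeezed towards 0, frequent high values spread out
y = np.array([3/9] * 2 + [0.5] * 8 + [8/9] * 40 + [1.0] * 50)
scaled = rank_minmax(y)
for v in sorted(set(y)):
    print(f'{v:.3f} -> {scaled[y == v][0]:.3f}')
```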
<h3>Architecture</h3>
<p>My solution is based on two architectures. </p>
<strong>Model 1</strong> (Public LB = 0.392, CV = 0.383)
<p>It consists of a single BERT (or XLNet) encoder. Input comprises of question_title (no truncation), question_body (head-tail truncation), and answer (head-trail truncation). The figure below depicts the overview of its structure. Although all labels share the same encoder, each label has its own independent classification head. Features for each label are collected from three sections with three pooling methods, max-pooling, avg-pooling, and attention-pooling. Then, collected features are passed to a dropout and a single linear layer. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F145393%2Ff38ab7835bc79641a027e635c97b4711%2Fmodel1.jpg?generation=1581517455669978&alt=media" alt="model1"></p>
<strong>Model 2</strong> (Public LB = N/A, CV = 0.390)
<p>My second architecture consists of two encoders. It's inspired by <a href="https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer">https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer</a>. The figure below depicts the overview of its structure. The first encoder takes question_title (no truncation) and question_body (head-tail truncation) as its input, and the second one takes question_title (no truncation) and answer (head-tail truncation) as its input. Its head classifier's structure and a features extraction method is almost same with Model 1.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F145393%2F39fb09d700dcec1777d1f84248d008dd%2Fmodel2.jpg?generation=1581517473360813&alt=media" alt="model2"></p>
<ul>
<li>Ensemble:
<ul><li>Model 1 (BERT) * 0.2 + Model 1 (XLNet) * 0.2 + Model 2 (BERT) * 0.3 + Model 2 (XLNet) * 0.3 </li>
<li>CV: 0.404</li></ul></li>
<li>Training settings
<ul><li>Model 1 </li>
<li>BERT: batch size = 12, optimizer = AdamW, lr = 5e-5</li>
<li>XLNet: batch size = 10, optimizer = AdamW, lr = 4.5e-5</li>
<li>Model 2</li>
<li>BERT: batch size = 12 (6 * 2), optimizer = AdamW, lr = 5e-5</li>
<li>XLNet: batch size = 12 (4 * 3), optimizer = AdamW, lr = 4.5e-5</li></ul></li>
</ul>
<h2>Post Processing</h2>
<p>I tried to find a list of segments that should have the same values for each label, using a greedy algorithm on OOF predictions.
1. First, the algorithm tries to find a value x1 such that Spearman's rho on the OOF predictions is maximized by replacing prediction values in [0, x1) with x1 / 2.
2. Then, it tries to find a value x2 (>= x1) such that Spearman's rho improves when replacing prediction values in [x1, x2) with (x1 + x2) / 2.
3. It keeps repeating this procedure while it successfully finds such an xi. </p>
<p>I checked the score improvement on all labels by 3-fold validation, and decided to use this post-processing on the labels where I saw a score improvement larger than 5e-4. This post-processing increased both CV and LB scores by around 0.03. </p>
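<p>A rough sketch of that greedy search as I read it; the candidate grid and the stopping rule are my assumptions:</p>
```
import numpy as np
from scipy.stats import spearmanr

def greedy_segments(oof_pred, oof_true, grid=np.linspace(0.01, 1.0, 100)):
    """Greedily grow segments [lo, hi); predictions inside a segment are collapsed to its
    midpoint whenever doing so improves Spearman's rho on the OOF predictions."""
    pred = oof_pred.astype(float).copy()
    lo, cuts = 0.0, []
    best = spearmanr(oof_true, pred)[0]
    while lo < 1.0:
        best_hi = None
        for hi in grid[grid > lo]:
            cand = pred.copy()
            cand[(oof_pred >= lo) & (oof_pred < hi)] = (lo + hi) / 2
            rho = spearmanr(oof_true, cand)[0]
            if rho > best:
                best, best_hi = rho, hi
        if best_hi is None:           # no further segment improves the score
            break
        pred[(oof_pred >= lo) & (oof_pred < best_hi)] = (lo + best_hi) / 2
        cuts.append(best_hi)
        lo = best_hi
    return cuts
```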
<h2>External data for predicting question_type_spelling </h2>
<p>If we open an URL for some stackexchange page (<a href="https://english.stackexchange.com/questions/522357/when-did-spelling-ic-words-ick-start-stop-being-popular">https://english.stackexchange.com/questions/522357/when-did-spelling-ic-words-ick-start-stop-being-popular</a>), we can see there are some tags at the bottom of question bodies. There are tags that are closely related to spelling (e.g., "orthography" and "suffixes"), and we can retrieve tags for almost all questions in the competition's datasets. </p>
<p>I collected the top-200 tags that most frequently appear with "orthography" and chose 70 of them by removing irrelevant ones like "american-english". I made a very simple classifier that just returns (host is "english.stackexchange.com" or "ell.stackexchange.com") & (question has one of the 70 tags). This classifier gives me a high Spearman's rho (0.5-0.7) for question_type_spelling on the training dataset and the public test dataset. However, its score looks worse than scores obtained by the standard combination of model predictions and post-processing. I'm so sad...</p>
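<p>For reference, the classifier described above is essentially a two-condition rule. A minimal sketch follows; the actual 70 hand-picked tags are the author's and are not reproduced here:</p>
```
SPELLING_HOSTS = {'english.stackexchange.com', 'ell.stackexchange.com'}

def spelling_score(host, tags, spelling_tags):
    """1.0 if the question is on an English site and carries a spelling-related tag, else 0.0."""
    return float(host in SPELLING_HOSTS and bool(set(tags) & spelling_tags))

# illustrative usage with a couple of example tags
print(spelling_score('english.stackexchange.com', ['orthography', 'verbs'],
                     {'orthography', 'suffixes'}))  # 1.0
```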
Google QUEST Q&A Labeling | Private LB 10th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to the winners and Thanks to my teammates. ( <a href="/theoviel">@theoviel</a> <a href="/cl2ev1">@cl2ev1</a> <a href="/mathurinache">@mathurinache</a> <a href="/titericz">@titericz</a> ). Here's a brief overview of things that matter in our solution</p>
<h3>Loss function</h3>
<ul>
<li>We stuck with BCE for a long time and couldn't improve results by changing it. We did, however, manage to find weights that nicely fitted our post-processing policy.</li>
</ul>
<p><code>
loss_fct = nn.BCEWithLogitsLoss(pos_weight=torch.Tensor([
0.9, 1, 1.5, 0.8, 0.8, 0.8, 0.96, 1.1, 1.1, 3, 1, 1.1, 2, 3, 3, 2, 1, 2, 1, 2, 0.9, 0.75, 0.9, 0.75, 0.75, 0.7, 1, 2.5, 1, 0.75]))
</code></p>
<ul>
<li>Multiple other experiments with custom loss functions were conducted, but BCE performed the best.</li>
</ul>
<h3>Training stuff</h3>
<ul>
<li>1 epoch with the encoder frozen + 2 epochs with everything unfrozen</li>
<li>Linear lr scheduling, with a custom learning rate depending on the layer. The transformer has a lower one as it is pretrained, and the closer a layer is to the output, the larger its learning rate (see the sketch after this list)</li>
<li>AdamW with <code>betas=(0.5, 0.999)</code> and a no bias decay of 1</li>
<li>Batch size is 64 for the first epoch, and then the largest we can fit on our GPUs, sometimes with an accumulation step</li>
</ul>
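<p>A minimal sketch of per-layer learning rates via optimizer parameter groups. The module names (<code>transformer</code>, <code>pooler</code>, <code>logits</code>) and the learning-rate values are illustrative, not the exact ones the team used:</p>
```
import torch

def build_optimizer(model, encoder_lr=3e-5, head_lr=1e-3):
    # lower LR for the pretrained transformer, larger LR the closer a module is to the output
    param_groups = [
        {'params': model.transformer.parameters(), 'lr': encoder_lr},
        {'params': model.pooler.parameters(),      'lr': head_lr / 2},
        {'params': model.logits.parameters(),      'lr': head_lr},
    ]
    return torch.optim.AdamW(param_groups, betas=(0.5, 0.999))
```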
<h2>Models</h2>
<ul>
<li><p>Our solution is an ensemble of 4 models (3 BERT-Large + 1 BERT-Base)</p></li>
<li><p>We ran most experiments using the bert-base-uncased architecture, and managed to build a strong pipeline about 1 week before the end of the competition. This enabled us to switch easily to bigger models, which in the end was the strength of our ensemble. </p></li>
<li><p>We built 4 different architectures on top of the transformer, and used two of them overall.</p></li>
</ul>
<h3>Bert Base Uncased</h3>
<ul>
<li><p>Here is the idea about the custom arch one we picked for our bert-base approach :</p>
<ul><li>Input is <code>[CLS] title [Q] question [A] answer [SEP]</code> and <code>[0, … 0, 1, …., 1, 2, …, 2]</code> for ids. We initialized the special tokens with the values of the <code>[SEP]</code> token. Custom ids were also initialized with the model values.</li>
<li>Custom head : take the <code>[CLS]</code> token of the last <code>n=8</code> layers, apply a dense of size <code>m=1024</code> and a <code>tanh</code> activation, then concatenate everything. </li>
<li>Embeddings for the <code>category</code> and <code>host</code> columns (+ <code>tanh</code>). We concatenate them with the output of the custom pooler to obtain the logits.
Some text cleaning (latex, urls, spaces, backslashes) was also applied</li></ul></li>
<li><p>This model is the only one that uses text cleaning and embeddings, it helps for diversity I guess. </p></li>
</ul>
<h3>Bert Larges</h3>
<ul>
<li>They rely on the same architecture. We use two inputs: <br>
<ul><li><code>[Q] title [SEP] question [SEP]</code> for tokens and <code>[0, … 0, 1, …., 1]</code> for ids</li>
<li><code>[A] title [SEP] answer [SEP]</code> for tokens and <code>[0, … 0, 1, …., 1]</code> for ids</li>
<li>[Q] and [A] start with the value of the [CLS] token</li></ul></li>
<li>Again, a custom pooling head. Values of <code>n</code> and <code>m</code> are below. </li>
<li><p>Depending on the column, we predict the value with only the pooled [Q] token, only the pooled [A] token, or both. The policy chosen is the following:
<code>
self.mix = [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 2, 0, 0, 0, 0, 0, 0, 0, 2] # 0 = both, 1 = q, 2 = a
</code></p></li>
<li><p>Concerning the pooler, it differs a bit :</p>
<ul><li>Bert large cased : <code>n = 24</code>, <code>m = 128</code></li>
<li>Bert large uncased : <code>n = 24</code>, <code>m = 128</code></li>
<li>Bert large uncased wwm : <code>n = 8</code>, <code>m = 768</code></li></ul></li>
</ul>
<h2>Post-processing</h2>
<p>Detailed here : </p>
<p><a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129901">https://www.kaggle.com/c/google-quest-challenge/discussion/129901</a></p>
<h2>Results</h2>
<h3>Single model</h3>
<ul>
<li><p>Our best model is the BERT-Large Whole Word Masking one, trained using the weighted BCE loss. It gave: </p></li>
<li><p>With Post-Processing:</p>
<ul><li>Private LB : <strong>0.41919</strong></li>
<li>Public LB : <strong>0.46428</strong></li>
<li>CV: <strong>0.454</strong></li></ul></li>
<li><p>Without post-processing :</p>
<ul><li>Private LB : <strong>0.38127</strong></li>
<li>Public LB : <strong>0.40737</strong></li>
<li>CV: <strong>0.415</strong></li></ul></li>
<li><p>Which is a +0.06 boost on public and +0.04 on private. As you can see our single models are not that strong.</p></li>
</ul>
<h3>Ensemble</h3>
<ul>
<li>Our best selected solution is a simple average of the 4 mentioned models.
<ul><li>Private LB : <strong>0.42430</strong></li>
<li>Public LB : <strong>0.47259</strong></li></ul></li>
</ul>
<p>Thanks for reading, happy kaggling !</p> |
Google QUEST Q&A Labeling | How We Found "Magic" (13th Solution Overview) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is my first NLP competition and I am grateful to have gained my first gold. It was a great pleasure working with my talented and extremely hard working teammates <a href="/chanhu">@chanhu</a> <a href="/xiaojiu1414">@xiaojiu1414</a> <a href="/serigne">@serigne</a> <a href="/seif95">@seif95</a>.</p>
<p>Our ensemble is nothing different from what other teams have shared, which includes bert base, bert large, xlnet, and roberta. I wanted to share a little bit my thought process on the submission trick. </p>
<h3><strong>Before that</strong></h3>
<p>a few modeling tricks we have used are
1) separate regression heads for questions and answers,
2) head and tail tokenization
3) concatenated [CLS] tokens as regression input
4) USE difference and entity embedding as external features in Bert</p>
<p>I personally thought the magic was going to be clever loss functions and I tried many different losses including customized BCE based on ranking, but nothing significantly outperformed vanilla BCE. One thing worth noting is that, in the early stage of the competition, I trained an MSE model, which gave me a decent boost after simple blending with my BCE models, but it does not work well with the "magic".</p>
<h3><strong>Back to the "magic"</strong></h3>
<p>My teammate has shared <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129926">our post-processing code</a>. I am going to explain how we came up with each one of them. </p>
<p>The first small trick is the culture list. This part was only meant for the 19th column, which is <code>question_type_spelling</code>. This came from the observation that this column is only non-zero when the host is <code>english.stackexchange.com</code> or <code>ell.stackexchange.com</code>.</p>
<p>The second (and the more significant) part of the trick is building out equalities among the predictions, based on <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/118724">this discussion</a>. </p>
<p><em><strong>The very first attempt</strong></em> was to round the output assuming there were 90 raters. I believe many of you might be familiar with this assumption. This gave us a big boost about 0.03. This was when I found out that my blending of BCE and MSE did not work so well when applying this trick. The blended model performed far worse than my single BCE model after post-processing. </p>
<p><em><strong>An improvement</strong></em> on this trick came from a deeper look at the train labels. Assuming the 90-rater assumption were true, if I were among the raters, I would have given random points to some of the questions. This means if these scores are really given by humans, there should be 90 (or close) unique values for each column. It was easy to deduct the actual number of raters for each column based on the number of unique values. One comforting fact is that the lists of unique values are always the same for the columns with the same number of raters. </p>
<p><em><strong>This improvement did NOT work as I expected initially.</strong></em> But I really believed this was the way to go based on my observations. What I then realized was that, because the rounding grid implied by the rater counts is coarse and the original predictions of some columns were really small, the rounding might have squeezed many of the predictions to 0. <strong><em>Min-max scaling</em></strong> these predictions first followed naturally, considering the competition metric only cares about ranking rather than the actual values. Scaling gave us another ~0.02 boost. </p>
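<p>A small sketch of the two steps described above - deducing a rounding denominator per column from the unique training values, then min-max scaling before snapping. This is my own wording of the idea, not the exact competition code:</p>
```
import numpy as np

def column_denominator(train_col, max_d=100):
    """Smallest d such that every observed target is (numerically) a multiple of 1/d."""
    uniq = np.unique(train_col)
    for d in range(1, max_d + 1):
        if np.allclose(np.round(uniq * d) / d, uniq, atol=1e-6):
            return d
    return None

def scale_and_round(pred, d):
    # min-max scaling first keeps small predictions from being squeezed to 0 by the rounding
    pred = (pred - pred.min()) / (pred.max() - pred.min())
    return np.round(pred * d) / d
```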
<h3><strong>At the end</strong></h3>
<p>Again, I had great time working on this competition and learning about NLP. But as you can see, the main boost came from post-processing. I guess one takeaway is that if you hit the bottleneck of a competition, always take another look at the data. </p>
<p>Cheers!</p> |
Google QUEST Q&A Labeling | our magics | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to the winners. It was a hard and fun competition. Thanks to my teammates ( @Yang Zhang @chizhu <a href="/chensheng">@chensheng</a> @Guan_Yuhang ). </p>
<p>We found some magic in this competition.</p>
<h1>1. We split the 30 targets into two parts.</h1>
<p>One is the question part, the other is the answer part. The question part has 21 targets and the answer part has 9 targets. We trained two models to predict the two parts.
This let us get a higher score (0.39x) at the very beginning. </p>
<h1>2. We use the customed model for transformers.</h1>
<p>a) Add additional layers. For the roberta base model, we add a 13th layer with random initialization; for the roberta large model, a 25th layer with random initialization.
b) Concatenate the CLS token embedding and the SEP token embedding together. </p>
<p>c) Concatenate the category features and the length features</p>
<h1>3. Pre-processing of the data</h1>
<p>We tried many pre-processing methods for the transformers. Only one was useful:
<code>def cln(x): return " ".join(x.split())</code></p>
<p>This very simple function boosted the roberta base LB score by 0.005.</p>
<h1>4. We used external data for training.</h1>
<p>It comes from <a href="https://www.kaggle.com/stackoverflow/stackoverflow">https://www.kaggle.com/stackoverflow/stackoverflow</a>.
We got the idea from the paper 'Self-training with Noisy Student improves ImageNet classification': the teacher produces a large set of weak pseudo labels, and the students train on them. First we used one model to predict pseudo labels for the unlabeled data.
Then we used those pseudo labels to train our models. </p>
<p>With 6k pseudo labels, we boosted the LSTM model LB score from 0.365 to 0.372 (private score from 0.332 to 0.343).
With 50k pseudo labels, we boosted the roberta base LB score from 0.405 to 0.415.
With 100k pseudo labels, the roberta base LB score was still 0.415.</p>
<p>We did not have much time to train more powerful student models (a bare-bones sketch of the procedure follows). </p>
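<p>A bare-bones sketch of the noisy-student style pseudo labelling described above. <code>teacher</code>, <code>student</code>, <code>external_df</code> and the column lists are placeholders for the real training pipeline:</p>
```
import pandas as pd

# 1) the teacher (a model already trained on the labelled train set) soft-labels external data
pseudo = external_df.copy()
pseudo[target_cols] = teacher.predict(external_df[text_cols])

# 2) the student trains on the original labels plus the soft pseudo labels
student_train = pd.concat([train_df, pseudo], ignore_index=True)
student.fit(student_train[text_cols], student_train[target_cols])
```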
<h1>5. Post-process for data</h1>
<p>It is the big magic of this competition.
Without post-processing, our roberta large model got an LB score of 0.411.
When we used a simple rounding post-process, it jumped to 0.448 on the LB. </p>
<p>Then we combined several optimization methods for the post-processing, and these made roberta large jump to an LB score of 0.46x. A very big boost!</p>
<p>Thanks to my teammates again. You are geniuses.</p>
<h1>6. Ensemble</h1>
<p>We ensembled 6 models (roberta large, roberta base, bert base and xlnet; the best model is roberta base with LB 0.415). We applied the post-processing to them and got the final score.</p>
<h1>What did not work:</h1>
<ol>
<li>Data augmentation.
We spent much time on it, but did not get a boost.</li>
<li>Loss functions. We tried MSE loss, dice loss, and focal loss; they did not boost our scores</li>
<li>Training one model per target. </li>
<li>lstm+use. We could not get a high score for lstm+use. We want to see <a href="/sakami">@sakami</a>'s lstm solution.</li>
</ol> |
Google QUEST Q&A Labeling | 23th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, thanks to my teammate @agatan .</p>
<h2>Pre-processing</h2>
<ul>
<li>apply html unescape to sentences</li>
<li>sentences longer than the maximum length were trimmed, keeping the head and tail parts</li>
</ul>
<h2>Modeling</h2>
<ul>
<li>We trained a Question BERT-base model and a Question-and-Answer BERT-base model separately on kernels</li>
<li>The Question model predicts the question targets and the Q-A model predicts all targets</li>
<li>3-fold CV and 3 epochs with GroupKFold</li>
<li>BCE + margin ranking loss (a rough sketch follows this list)</li>
</ul>
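<p>A hedged sketch of combining BCE with a margin ranking term; the within-batch random pairing, margin and weighting are my guesses, since the write-up does not specify them:</p>
```
import torch
import torch.nn.functional as F

def bce_plus_margin_ranking(logits, targets, margin=0.1, alpha=0.5):
    bce = F.binary_cross_entropy_with_logits(logits, targets)

    # pair each row in the batch with a random other row and ask the model
    # to rank the pair in the same order as the labels, per target column
    perm = torch.randperm(logits.size(0))
    p1, p2 = torch.sigmoid(logits).flatten(), torch.sigmoid(logits[perm]).flatten()
    sign = torch.sign(targets - targets[perm]).flatten()
    mask = sign != 0                       # skip tied pairs
    if mask.any():
        rank = F.margin_ranking_loss(p1[mask], p2[mask], sign[mask], margin=margin)
    else:
        rank = logits.new_zeros(())
    return bce + alpha * rank
```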
<h2>Post-processing</h2>
<ul>
<li>used lightgbm as the second-stage stacking model</li>
<li>max_depth=1 and lr=0.1 were best in my experiments</li>
<li>Input meta features such as text length to lightgbm in addition to the predicted values</li>
</ul>
<h2>Best model</h2>
<ul>
<li>CV: 0.3851</li>
<li>Public: 0.45979</li>
<li>Private: 0.41440</li>
</ul>
<p>We shared our kernel, please see if you like.
<a href="https://www.kaggle.com/shuheigoda/23th-place-solusion">https://www.kaggle.com/shuheigoda/23th-place-solusion</a></p> |
Google QUEST Q&A Labeling | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I'd like to say thank you to the organizers and all teams, and congrats to all the winners.
Here is a brief summary of my solution.
- Solution overview
- Model architectures
- Training strategy
- Post-processing
- Blending strategy
- What didn't work for me</p>
<h2>Solution overview</h2>
<ul>
<li>text columns + category column</li>
<li>various text truncation
<ul><li>pre-truncate</li>
<li>post-truncate</li>
<li>head + tail tokens (<a href="https://arxiv.org/abs/1905.05583">https://arxiv.org/abs/1905.05583</a>)</li>
<li>assign longer max_len to answer</li></ul></li>
<li>ensemble of following models:
<ul><li>LSTM + Universal Sentence Encoder</li>
<li>BERT base uncased * 2</li>
<li>BERT base cased</li>
<li>BERT large uncased * 2</li>
<li>BERT large cased * 2</li>
<li>ALBERT base</li>
<li>RoBERTa base</li>
<li>GPT2 base</li>
<li>XLNet base</li></ul></li>
</ul>
<h2>Model architectures</h2>
<p>Surprisingly, stacking 2 Linear layers <strong>without</strong> activation performed better than single Linear layer in some cases. (I found it by a mistake. :P)
Some of the following models have category embedding structure additionally.</p>
<h3>LSTM model</h3>
<p>You can check my LSTM code <a href="https://www.kaggle.com/sakami/google-quest-single-lstm?scriptVersionId=28487242">here</a>.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2F5ba7064f215c31d377bb991275d5fa5b%2F2020-02-11%2017.29.36.png?generation=1581409799179315&alt=media" alt=""></p>
<h3>BERT models</h3>
<p>I used 2 types of model architectures.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2Ff5da89f1cff533e174ae24d1cc459da2%2F2020-02-11%2017.14.05.png?generation=1581409945426377&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2F80a15ecbc3f970aa7fad8308d01f2af1%2F2020-02-11%2017.14.17.png?generation=1581409964611182&alt=media" alt=""></p>
<h3>ALBERT model</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2F506270dad509f130a53c50d177bd4861%2F2020-02-11%2020.12.38.png?generation=1581419591299259&alt=media" alt=""></p>
<h3>RoBERTa model</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2Fd877c80ce341c7be39715eb3534627d9%2F2020-02-11%2017.14.26.png?generation=1581410441581392&alt=media" alt=""></p>
<h3>GPT2 model</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2Ff123dc23ada1495c68b5be43ccf0241a%2F2020-02-11%2017.14.44.png?generation=1581410465561097&alt=media" alt=""></p>
<h3>XLNet model</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2F483691fcbd3e72cb5a6053e5c8d4873a%2F2020-02-11%2017.14.55.png?generation=1581410490556290&alt=media" alt=""></p>
<h2>Training strategy</h2>
<ul>
<li>Min-Max target scaling
<code>
t_max = train_y.max(axis=0)[np.newaxis, :]
t_min = train_y.min(axis=0)[np.newaxis, :]
train_y = (train_y - t_min) / (t_max - t_min)
</code></li>
<li>Large weights for minor positive or negative samples
<code>
zero_inflated = (train_y &gt; 0).mean(axis=0) &lt; 0.1
positive_weighted = np.tile(zero_inflated, (len(train_y), 1))
positive_weighted *= (train_y &gt; 0)
one_inflated = (train_y &lt; 1).mean(axis=0) &lt; 0.1
negative_weighted = np.tile(one_inflated, (len(train_y), 1))
negative_weighted *= (train_y &lt; 1)
train_weights = np.where(positive_weighted + negative_weighted, 2., 1.)
</code></li>
<li>gelu_new activation for BERT-based models</li>
<li>cosine warmup scheduler</li>
<li>EMA</li>
</ul>
<h2>Post-processing</h2>
<p>I just clipped predictions. The thresholds are decided by golden section search.
This improved my CV score from 0.4434 to 0.4852.</p>
<p>before post-processing scores:
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2Ff8ebe510f4885db659da10499a79daea%2Fnewplot.png?generation=1581416684489031&alt=media" alt=""></p>
<p>after post-processing scores:
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2F04f51a001c9f734fed93003cfe8e935b%2Fnewplot%20(1).png?generation=1581416716829926&alt=media" alt=""></p>
```
class OptimizedRounder(BaseEstimator, TransformerMixin):

    def __init__(self):
        self.threshold = [0., 1.]
        self.ab_start = [(0., 0.2), (0.8, 1.)]

    def fit(self, train_labels, train_preds):
        assert train_labels.shape == train_preds.shape
        assert train_labels.ndim == 1

        self.best_score = self.score(train_labels, train_preds)

        self._golden_section_search(train_labels, train_preds, 0)  # lower threshold
        score = self.score(train_labels, train_preds)
        if score > self.best_score + 1e-3:
            self.best_score = score
        else:
            self.threshold[0] = 0.

        self._golden_section_search(train_labels, train_preds, 1)  # higher threshold
        score = self.score(train_labels, train_preds)
        if score > self.best_score + 1e-3:
            self.best_score = score
        else:
            self.threshold[1] = 1.

    def _golden_section_search(self, train_labels, train_preds, idx):
        # idx == 0 -> lower threshold search
        # idx == 1 -> higher threshold search
        golden1 = 0.618
        golden2 = 1 - golden1

        for _ in range(10):
            a, b = self.ab_start[idx]

            # calc losses
            self.threshold[idx] = a
            la = -self.score(train_labels, train_preds)
            self.threshold[idx] = b
            lb = -self.score(train_labels, train_preds)

            for _ in range(20):
                # choose value
                if la > lb:
                    a = b - (b - a) * golden1
                    self.threshold[idx] = a
                    la = -self.score(train_labels, train_preds)
                else:
                    b = b - (b - a) * golden2
                    self.threshold[idx] = b
                    lb = -self.score(train_labels, train_preds)

    def transform(self, preds):
        transformed = np.clip(preds, *self.threshold)
        if np.unique(transformed).size == 1:
            return preds
        return transformed

    def score(self, labels, preds):
        p = self.transform(preds)
        score = scipy.stats.spearmanr(labels, p)[0]
        return score
```
<h2>Blending strategy</h2>
<p>I used TPE optimization, same method as <a href="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/discussion/97471#562472">previous solution</a>.</p>
<p>Blending weights:
<code>
test_preds = lstm_test_preds * 0.1145 + \
bert_uncased_test_preds * 0.0112 + \
bert_uncased_2_test_preds * 0.0911 + \
bert_cased_test_preds * 0.0446 + \
bert_large_uncased_test_preds * 0.1670 + \
bert_large_uncased_2_test_preds * 0.0487 + \
bert_large_cased_test_preds * 0.1094 + \
bert_large_cased_2_test_preds * 0.1369 + \
gpt2_test_preds * 0.0478 + \
albert_test_preds * 0.0120 + \
xlnet_test_preds * 0.1607 + \
roberta_test_preds * 0.0560
</code></p>
<h2>What didn't work for me</h2>
<ul>
<li>Pre-training
<ul><li>masked LM</li>
<li>input-response prediction (<a href="https://arxiv.org/abs/1705.00652">https://arxiv.org/abs/1705.00652</a>)</li></ul></li>
<li>Data augmentation
<ul><li>inverse translation (<a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/48038">https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/48038</a>)</li>
<li>manifold mixup</li>
<li>up-sample minor labels</li></ul></li>
</ul> |
Google QUEST Q&A Labeling | 162nd solution using only BERT | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to the winners. This was our first NLP competition; we tried the following things:</p>
<ol>
<li>At first we built a simple model using <a href="https://www.kaggle.com/mobassir/jigsaw-google-q-a-eda">https://www.kaggle.com/mobassir/jigsaw-google-q-a-eda</a></li>
<li>We tried to tune the hyper-parameters, but that didn't have much effect.</li>
<li>Since we were concatenating Title, Body and Answer into 512 chars, which can't capture the full context, we tried to build models using GloVe and FastText word embeddings, but they didn't have much accuracy.</li>
<li>Then we tried to create separate embeddings for Title + Body and Answer with additional metadata, which worked.</li>
<li>We finally combined the Title + Body and Answer model with additional metadata and a BERT that concatenates title, body and answer, with 10 folds.</li>
</ol>
<h3><strong>Ensemble of all the outputs gave us 0.4003 on LB.</strong></h3>
<p><strong>What didn't work for us:</strong></p>
<ol>
<li>We tried to trim Title, Body and Answer - many have done that to capture the context; I feel we did something wrong here.</li>
<li>We tried to create text summarisation of Body and Answer, but the rho remained the same and we didn't explore this idea much.</li>
<li>We tried to replace the last layer with regression and the loss function with MSE. We could have explored this idea a bit more, as many top scorers did some kind of post-processing.</li>
</ol>
<p><strong>Things we should have done</strong></p>
<ol>
<li>Pre-processing of the data, like adding the category column combined with the title (don't know how I missed this one)</li>
<li>Post-processing of the data to adjust it more to "human level scores"</li>
<li>Building more models like XLNet and RoBERTa</li>
<li>We tried to use LAMB instead of ADAM but due to an error we couldn't progress. (TF 2.x :) )</li>
</ol>
<p>We missed a medal by 5 ranks; hopefully we can do better next time. And not to forget: thank you @vinaydoshi for continuously encouraging us to keep working on this problem.</p>
<p><strong>If anyone is interested here is our notebook.</strong></p>
<p>10 fold BERT concatenating title , body and answer : <a href="https://www.kaggle.com/buntyshah/google-quest-q-a-labeling-8-fold">https://www.kaggle.com/buntyshah/google-quest-q-a-labeling-8-fold</a> </p>
<p>Hugging face BERT with Title + Body and Answer with additional meta-data :<a href="https://www.kaggle.com/vinaydoshi/tfbert-ensemble-preprocess-v1?scriptVersionId=28236333">https://www.kaggle.com/vinaydoshi/tfbert-ensemble-preprocess-v1?scriptVersionId=28236333</a></p>
<p>BERT as regression : <a href="https://www.kaggle.com/buntyshah/google-quest-q-a-labeling-bert-regression?scriptVersionId=26800228">https://www.kaggle.com/buntyshah/google-quest-q-a-labeling-bert-regression?scriptVersionId=26800228</a></p> |
Google QUEST Q&A Labeling | 26 place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all winners! And thanks to organizers!
The competition was very, very difficult for me, but fortunately I got 26th place.
I'll share what I learned.</p>
<h3>modeling</h3>
<p>I used a huggingface transformer and customized the head of the model.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1483555%2F7d88db1391baa10f5ce96f74490d2bd1%2F1.png?generation=1581475630913742&alt=media" alt=""></p>
<p>As you know, I imitated the first-place solution of <a href="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/discussion/103280">Jigsaw</a>.</p>
<p>BERT-base-cased, BERT-base-uncased and XLNet-base-cased with this head were weighted at 2:4:4.</p>
<h3>Parameter</h3>
<ul>
<li>10 GroupFold</li>
<li>epoch: 3</li>
<li>batch size: 8</li>
<li>optimizer: AdamW
<ul><li>OneCycleLR</li>
<li>max_lr=5e-5 (decay*0.9)</li></ul></li>
<li>loss: BCE + MSE(MSE weighted by label frequency)</li>
</ul>
<h3>Token</h3>
<p>I added a category token, like below:</p>
<p><code>
[CLS] [CATEGORY_TOKEN] question_title [SEP] question_body [SEP] answer [SEP]
</code>
It raised my CV by 0.05.</p>
<h3>Optimize binning</h3>
<p>I read <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/119177">this Discussion</a>.
I calculated the common denominator for each label and binned the predicted values.
The binning was optimized with features like <a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/76107">OptimizeRounder</a> (a rough sketch follows).</p>
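<p>A minimal sketch of the binning idea under my assumptions: the denominator is deduced from the train labels, and the binning is kept per column only when it helps the OOF Spearman's rho. The actual OptimizedRounder-style threshold optimization is not reproduced here; <code>train</code>, <code>oof</code> and <code>target_cols</code> are placeholders.</p>
```
import numpy as np
from scipy.stats import spearmanr

def common_denominator(train_col, max_d=100):
    uniq = np.unique(train_col)
    for d in range(1, max_d + 1):
        if np.allclose(np.round(uniq * d) / d, uniq, atol=1e-6):
            return d

use_binning = {}
for col in target_cols:                        # target_cols: the 30 label names (placeholder)
    d = common_denominator(train[col].values)
    binned = np.round(oof[col].values * d) / d
    before = spearmanr(train[col], oof[col])[0]
    after = spearmanr(train[col], binned)[0]
    use_binning[col] = after > before          # keep the binning only where OOF rho improves
```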
<h3>not work for me</h3>
<ul>
<li>pseudo label</li>
<li>MultiSample Dropout</li>
<li>SWA</li>
<li>AdaBound</li>
<li>FocalLoss</li>
<li>RoBERTa, ALBERT</li>
<li>Prediction with Special Token for each target</li>
</ul>
<h3>my code</h3>
<ul>
<li>train: <a href="https://github.com/trtd56/KaggleQuest">https://github.com/trtd56/KaggleQuest</a></li>
<li>predict : <a href="https://www.kaggle.com/takamichitoda/26th-place-solution?scriptVersionId=28524791">https://www.kaggle.com/takamichitoda/26th-place-solution?scriptVersionId=28524791</a></li>
</ul>
<p>Thank you!!</p> |
Google QUEST Q&A Labeling | 12th place post-processing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|>```
raters = np.array([18, 18, 6, 6, 6, 6, 18, 18, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 3, 18, 18, 18, 18, 18, 90, 6, 6, 6, 18])

mins = np.min(all_preds, axis=0)
maxs = np.max(all_preds, axis=0)
all_preds = (all_preds - mins) / (maxs - mins)

all_preds = np.round(raters * all_preds).astype(np.float) / raters
```
<p>If you have time please try this on your submission, I'm really interested to know how much it can boost your own score.</p>
Google QUEST Q&A Labeling | 21st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, I want to thank my teammates <a href="/brightertiger">@brightertiger</a> <a href="/khyeh0719">@khyeh0719</a> <a href="/arvissu">@arvissu</a> <a href="/qbenrosb00">@qbenrosb00</a>; we did great work together. And thanks to Google and Kaggle for hosting a good competition. I will briefly summarize our methods.</p>
<h2>Preprocessing</h2>
<p>We tried some preprocessing before training the BERT models and found that CV and LB were similar to no preprocessing. To keep the code clean, we didn't do any preprocessing when training the SoTA models. Moreover, we used the head and tail parts of the texts as model input.</p>
<h2>Input</h2>
<p>Like other teams, we trained two BERT models: one for the question labels (21) and one for the answer labels (9).
* Question : <code>[cls]+ title+[sep]+question+[sep]</code>
* Answer : <code>[cls]+title+[sep]+question+[sep]+answer[sep]</code></p>
<h2>Modeling</h2>
<p>We tried BERT with different architectures.
* Vanilla BERT: 0.35x LB
* <a href="https://www.kaggle.com/m10515009/customizedbert-pytorch-version-training">Customized BERT+head and tail part of the texts:</a> 0.392 LB
* Separate two BERT for question and answer: 0.396 LB
* Separate two BERTs for the question and answer with special tokens: CV 0.415, LB 0.405</p>
<p>We found that using two BERTs with special tokens gives the best result on the LB.
Therefore we applied the same method to train GPT2 and RoBERTa.</p>
<p>We also used BERT outputs as embeddings to train an LSTM and got great results:
CV: 0.418, LB: 0.396</p>
<p>The summary of the models we tried:
* BERT-base CV 0.415 LB 0.405
* RoBERTa CV 0.411 LB 0.398
* GPT2 CV: 0.418 LB: 0.396
* BERT-large
* BERT-large-wwm
* BERT-large-wwm-squad
* BERT-RNN CV: 0.418 LB: 0.396
* Pretrained Embedding + NN
* RNN+USE</p>
<h2>Ensemble</h2>
<p>The models we used for ensemble are BERT, RoBERTa, GPT2, BERT-RNN. We only used simple average for ensemble. </p>
<h2>Postprocessing</h2>
<p>We only used the threshold method for postprocessing and got a ~0.004 improvement on the LB. I don't like the postprocessing method. However, in this competition, <strong>postprocessing is all you need</strong>.</p>
```
def postProcessing(x):
    x = np.where(x >= 0.9241, 1.0, x)
    x = np.where(x <= 0.0759, 0.0, x)
    return x

targets = ['question_conversational',
           'question_type_compare',
           'question_type_consequence',
           'question_type_definition',
           'question_type_entity',
           'question_type_choice']

sub.loc[:, targets] = postProcessing(sub.loc[:, targets].values)
```
<h2>Not work for us</h2>
<ul>
<li>BERT-large</li>
<li>Pretrained BERT-base on the SQuAD2 dataset.</li>
<li>Gradient accumulation </li>
</ul> |
Google QUEST Q&A Labeling | 16th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First I would like to thank my teammates <a href="/abhishek">@abhishek</a> , <a href="/atsunorifujita">@atsunorifujita</a> , <a href="/pvduy23">@pvduy23</a> for a very good team work.</p>
<h2>Solution Highlights</h2>
<h2>Preprocessing</h2>
<p>We didn't really do any preprocessing to the text, we tried some cleaning which were aimed especially to make the parts with code more readable (shorter) but it didn't make any difference.</p>
<h3>Loss function</h3>
<p>I'm surprised I didn't see more reference to this in the winning posts, but here we did a little trick. Instead of using the target columns as-is, we first ranked and normalized them to [0,1] and then used them as targets, something like
> target = scipy.stats.rankdata(target,'average')
target = (target-target.min())/ (target.max()-target.min())</p>
<p>(I saw a CV improvement of about 0.01 - 0.05.)
As for the loss itself, I used MSE and some other teammates used MSE+BCE.</p>
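<p>A minimal sketch of that transform applied column-wise (assuming <code>train_targets</code> is an (n_samples, 30) numpy array; names are illustrative):</p>
```
import numpy as np
from scipy.stats import rankdata

def rank_normalize(train_targets):
    out = np.zeros_like(train_targets, dtype=np.float64)
    for j in range(train_targets.shape[1]):
        r = rankdata(train_targets[:, j], method="average")
        # min-max scale the ranks to [0, 1]; guard against constant columns
        out[:, j] = (r - r.min()) / max(r.max() - r.min(), 1e-9)
    return out
```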
<h2>Models</h2>
<p>The models we used were RoBERTa-Large, XLNet, and BERT-base-uncased.
We also tried some other models, like BERT-large, RoBERTa-base, XLMRoBERTa and others.
(Not choosing XLMRoBERTa was a mistake - will be discussed later)
We also tried to pre-train and fine-tune bert-base on stackexchange corpus, but it didn't seem to improve CV.</p>
<p>To handle long text, I used the full <code>question-title</code> + equal parts from <code>question-body</code> and <code>answer</code>, truncated by taking the beginning and the end.</p>
<p>We used 5 folds for every model.</p>
<h3>Augmentation</h3>
<p>I Tried several augmentations:
* Replacing random tokens with [PAD] or with another token - 20% [PAD] seemed to give nice results
* Translating to German and back - didn't work (Russian also didn't work)
* Truncating random parts when the text is too long - didn't make any difference</p>
<h2>Post - Processing</h2>
<p>Postprocessing was very important in this competition due to the special characteristics of spearman correlation</p>
<h3>Spearman</h3>
<p>We ended up clipping some of the output columns. The columns we clipped were those where most of the target values were 0 or where most of them were 1. The clipping threshold was chosen to preserve the same share of non-zero values in test as in train. This method improved CV and LB by ~0.03-0.04.</p>
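<p>A small sketch of that rule for a single zero-dominated column (my reconstruction of the idea, not the exact code):</p>
```
import numpy as np

def clip_to_train_ratio(test_pred_col, train_nonzero_ratio):
    # pick the threshold so the same share of predictions stays non-zero as in train
    thr = np.quantile(test_pred_col, 1.0 - train_nonzero_ratio)
    return np.where(test_pred_col > thr, test_pred_col, 0.0)
```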
<h3>Question-Type-Spelling</h3>
<p>As the number of unique non-zero examples for this class is very low, we understood that no ML model could really do reliable work here, so we tried some heuristics. We ended up setting this column to 1 for all questions whose host was 'english.stackexchange.com' or 'ell.stackexchange.com'.
We also tried a more ambitious trick based on selecting different key words, but this didn't work on the private LB.</p>
<h2>Stacking</h2>
<p>We tried different types of stacking and averaging. At the end it looks like simple averaging was as good as any other method.</p>
<h2>Where did we fail?</h2>
<ul>
<li><p>We didn't use enough models - we used 5 folds for every model, which limited the number of different models we could run in 2h. For this reason we (I!) neglected some good models like XLMRoBERTa, which gave good results but didn't seem better than other models. The best practice is probably to use more diversified models and drop some of the folds from each model.</p></li>
<li><p>CV - It might be because of the spearman metric, or because the training set was too small for 30 classes, or we did something wrong, but the CV wasn't reliable enough for us (compared to itself, the public LB and the private LB), which meant we didn't make the right choices when selecting models and parameters.</p></li>
</ul>
<h3>My Last observation</h3>
<p>At some points in this competition it seems like the competition is not about NLP but about tricks. <strong>This is a mistake. This competition is about NLP</strong> and eventually tricks were only a small part of the full picture, and the winning teams were the teams with the best models</p>
<p><strong>As always I want to thank Kaggle and the competition team for a well organized competition.</strong> </p> |
Google QUEST Q&A Labeling | Two BERTs are better than one (2nd place solution) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the sponsors and kaggle for hosting such an interesting and challenging competition! Our work is an amazing team effort and sprint over a very short time period of two weeks during which we made progress each day. <br>
Why two weeks? Because some of us just finished the TF2 QA competition and decided to reuse whatever NLP knowledge that was acquired there. Thanks to all teammates: @christofhenkel @cpmpml @philippsinger @dott1718 @maxjeblick. </p>
<p><strong>Please</strong> also give them upvotes, we made a lottery roll of who will post the solution and everyone contributed equally to it.</p>
<p>Our only regret is that the final score depends too heavily on the question_type_spelling column that has only 11 non zero target values in train data. Late submission experiments show we could easily improve, or degrade, our final rank by little tweaks on how we handle this column.</p>
<h2>Brief Summary</h2>
<p>Like most teams, we used pretrained models from huggingface implemented in pytorch. Our final blend consists of 5 transformer models that have several differences to push diversity. However, they all have in common the use of 2 separate inputs of up to 512 tokens each, representing the question and the answer respectively. These 2 inputs go either through the same transformer or through 2 separate transformers. We changed the targets to an ordinal representation (setting the (t > v) indicators to 1), resulting in 170 target columns, with each original column expanded into one binary column per unique value except the last. Predictions were then generated by calculating the expected value.</p>
<p>Most models were trained with differential learning rate, where the transformer gets a lr of 3-e5 and the model head(s) a lr of 0.005 and used a cosine schedule for training 3 epochs including warmup over one epoch. Optimizer were either AdamW or RAdam with weight decay of 0.01 using an effective batchsize of 8 (gradient accumulation).</p>
<p><strong>Dual roberta-base:</strong>
- double roberta-base
- added question title as input to answer-transformer
- individual head for each target
- mean of last layer output</p>
<p><strong>Dual roberta-base:</strong>
- double roberta-base
- added question title as input to answer-transformer
- individual head for each target
- mean of last layer output</p>
<p><strong>Siamese roberta-base with softmax layer weight:</strong>
- single roberta-base shared between the question and answer inputs
- mean over tokens of every layer's output
- trainable softmax-weighted sum of all layer outputs
- single prediction head</p>
<p><strong>Dual roberta-large:</strong>
- only use 256 tokens for question / answer
- double roberta-large
- only fits in memory with batchsize 1 and fp16</p>
<p><strong>Dual xlnet:</strong>
- same as dual roberta-base but with xlnet-base backbone</p>
<p>We used a simple average of probability predictions to ensemble our models.</p>
<h2>Longer Summary</h2>
<h3>Binary Encoded Targets</h3>
<p>Given the metric is rank-based, and given targets are not binary, it seemed important to be able to predict values that are neither 0 nor 1 correctly. We tried mse loss and other variants but results were not satisfactory. We then decided to use binary cross-entropy with binary targets. The first try was to use one-hot encoding of the targets, given that targets have a small number of distinct values. This wasn’t satisfactory either, because this representation loses the ordering of values.
We ended up using an encoding of the form <code>(t > v)</code> for all v values of target t, except last value. For instance, if a target t has unique values <code>[0, ⅓, ⅔, 1]</code> then we would get 3 binary targets: <code>t > 0, t> ⅓</code>, and <code>t > ⅔</code>. Assuming we get perfect probability predictions <code>p(0), p(⅓ )</code>, and <code>p(⅔)</code> for the binary targets, we can compute predictions for each value of the original target as:
<code>t(0)= 1 - p(0)
t(⅓)= p(0) - p(⅓)
t(⅔)= p(⅓) - p(⅔)
t(1)= p(⅔) - 0</code>
And the original target value as
<code>t = 0 * t(0) + ⅓ * t(⅓) + ⅔ * t(⅔) + 1*t(1)</code>
We keep that computation with the actual predictions <code>p(0), p(⅓ )</code>, and <code>p(⅔)</code> for the binary targets. <br>
When the values are evenly spaced, as in this example, then the formula simplifies into:
<code>t = mean(p(0), p(⅓ ), p(⅔))</code>
If we assume that what matters is only the order of values, then we can stick to this simplified form.
We used the simplified form throughout the competition as it was simpler to code, and faster. In our final two selected subs we reverted to the exact computation because it was better on both cv and public LB, but this didn’t change private LB significantly (0.0004 difference). </p>
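<p>A compact numpy sketch of this encoding and the exact expected-value decoding (assuming <code>unique_vals</code> is sorted ascending; names are illustrative):</p>
```
import numpy as np

def ordinal_encode(col_values, unique_vals):
    # one binary target per threshold: (t > v) for every unique value except the last
    return np.stack([(col_values > v).astype(float) for v in unique_vals[:-1]], axis=1)

def decode_expected_value(probs, unique_vals):
    # probs[:, k] approximates P(t > unique_vals[k]); recover P(t == v_k), then take E[t]
    p = np.hstack([np.ones((len(probs), 1)), probs, np.zeros((len(probs), 1))])
    point_mass = p[:, :-1] - p[:, 1:]
    return point_mass @ np.asarray(unique_vals)
```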
<h3>Validation scheme</h3>
<p>Similar to most teams, we use a 5-fold GroupKFold on question body for fitting our models and evaluation. After a bunch of experiments we saw though, that this is not sufficient mainly due to the following aspects:
- Test data is different to training data in the sense that it only has one question-answer pair sampled out of a group of questions in train data. There can be stark noise for labels of the same question, which is why this needs to be addressed robustly.
- There are a few columns with very rare events and also a lot of noise within those rare events. The prime example is spelling which has a huge impact on CV and LB, but can completely blind your view when trying to judge the overall strength of your model.</p>
<p>That is why we settled after some time on the following full validation scheme:
- Use 5-fold GroupKFold on question body
- For each validation fold, sample 100 times randomly a single question-answer pair out of multiple questions.
- Calculate the median score across these 100 samples and report.
- Ignore spelling column.
- Final CV is a mean of 5 folds.
This setup also allowed us to properly test any type of postprocessing in a realistic manner as described next.</p>
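<p>Sketched in code (assuming <code>val_df</code> holds both the true target columns and the corresponding OOF prediction columns; <code>groupby(...).sample</code> needs pandas ≥ 1.1):</p>
```
import numpy as np
from scipy.stats import spearmanr

def truncated_cv_score(val_df, target_cols, pred_cols, n_samples=100, seed=0):
    scores = []
    for s in range(n_samples):
        # keep a single random question-answer pair per question, mimicking the test set
        sample = val_df.groupby("question_body", sort=False).sample(1, random_state=seed + s)
        col_scores = [spearmanr(sample[t], sample[p]).correlation
                      for t, p in zip(target_cols, pred_cols)]
        scores.append(np.nanmean(col_scores))
    return np.median(scores)
```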
<h3>Postprocessing</h3>
<p>Even though we had really strong models, post-processing was very important to us and we spent significant time on it to find a robust way that we can trust. It is based on threshold clipping.
For each target column, we separately attempt to find optimal thresholds from both sides (starting from lowest predictions and highest predictions) based on which we clip the data. So let’s assume our predictions look like <code>x=[0.01, 0.015, 0.02, 0.03, 0.04]</code> and our optimal thresholds are <code>coefs=[0.016, 0.029]</code> then we would clip the data with <code>np.clip(x, coefs[0], coefs[1])</code> leading to <code>x=[0.016, 0.016, 0.02, 0.029, 0.029]</code> effectively generating ties at the edges.</p>
<p>The CV setup from above gave us a perfect way to fully validate any of our approaches by simply generating the thresholds on training folds, and applying them on truncated samples of the validation folds. For final PP, we get the thresholds on full oof and apply them on the test set.
In a nutshell, our routine looks like the following:
- Sample 1000 times single question-answer pairs from multiple questions
- Generate thresholds that optimize the median score of all 1000 samples</p>
<p>We tested quite a few different strategies, but this one was the most robust one we found and we are quite happy with it. Only for spelling, it was still a bit shaky, which is why our two final subs just differ in how we handle spelling column, one uses the calculated thresholds, and one hard-sets the 6 highest predictions in private LB to 1 and rest to 0 which is based on experiments on samples.
Unfortunately, if we would have post-processed all columns except spelling and would have kept spelling as is, we would have reached 0.432 on private LB as apparently spelling is very differently distributed on private. No CV experiment would have let us make this decision though. It would have been so much better though to not include this column into the competition.</p>
<h3>Architectures and training models</h3>
<p>Similar to other recent competitions at first it was quite difficult to beat public kernels, which we normally see as a kind of baseline. All “normal” tricks that worked in past computer vision or other NLP competitions (e.g. concat of Max and Mean pooling) did not improve cv. Also using a sliding window approach to capture more text did not work.
We saw a first big improvement when using 2 transformers instead of one (+0.01), but still, any slightly more complex architecture led to a worse result. That changed when we tried to freeze the transformers and only train the model head for 1 epoch before fine-tuning. With this approach, we were able to try more fancy things, and that is also how we finally came to the two main architectures.
We further improved on this 2-step approach by using different learning rates for the transformer and the head, together with a warm-up schedule, which allowed us to drop the freezing step entirely. We illustrate the two main architectures in the following figure (for the sake of simplicity we show the architecture for the original 30 targets; it was adapted to work with the binarized targets).
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1435684%2F0bb9b707b885e27cb1a09b0df625a0dd%2FBildschirmfoto%202020-02-11%20um%2012.11.51.png?generation=1581431173679092&alt=media" alt=""></p>
<h3>Dual Transformer:</h3>
<p>The upper architecture shows the dual transformer model, a combo model where one transformer handles the question text while a second transformer handles the answer text. The output of the last layer is then averaged over the 512 tokens and both resulting tensors are concatenated. The resulting representation is then fed into 30 small heads of 2 fully connected layers each, which produce the target predictions. Having 30 little heads enables each target to have its own head and gain individuality.</p>
<h3>Siamese Transformer with soft weighted layers:</h3>
<p>It is inspired by the <em>Elmo paper</em>, where the final embedding representation is a weighted average of all LSTM layers. We use the output of every layer of a single transformer model in which we put 512 question and 512 answer tokens. For roberta-base that will give us twelve 512x768 tensors (for roberta large it would be 24 512x1024 tensors). We then average over the 512 tokens for each layer which results in twelve 768 representations. We then take a weighted sum of these 12 representations (where the weights for adding the representations are trainable!). This results in a 768 representation. The weighted average of all layer outputs enables to capture low level features in the final representation which was quite important for some answer related targets. Finally, we add a single prediction head for getting our targets. </p>
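<p>A rough PyTorch sketch of this weighted-layer pooling (simplified: the mean pooling here ignores padding, and the backbone name and sizes are placeholders):</p>
```
import torch
import torch.nn as nn
from transformers import AutoModel

class SiameseWeightedLayers(nn.Module):
    def __init__(self, name="roberta-base", n_targets=30):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(name, output_hidden_states=True)
        n_layers = self.backbone.config.num_hidden_layers
        self.layer_logits = nn.Parameter(torch.zeros(n_layers))   # trainable layer weights
        self.head = nn.Linear(2 * self.backbone.config.hidden_size, n_targets)

    def encode(self, ids, mask):
        hidden = self.backbone(input_ids=ids, attention_mask=mask).hidden_states[1:]
        pooled = torch.stack([h.mean(dim=1) for h in hidden], dim=0)   # (layers, batch, hidden)
        w = torch.softmax(self.layer_logits, dim=0).view(-1, 1, 1)
        return (w * pooled).sum(dim=0)                                 # softmax-weighted layer sum

    def forward(self, q_ids, q_mask, a_ids, a_mask):
        q = self.encode(q_ids, q_mask)
        a = self.encode(a_ids, a_mask)
        return self.head(torch.cat([q, a], dim=-1))
```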
<p>Our final ensemble contains 5 models which belong to the one or the other architecture, and only differ in used pretrained backbone.
We ended up with
- 2x dual roberta-base
- dual roberta-large (2x 256 tokens)
- dual xlnet-base
- siamese roberta-large with weighted averaged layers
as this combination had the best cv score while fitting in the 2h kernel runtime requirement. </p>
<p>We even trained one dual roberta-large model, which on a V100 can only be trained with a batch size of 1 and fp16, and it was our best single model. Although this model could not fit into the ensemble due to runtime reasons, the promising CV results led us to a daring experiment: reducing the number of tokens used. There we saw something quite surprising. We could reduce the number of tokens to 256 for the question as well as for the answer without losing much quality. </p>
<p>Most models were trained with differential learning rate, where the transformer gets a lr of 3-e5 and the model head(s) a lr of 0.005 and used a cosine schedule for training 3 epochs including warmup over one epoch. Optimizer were either AdamW or RAdam with weight decay of 0.01 using an effective batchsize of 8 (gradient accumulation).</p>
<h3>Blending</h3>
<p>Our submission includes an equally weighted blend between all of above mentioned 5 models (leading to 25 test predictions with 5-fold). The blend is conducted on raw expected values calculated as described above. We experimented quite a bit with ranked blending as it seemed natural, but results were always worse.
Our final blend is both our best one on CV, public LB and private LB meaning that we had a robust CV setup in the end and our selection was solid.</p>
<h3>Wrapping things up and putting into kernel</h3>
<p>Training was conducted offline and inference in kernel. Our final runtime was close to two hours and we had to spend some efforts to squeeze the 5 models in. All our models are implemented in Pytorch using the amazing Huggingface library. Fitting was done either locally or on cloud providers. </p>
<p>Thanks for reading.</p> |
Google QUEST Q&A Labeling | 30th Sliver Solution (Post-Processing Magic Discovered) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all the winners! Learnt more about BERT-based model architectures in this competition, and would like to share some of my takeaways.</p>
<h2>Text Preprocessing</h2>
<p>I only joined the competition in the last two weeks, therefore I didn't spend too much time on the preprocessing step, but some simple tricks did improve the score. I assumed that the built-in tokenizers from the pretrained BERTs could do a good job of any further preprocessing.</p>
<ul>
<li>Remove extra white spaces to make text more dense</li>
<li>Unescape html entities, like <code>& lt;</code>, <code>& equals;</code>, <code>& gt ;</code>, ...</li>
<li>Extract plain text from html tags if found (using <code>get_text()</code> of BeautifulSoup)</li>
<li>Remove words starting with <code>\</code> and <code>$</code> (mostly for Latex keywords and some syntax keywords)</li>
</ul>
<h2>Custom BERT Models for Ensembles</h2>
<p>As a fan of ensemble learning, I always believe that the <strong>model diversity</strong> is the key for success. I trained 5 BERT-based models with slightly different custom layers, but they share the same structure for BERT embeddings: <code>[CLS] title [SEP] question [SEP]</code> for question text and <code>[CLS] answer [SEP]</code> for the answer text. I only used the last hidden state output embeddings of <code>[CLS]</code> token from BERT models to combine with other layers (pooler output performed worse).</p>
<ul>
<li><p><strong>Topology 1: Roberta-Base, Xlnet-Base-Cased</strong></p>
<ul><li>2 BERT embeddings (CLS): q_embed, a_embed</li>
<li>3 categorical embeddings: Concat(cate_embeds) -> Dense(128, relu)</li>
<li>2 separate FC layer paths
<ul><li>Concat(q_embed, cate_embeds_dense) -> Dense(256, relu) -> Dense(21, sigmoid)</li>
<li>Concat(a_embed, cate_embeds_dense) -> Dense(256, relu) -> Dense(9, sigmoid)</li></ul></li></ul></li>
<li><p><strong>Topology 2: Roberta-Base, Bert-Base-Uncased, Bert-Base-Cased</strong></p>
<ul><li>2 BERT embeddings (CLS): q_embed, a_embed</li>
<li>2 separate FC layer paths
<ul><li>q_embed -> Dense(256, relu) -> Dense(21, sigmoid)</li>
<li>a_embed -> Dense(256, relu) -> Dense(9, sigmoid)</li></ul></li></ul></li>
</ul>
<p>I also discovered that splitting questions and answers into two separate fully-connected layer paths works better than mixing both. It makes sense to me, as the labeling of classes by the voters may focus on the content of <code>title+question</code> and <code>answer</code> separately. Categorical embedding layers for the <code>host</code>, the 1st token of <code>host</code> and the <code>category</code> columns contributed to the ensemble score.</p>
<p>The learning rate of all models is fixed to <code>2e-5</code>, also applied <code>ReduceLROnPlateau</code> for LR decay (factor=0.1) and a custom early stopping callback based on validation Spearman score.</p>
<p>The final model is a weighted average of those models with a post processing to optimize ranks.</p>
<h2>Determinism on TensorFlow 2.1</h2>
<p>Reproducibility had been an issue for tensorflow/keras, but this repo from Nvidia helped me control the determinism to a great degree! Now we can get almost the same result in multiple runs using the same random seed.
This gives us a clear view of the relative performance of all experiments, so we can gradually improve the models with the right setup and approaches.
<a href="https://github.com/NVIDIA/tensorflow-determinism">https://github.com/NVIDIA/tensorflow-determinism</a></p>
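<p>For reference, a minimal seeding setup in this spirit (the exact environment flags differ between TF versions; this follows the TF 2.1 / tensorflow-determinism recipe):</p>
```
import os, random
import numpy as np
import tensorflow as tf

def set_deterministic(seed=42):
    os.environ["TF_DETERMINISTIC_OPS"] = "1"   # request deterministic GPU kernels
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    tf.random.set_seed(seed)
```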
<h2>Post Processing Magic</h2>
<p>Lots of people were discussing what the actual trick/magic was that could boost the Spearman correlation score. I originally had no clue about it, but after studying the definition of Spearman correlation and the patterns inside the training set labels, I discovered that we could utilize fixed percentiles of label values to approximate the optimal rank in each class.</p>
<p>I searched from 1 to 100 as the divisor for fixed percentile intervals using out-of-fold prediction from one of the best ensembles. I finally chose 60 as the fixed divisor because it consistently boosted the score on both local CV and public LB (+~0.03-0.05).</p>
<p>The code is very simple, given the unique labels of training set as the distribution samples:
```
y_labels = df_train[output_categories].copy()
y_labels = y_labels.values.flatten()
unique_labels = np.array(sorted(np.unique(y_labels)))
unique_labels</p>
<p>array([0. , 0.2 , 0.26666667, 0.3 , 0.33333333,
0.33333333, 0.4 , 0.44444444, 0.46666667, 0.5 ,
0.53333333, 0.55555556, 0.6 , 0.66666667, 0.66666667,
0.7 , 0.73333333, 0.77777778, 0.8 , 0.83333333,
0.86666667, 0.88888889, 0.9 , 0.93333333, 1. ])
```</p>
<p>I created 60 optimal percentiles:
```
denominator = 60
q = np.arange(0, 101, 100 / denominator)
exp_labels = np.percentile(unique_labels, q)
exp_labels</p>
<p>array([0. , 0.08 , 0.16 , 0.21333333, 0.24 ,
0.26666667, 0.28 , 0.29333333, 0.30666667, 0.32 ,
0.33333333, 0.33333333, 0.33333333, 0.34666667, 0.37333333,
0.4 , 0.41777778, 0.43555556, 0.44888889, 0.45777778,
0.46666667, 0.48 , 0.49333333, 0.50666667, 0.52 ,
0.53333333, 0.54222222, 0.55111111, 0.56444444, 0.58222222,
0.6 , 0.62666667, 0.65333333, 0.66666667, 0.66666667,
0.66666667, 0.68 , 0.69333333, 0.70666667, 0.72 ,
0.73333333, 0.75111111, 0.76888889, 0.78222222, 0.79111111,
0.8 , 0.81333333, 0.82666667, 0.84 , 0.85333333,
0.86666667, 0.87555556, 0.88444444, 0.89111111, 0.89555556,
0.9 , 0.91333333, 0.92666667, 0.94666667, 0.97333333,
1. ])
```</p>
<p>And a mapping function to align BERT outputs to the closest percentile value.
```
def optimize_ranks(preds, unique_labels):
    new_preds = np.zeros(preds.shape)
    for i in range(preds.shape[1]):
        interpolate_bins = np.digitize(preds[:, i],
                                       bins=unique_labels,
                                       right=False)
        if len(np.unique(interpolate_bins)) == 1:
            # Use original preds
            new_preds[:, i] = preds[:, i]
        else:
            new_preds[:, i] = unique_labels[interpolate_bins]
    return new_preds

weights = [1.0, 1.0, 1.0, 1.0, 1.0]
oof_preds = val_ensemble_preds(all_val_preds, weights)
magic_preds = optimize_ranks(oof_preds, exp_labels)
blend_score = compute_spearmanr(outputs, magic_preds)
```</p>
<p>The Spearman correlation will become NaN if an output column contains only 1 unique value, because in that case the standard deviation is zero, which causes a divide-by-zero problem (submission error). The trick I used is to fall back to the original predictions from the BERT models for that column.</p>
<p>Here is a summary table of original scores versus magic-boosted scores:
| Model | Local CV without Magic | Local CV with Magic | Public LB with Magic | Private LB with Magic |
|------------------------|-----------------------:|--------------------:|---------------------:|----------------------:|
| Roberta-Base (T1) | 0.395972 | 0.414739 | 0.43531 | 0.40019 |
| Xlnet-Base-Cased (T1) | 0.392654 | 0.407847 | 0.42771 | 0.39609 |
| Roberta-Base (T2) | 0.398664 | 0.422453 | 0.43522 | 0.40242 |
| Bert-Base-Uncased (T2) | 0.389013 | 0.398852 | 0.41844 | 0.39075 |
| Bert-Base-Cased (T2) | 0.387040 | 0.400199 | 0.42026 | 0.38455 |
| Final Ensemble | 0.392669 | 0.438232 | 0.44238 | 0.41208 |</p>
<h2>Things That Didn't Work for Me</h2>
<p>They produced worse results on both local CV and LBs
- SpatialDropout1D for embeddings and Dense dropouts
- Separate BERT embeddings for title and question
- Batch normalizations for embeddings and dense layers</p>
<h2>Source Code</h2>
<p><strong>Final Submission Inference Code:</strong>
<a href="https://www.kaggle.com/markpeng/ensemble-5models-v4-v7-magic">https://www.kaggle.com/markpeng/ensemble-5models-v4-v7-magic</a></p>
<p><strong>Full Code:</strong>
<a href="https://github.com/guitarmind/kaggle_google_quest_qa_labeling">https://github.com/guitarmind/kaggle_google_quest_qa_labeling</a></p> |
TensorFlow 2.0 Question Answering | 30th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everyone.</p>
<p>This was my first real experience in kaggle.</p>
<p>Briefly, I want to say that I'm very glad that my team achieved 30th place in this competition. I want to thank my teammates <a href="/xiaokangwang">@xiaokangwang</a> @xiao-xiao for all the help and work during this competition. Also thanks for all the answers given during the competition explaining everything and giving awesome ideas!</p>
<p>Without further ado, the solution mainly focused on three things:
-> Fine-tuning Albert Large
-> DataAugmentation
-> Understanding thresholds.</p>
<p>The model used was <strong>Albert xLarge version 2</strong>, trained on SQuAD 2.0 and fine-tuned on the tiny-dev set. One of the important things was the <strong>data augmentation</strong> used for training: modifying the document_text by replacing words with synonyms from the WordNet corpus.</p>
<p><img src="https://i.imgur.com/IrA2CmI.png" alt="Synonyms Examples"></p>
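<p>A minimal sketch of WordNet-based synonym replacement (not the exact augmentation code we used; the replacement probability is an assumption and <code>nltk.download('wordnet')</code> is required):</p>
```
import random
from nltk.corpus import wordnet

def synonym_replace(text, p=0.1):
    out = []
    for w in text.split():
        synsets = wordnet.synsets(w)
        if synsets and random.random() < p:
            # collect candidate lemmas for this word and pick one at random
            lemmas = {l.name().replace("_", " ") for s in synsets for l in s.lemmas()}
            lemmas.discard(w)
            out.append(random.choice(sorted(lemmas)) if lemmas else w)
        else:
            out.append(w)
    return " ".join(out)
```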
<p>There wasn't much magic here. We also tried the xxLarge model, but it could only run with a huge doc_stride and gave bad results. </p>
<p>Removing html tags helped as well.</p>
<p>Understanding the data and the output of the model was an important step.
Long answers appeared in 50% of the examples,
and short answers in 35% of them.</p>
<p><img src="https://i.imgur.com/eHTopaB.png" alt="Output score"></p>
<p>In this regard, we used threshold values based on the outputs, keeping 50% of all the results for the long answers and 35% for the short answers. This was achieved by ordering the list of scores and selecting the value in the middle for the long answers, and the value at the 35% mark for the short answers. For example, given an output list [1,2,3,4,5] we selected the threshold 3 for the long_answer and 2 for the short_answer.</p>
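<p>In code, the rule amounts to something like this sketch (arrays and names are illustrative):</p>
```
import numpy as np

def pick_thresholds(long_scores, short_scores):
    long_thr = np.percentile(long_scores, 50)    # keep the top 50% of long-answer candidates
    short_thr = np.percentile(short_scores, 65)  # keep roughly the top 35% of short-answer candidates
    return long_thr, short_thr
```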
<p>I hope this gives good ideas for future work. </p>
<p>Best Regards,
Pedro Azevedo</p> |
Google QUEST Q&A Labeling | 5th solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all I want to thank my teammates here. I will briefly introduce our solution below.</p>
<h2>models</h2>
<p>1. Model structure. We designed different model structures, mainly referring to the solution of the CCF internet sentiment analysis competition, concatenating different CLS embeddings (see the sketch below). Here is the link: <a href="https://github.com/cxy229/BDCI2019-SENTIMENT-CLASSIFICATION">BDCI2019-SENTIMENT-CLASSIFICATION</a> <br>
2. Through analysis we found that the 30 labels split into two groups: question-related evaluations and answer-related evaluations. To make the model learn better, we designed a Q model to handle the question-related labels and an A model to handle the answer-related labels. This works better than a single QA model. <br>
3. Different models tested: roberta base > roberta large > xlnet base > bert base > t5 base. </p>
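<p>For the CLS-concatenation idea in point 1, a rough PyTorch sketch (the number of concatenated layers and the backbone are assumptions):</p>
```
import torch
import torch.nn as nn
from transformers import AutoModel

class ConcatClsHead(nn.Module):
    def __init__(self, name="roberta-base", n_targets=21, n_last_layers=4):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(name, output_hidden_states=True)
        self.fc = nn.Linear(self.backbone.config.hidden_size * n_last_layers, n_targets)
        self.n_last_layers = n_last_layers

    def forward(self, input_ids, attention_mask):
        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask)
        # concatenate the [CLS] vector from the last few hidden layers
        cls_per_layer = [h[:, 0] for h in out.hidden_states[-self.n_last_layers:]]
        return self.fc(torch.cat(cls_per_layer, dim=-1))
```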
<h2>Post-processing</h2>
<ol>
<li>After analyzing the evaluation metric and the competition data, we reset predictions to 0/1 for selected columns. This improved the LB by 0.05 or more. </li>
</ol>
<h2>Features</h2>
<p>1. We wanted our model to learn features beyond the text alone, so we added host and
category embedding features and other statistical features. This improved both CV and LB by about 0.005. </p>
<h2>Text clean</h2>
<p>1. We also did text cleaning to remove stop words and some symbols; this improved the score by about 0.002. </p>
<h2>Stacking</h2>
<p>1. Our best private model scored 0.42787, but we didn't select it. It is a stack of roberta large, roberta base and xlnet base: <br>
<code>blend.loc[:,targets] = roberta_large_oof_test.loc[:,targets].values*0.4+0.3*roberta_base_oof_test.loc[:,targets].values+\
xlnet_base_oof_test.loc[:,targets].values*0.3</code> </p>
<p>Stacking improved both CV and LB by about 0.02; it helped a lot.</p>
Google QUEST Q&A Labeling | 41-th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1><strong>DATA PREPROCESSING</strong></h1>
<p><strong>NER</strong>: we pretrained a NER model (bert-base) to replace code blocks with the special token [CODE]
<strong>Reg.exp.</strong>: we cleaned up LaTeX snippets and replaced them with the [MATH] token
<strong>Crop strategy</strong>: from both sides (we cut the middle part if the sequence was longer than 450 tokens)
<strong>Max. seq. lengths</strong>: 450</p>
<h1><strong>MODEL PARAMETERS</strong></h1>
<p><strong>Model architecture</strong>: roberta-large
<strong>Pretrained</strong>: Yes (from transformers library)
<strong>Model design</strong>: siamese bi-encoder (dot-product of question-answer pair -> sigmoid)
<strong>Pooling strategy</strong>: [CLS] token</p>
<h1><strong>TRAINING PARAMETERS</strong></h1>
<p><strong>Optimizer</strong>: AdamW (from transformers library)
<strong>Max grad. norm</strong>: 5
<strong>Epochs</strong>: 5
<strong>Batch size</strong>: 8 (1 * 8 accum. steps)
<strong>LR</strong>: 2.5e-05
<strong>LR-scheduler</strong>: linear
<strong>Warmup steps</strong>: 50
<strong>Seed</strong>: 228 :)</p>
<h1><strong>ENSEMBLE PARAMETERS</strong></h1>
<p><strong>Blend of</strong>: 7-fold models
<strong>Fold strategy</strong>: GroupKFold
<strong>Blend strategy</strong>: probabilities averaging</p>
<h1><strong>POST-PROCESSING</strong></h1>
<p>For each target:
1. Sort values (ascending)
2. Initialize an anchor value (the smallest one)
3. Iterate over each value and calculate the difference between it and anchor
4. If difference > TARGET-EPSILON, then collapse all values between anchor and current value into one bin
5. Update anchor (set current value)
6. Repeat from 3 until end
Finally, normalize bins to (0, 1) range</p>
<p>TARGET-EPSILONs (for each) target estimated via out of fold predictions (from 7 folds)</p>
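<p>A sketch of this procedure for one target column (<code>eps</code> stands for the TARGET-EPSILON estimated on out-of-fold predictions):</p>
```
import numpy as np

def collapse_bins(preds, eps):
    order = np.argsort(preds)
    binned = np.empty_like(preds, dtype=float)
    anchor = preds[order[0]]
    bin_id = 0
    for idx in order:
        if preds[idx] - anchor > eps:      # difference exceeds epsilon: start a new bin
            anchor = preds[idx]
            bin_id += 1
        binned[idx] = bin_id
    return binned / max(binned.max(), 1)   # normalize bins to (0, 1)
```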
<h1><strong>OTHER TRICKS</strong></h1>
<p><strong>SWA</strong>: implemented, but did not have time to submit
<strong>Categorical features</strong>: was concatenated with bi-encoder output. No improvements
<strong>Additional data</strong>: few additional data was added to fill up some flawed targets. Did not help
<strong>Combine crop strategies</strong>: different crop strategies (from the left, right, both, and middle parts) were combined as TTA augmentations. Did not help
<strong>Different pooling strategies</strong>: average pooling, average pooling of N last layers. [CLS] was the best one
<strong>Freezing</strong> differential, full, only n layers, etc. No freezing was the best</p>
<h1><strong>Reproducibility</strong></h1>
<p>All code to reproduce our experiments could be found here:
<a href="https://github.com/alexeykarnachev/kaggle_google_qa_labeling">https://github.com/alexeykarnachev/kaggle_google_qa_labeling</a></p> |
Google QUEST Q&A Labeling | 9th public and private solution: 30 models, 1 for each column? | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I'd like to thank the organizers and my teammate, <a href="/murphy89">@murphy89</a>, for this competition, and congrats to all the teams participating. I know the metric is not ideal, but I guess we all gained something from the competition.
Secondly, well, that is a click bait title, but we did do something of the sort when it came to the blending part. Anyway, here is the overview of our solution:</p>
<h1>Overview</h1>
<ul>
<li><p>There was no special pre-processing of the text, just basic cleaning like replacing <code>&amp;</code> with & and <code>&lt;</code> with <, and some more broken signs.</p>
<h2>Models</h2></li>
<li><p>We all split the models into 2 parts: models for predicting first 21 question columns and for last 9 answer columns.</p>
<h3>My models</h3></li>
</ul>
<p><strong>Question part</strong></p>
<p>Question models take only text from question title and question body. If the text length is long, use head and tail method.
Using special tokens like site pages, categories...
Freezing last 4 layers, concatenating them and feeding into 1-layer LSTM before taking the average pool and max pool. Then 1-2 more FC layers at the end.
I used bert base, bert large and xlnet. Then from each model I vary a bit like different layers to freeze or instead of concatenating layers, just take the sum.
The sum of BCELogitsLoss over each column worked better for me. I guess it's similar to a weighted loss.</p>
<p><strong>Answer part</strong></p>
<p>Answer models take text from all 3 columns and also use head-tail method.
Not using special tokens, as my CV is worse with special tokens in answer part.
Freezing last 4 layers, and doing almost the same as the question part, except that this time it's mostly a 2-layer BiLSTM.
Mostly bert base and bert large of simply varying structures from the main model.
Normal BCELogitsLoss. </p>
<h2>Blending</h2>
<p>I believe this is the key part in boosting our score. We discovered the right blending strategy only on the last day, when we saved the oof predictions and cross-validated offline. Although our single models have different CV scores (0.411-0.426), they performed better when blended together at certain weights. Later I found out that each model is better at predicting certain columns, so I used a heuristic search to find the best CV score for each column. For those columns that depend heavily on post-processing, the search is done on the final values after post-processing. This is why the title reads 30 models: I believe tuning specific models for specific columns works better together with the post-processing part. In fact, some models with a lower initial CV score got a higher weight in this step. The best CV score after blending is 0.435, and after post-processing 0.478.
The only regret is that when we were doing this, it was only 4 hours until the deadline, so some of our columns are not fully optimized in the final solution. In fact, when I tried again after the deadline, our CV could reach 0.482 with post-processing.</p>
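<p>A simplified sketch of such a per-column weight search over OOF predictions (the real search also evaluated columns after post-processing; the random-search scheme here is an assumption):</p>
```
import numpy as np
from scipy.stats import spearmanr

def search_column_weights(oof_preds, y_true, n_iters=200, seed=0):
    # oof_preds: list of (n_samples, n_targets) arrays, one per model
    rng = np.random.RandomState(seed)
    n_models, n_targets = len(oof_preds), y_true.shape[1]
    best_w = np.full((n_targets, n_models), 1.0 / n_models)
    for j in range(n_targets):
        best = spearmanr(y_true[:, j],
                         np.mean([p[:, j] for p in oof_preds], axis=0)).correlation
        for _ in range(n_iters):
            w = rng.dirichlet(np.ones(n_models))         # random weights summing to 1
            blend = sum(w[m] * oof_preds[m][:, j] for m in range(n_models))
            score = spearmanr(y_true[:, j], blend).correlation
            if score > best:
                best, best_w[j] = score, w
    return best_w
```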
<h2>Post-processing</h2>
<p>Our post-processing is not different from others'; if anything, I think it's even less effective, since it's just simple rounding of the predictions of some columns to the nearest values with certain thresholds. </p>
<h2>What did not work for me</h2>
<ul>
<li>Translation augmentation. I had translated back and forth in French and Italian, but adding them did not help.</li>
<li>SWA.</li>
<li>Multi-dropout (for me), though it worked for my teammate Morphy's architecture.</li>
<li>Other models: xlnet-large, albert... The scores are worse in my case.</li>
</ul> |
Google QUEST Q&A Labeling | 47th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to Google as the host and to the Kaggle team! I really learned many NLP techniques, especially around BERT, from this competition.</p>
<h1>Key point of final model</h1>
<ul>
<li>Use the first and last parts of the text if the length exceeds max_seq_length</li>
<li>Averaging 4 BERT base-uncased model</li>
<li>Postprocessing: Fit target distribution with train data (a part of target columns)
<ul><li>Detail is in below.</li></ul></li>
<li>Concatenate pool_output & sequence_output from bert_layer for GlobalAveragePooling1D</li>
<li>10 fold with MultilabelStratifiedKFold (thanks <a href="/ratthachat">@ratthachat</a> !)</li>
<li>Strong heart for solo participation</li>
</ul>
<h1>Didn’t work for me</h1>
<ul>
<li>Pre-training with stackoverflow data (150,000 sentences)</li>
<li>Multi sample dropout</li>
<li>The other models
<ul><li>Roberta</li>
<li>Albert</li>
<li>XLNet</li></ul></li>
<li>Concatenate question only output & answer only model </li>
<li>Concatenate category MLP with BERT model</li>
<li>LSTM head instead of Dense with BERT model</li>
<li>Freeze half of BertLayer for reducing model complexity</li>
<li>Skip half of BertLayer for reducing model complexity</li>
<li>USE + MLP</li>
<li>LSTM model with gensim embedding</li>
<li>custom loss
<ul><li>BCE & MSE</li>
<li>focal loss</li></ul></li>
<li>Word count feature</li>
<li>Concat title and question_body as a one block (removing ["SEP"] between them)</li>
<li>Up-sampling for imbalance target column</li>
</ul>
<h1>My postprocessing class</h1>
<p>```
class OptimPreds(object):
    def __init__(self, df_train):
        self.score_range_dict = {}
        for i, c in enumerate(df_train.columns[11:]):
            cnt = df_train[c].value_counts(normalize=True).sort_index()
            self.score_range_dict[i] = [cnt.index.values.tolist(), cnt.values.tolist()]

    def predict(self, preds, i):
        return pd.cut(rank_average(preds),
                      [-np.inf] + np.cumsum(self.score_range_dict[i][1])[:-1].tolist() + [np.inf],
                      labels=self.score_range_dict[i][0])

def optim_predict(pred, do_round=True, target=[2, 5, 7, 9, 11, 12, 13, 14, 15, 16, 19]):
    for i in range(pred.shape[1]):
        if i in target:
            pred[:, i] = optim.predict(pred[:, i], i)
    return pred
```</p>
Santa's Workshop Tour 2019 | How to win Santa's Workshop Tour | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa's Workshop Tour 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, I would like to thank <a href="/inversion">@inversion</a> for preparing this nice competition. It was really fun.
Second, sorry that it took such a long time until I wrote this post.
Third, yes most of you were right. I also used mixed integer programs (MIPs).</p>
<p>But now let's start!</p>
<p>My way to an optimal solution started with recognizing that the “nonlinear” objective function can be linearized by enumerating all possible combinations of people who may visit on one day and on the day after.
After that, I had a short look into the data and believed that not many families will be assigned to a non preferred day. Nevertheless, I didn't want to remove this possibility completely. That's what led me to the following</p>
<h1>Mixed Integer Linear Programming Relaxation</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F478286%2Fe46f34c1bb4058e65a02dc2a2404d14c%2Fmip_relaxation.png?generation=1579800775368803&alt=media" alt=""></p>
<ul>
<li><code>x_{f,d^f_p}</code> is the binary variable which is <code>1</code> iff family <code>f</code> is assigned to its preference <code>p</code> for <code>p=1,...,10</code> and <code>x_{f,d^f_{11}}</code> is <code>1</code> iff the family is not assigned to one of its prefered days, where <code>d^f_{11}</code> is set to “day” <code>101</code>.</li>
<li><code>y_{d,i,j}</code> is the binary variable which is <code>1</code> iff day <code>d</code> has <code>i</code> and day <code>d+1</code> has <code>j</code> people assigned. Note that we also introduced variables <code>y_{100,i,j}</code> for <code>i</code>≠<code>j</code>, for the ease of presentation. In this setting we can fix every variable <code>y_{100,i,j}</code> to <code>0</code> for <code>i</code>≠<code>j</code>. Clearly, I didn't add these variables in my implementation.</li>
<li><code>z_d</code> is the “continuous” variable representing how many people are assigned to day <code>d</code>.</li>
<li>Term <code>(1)</code> is the objective function which we want to minimize, where <code>pc(p)</code> represents the preference and <code>ac(i,j)</code> the accounting cost.</li>
<li>Equation <code>(2)</code> ensures that each family is either assigned to one of its prefered days or to “day” <code>101</code> representing that the family is not assigned to one of its preferences.</li>
<li>Equation <code>(3)</code> ensures that day <code>d</code> is assigned to a number of people visiting on this day and to a number of people visiting on day <code>d+1</code>.</li>
<li>Equation <code>(4)</code> is in some way “flow conservation” ensuring that the number of people of consecutive days coincide.</li>
<li>Equation <code>(5)</code> couples the consecutive day variables with the day quantity variables.</li>
<li>Inequality <code>(6)</code> ensures that the number of people assigned to a day is at least the number of people assigned to that day which they prefer, where <code>n_f</code> is the number of family members of family <code>f</code>.</li>
<li>Equation <code>(7)</code> ensures that the number of people assigned to all days equals the number of family members. </li>
</ul>
<p><strong>Note, that in general a solution of this MIP don't have to be feasible for Santa's problem. Furthermore, I believe it can be as challenging as to solve Santa's problem from scratch to make a solution of this MIP feasible, if the data is bad .</strong></p>
<p>Nevertheless, the data did not look that bad and it turned out that I never had to “repair” a solution.</p>
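<p>For readers who want to experiment, here is a minimal sketch in PuLP of the assignment part only (preference cost and occupancy bounds); I used Gurobi, and the accounting-cost linearization via the <code>y_{d,i,j}</code> variables and the “day 101” fallback are omitted, so all names here are illustrative:</p>
```
import pulp

N_DAYS = 100

def build_assignment_model(families, pref_cost):
    # families: dict family_id -> (n_members, list of preferred days)
    # pref_cost(choice_rank, n_members) -> preference penalty
    prob = pulp.LpProblem("santa_pref", pulp.LpMinimize)
    x = {(f, p): pulp.LpVariable(f"x_{f}_{p}", cat="Binary")
         for f, (n, prefs) in families.items() for p in range(len(prefs))}
    for f, (n, prefs) in families.items():
        prob += pulp.lpSum(x[f, p] for p in range(len(prefs))) == 1      # one day per family
    prob += pulp.lpSum(pref_cost(p, families[f][0]) * x[f, p] for (f, p) in x)
    for d in range(1, N_DAYS + 1):
        load = pulp.lpSum(families[f][0] * x[f, p]
                          for (f, p) in x if families[f][1][p] == d)
        prob += load >= 125                                              # occupancy bounds
        prob += load <= 300
    return prob, x
```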
<p>After roughly two hours Gurobi found a high quality solution and naturally I immediately submitted it to the leaderboard. Due to a mistake in my implementation this resulted in a solution which was scored with a value of ><code>34145044298</code>, the overall worst score which was shown on the leaderboard in the whole duration of this competition.
After fixing the bug I got a solution with a value ≤<code>74589</code>. After three hours Gurobi produced a solution with a value of ≤<code>70913</code> which did not further improve within 24 hours.
Nevertheless, I did not use these solutions, since I was working in parallel on a reduction of the number of variables.</p>
<h1>Lower Bounds, Upper Bounds, and Size Reduction</h1>
<p>Since the number of variables is huge, I was interested in lower bounds on the preference costs (<code>LB_pref</code>) and upper bounds on the optimal value for the whole problem (<code>UB_opt</code>). The reason for this is, that given <code>LB_pref</code> and <code>UB_opt</code> we can bound the accounting costs from above by <code>UB_opt - LB_pref</code>. This led me to the following MIP formulation, only optimizing the preference costs (8). This program was solved to optimality in less than a minute with <code>LB_pref</code>≥<code>43622</code>.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F478286%2F13a83bd6a1317ec6bf49c1e162073fc4%2Fmip_preference.png?generation=1579808394195922&alt=media" alt=""></p>
<p><strong>So, I had a lower bound on the preference cost: <code>43622</code></strong></p>
<p>For the upper bound we are lucky, since kaggle provides public leaderboards. At this time <a href="/wataorz">@wataorz</a> was in top position with a solution score of ≤<code>70888</code>. Thus, the accounting costs are bounded from above by <code>27266</code>.
To further improve the lower bound on the preference costs, I removed all variables <code>y_{d,i,j}</code> with a larger accounting penalty than <code>UB_opt - LB_pref</code>≤<code>27266</code>, added a constraint bounding the accounting costs (9), and solved the following program.
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F478286%2F1852f21aee7d8da52418294cec548d27%2Fimproved_lower_bound.png?generation=1579808583934573&alt=media" alt=""></p>
<p><strong>This program runs 10 minutes and gives a lower bound on the preference costs of <code>LB_pref</code>≥<code>54412</code></strong></p>
<p>Note that if we obtain an improved lower bound on the preference costs or an improved upper bound on the optimal costs, we can rerun this program to possibly get an even better lower bound on the preference costs. In particular, I could have rerun the program right away since our lower bound had just improved, but I decided not to.</p>
<p>After that, I removed all variables <code>y_{d,i,j}</code> with penalty strictly larger than <code>UB_opt - LB_pref</code> from the first MIP and solved it.</p>
<p><strong>This produces a solution with value ≤<code>70134</code> in roughly 70 minutes.</strong></p>
<p>Since I had now a good quality solution, I decided to try an improvement step.</p>
<h1>MIP Large Neighbourhood Search</h1>
<p>The last “tool” I used was a MIP representing a “large” neighbourhood search. Given a feasible solution, it restricts the number of people for each day to a given threshold <code>TR</code> from the number given by the initial solution (10). The day load a day <code>d</code> of the start solution is represented by <code>l_d</code>. Again, I removed all variables <code>y_{d,i,j}</code> with penalty strictly larger than <code>UB_opt - LB_pref</code> and all variables not within the threshold.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F478286%2F8856f909c45e84298849738e9d59ed23%2Flns.png?generation=1579809169263321&alt=media" alt=""></p>
<p>I am not sure, but I think I ran this program with threshold values <code>20</code>≤<code>TR</code>≤<code>120</code>, which led me to an optimal solution. For one large <code>TR</code> I solved the program to optimality, so I knew that in this huge neighbourhood there is no better solution. I had little hope of solving the program to optimality with larger values. Thus, I decided to try to “prove” optimality and ran the first MIP with all the bounds I had and my best solution. It took about a day, but then Gurobi proved optimality. For this run I changed the parameters of Gurobi to work aggressively on the bound.</p>
<p>Note that my work was not as straightforward as I presented it here. I did many things in parallel and reran the above MIPs with new start solutions and improved bounds.</p>
<p>I hope you have fun with this post.</p>
<p><strong>Please let me know, if you see any mistakes or have questions!</strong></p> |
Santa's Workshop Tour 2019 | Our trick | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa's Workshop Tour 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>1) Use LP 67xyz LP solution
2) Core trick in that LP is that <code>sum_i M_d[i][j] == sum_k M_{d+1}[j][k]</code> (which is not needed but pushes LP higher)
3) Our trick for easier cutting and branching is to have <code>sum_i M_d[i][j]</code> as separate variable, so solver can branch on it.</p>
<p>Side note:
We spent too much time on a formulation with 175 variables per day and 175*175 constraints. We had tricks like convex hulls, lazy constraints, ... but that was not that great.</p>
Santa's Workshop Tour 2019 | MIP formulation on Gurobi, CPLEX and CBC+PuLP | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa's Workshop Tour 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats winners! I was a little bit late...</p>
<p>I've shared a simple MIP formulation on Gurobi, CPLEX, and CBC+PuLP.</p>
<p><a href="https://github.com/tkm2261/kaggle_santa2019_youtube">https://github.com/tkm2261/kaggle_santa2019_youtube</a></p>
<p>This code is a little bit different from the one I used but simple. It might be better for learning how to formulate and solve mixed-integer programming problems.</p>
<p>The difference is that I added several redundant constraints to lift the bound.</p>
<p>I've also uploaded a tutorial video and its slide. Although It's Japanese, maybe you can understand some parts. I'd be happy to answer any questions in English in this thread.</p>
<p><a href="https://youtu.be/0AdaTRU--YE">https://youtu.be/0AdaTRU--YE</a>
<a href="https://www.slideshare.net/tkm2261/kaggle-santa-2019mip">https://www.slideshare.net/tkm2261/kaggle-santa-2019mip</a></p> |
2019 Data Science Bowl | 21st place solution (link to R kernel) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to the PBS Kids and Kaggle teams for this great competition, and congratulations to the winners and medallists.</p>
<p>I couldn't work on this competition as much as I would have liked, so it felt really good to get a medal. It's a bit disappointing to be so close to gold, but I can't complain, as my solution is very simple and I think I was lucky with the final result.</p>
<h2>Feature engineering</h2>
<p>I generated 754 features, most of them very similar to the ones you can find in public kernels. For sessions of the type "Game" I created features taking into account the different rounds (most games have three rounds).</p>
<h2>Feature elimination</h2>
<p>I only dropped duplicated and very similar (>.99 equal values) variables. I ended up with 649 features.</p>
<h2>Model</h2>
<p>I used the <a href="https://www.kaggle.com/c/prudential-life-insurance-assessment/discussion/19010">1st place solution</a> and the <a href="https://www.kaggle.com/c/prudential-life-insurance-assessment/discussion/19003">2nd place solution</a> of the <a href="https://www.kaggle.com/c/prudential-life-insurance-assessment">Prudential Life competition</a> as inspiration. My model consists of, first, three lgb binary classifiers (0 vs 123, 01 vs 23, 012 vs 3) with 5-fold CV. Then, I use the results of these models plus the assessment title as features of a linear regression model to get the final continuous prediction.</p>
<h2>Threshold definition</h2>
<p>I use the <code>optim</code> R function with the Nelder-Mead algorithm. To get the initial coefficients I used the golden section method that is explained <a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/76107#480970">here</a>. I usually got a better score with this two-step process than using either of the two methods on its own. </p>
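<p>For reference, the same two-step idea can be sketched in Python with scipy (my actual code is in R; this only illustrates the Nelder-Mead thresholding step on the continuous predictions):</p>
```
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import cohen_kappa_score

def apply_thresholds(cont_pred, thr):
    return np.digitize(cont_pred, np.sort(thr))          # map continuous preds to classes 0..3

def neg_qwk(thr, cont_pred, y_true):
    return -cohen_kappa_score(y_true, apply_thresholds(cont_pred, thr), weights="quadratic")

def fit_thresholds(cont_pred, y_true, init=(0.5, 1.5, 2.5)):
    res = minimize(neg_qwk, x0=np.array(init), args=(cont_pred, y_true), method="Nelder-Mead")
    return np.sort(res.x)
```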
<h2>.</h2>
<p>You can see the kernel <a href="https://www.kaggle.com/artmatician/21st-place-solution?scriptVersionId=27558325">here</a>.</p> |
Santa's Workshop Tour 2019 | John does California Odyssey (with code) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa's Workshop Tour 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I was very lucky to team up with Alain and Stéphane. Without them I would not have entered this competition, as I was already engaged in another one. And without them I would certainly not have found the optimum that fast, and therefore would not have gotten a gold medal. Contributions from team members were all significant. We came to this from different points of view, which was very fruitful. </p>
<p>We started from this notebook <a href="https://www.kaggle.com/vipito/santa-ip/">https://www.kaggle.com/vipito/santa-ip/</a></p>
<p>It has several interesting components:
- A LP model for family to days assignment. <br>
- Max constraints on difference between successive days occupancy
- A local search to improve solutions</p>
<p>We then explored a number of variations and improvements. When one of us was finding a solution it was shared with the other ones so that they can start from it in their next run. We iterated over many models and runs. Typically we would not let something run for more than a day. Indeed, the better the starting point the better the end point! And also, when using MIP models, the better the starting point the smaller the model as many variables can be set to 0 upfront.</p>
<p>The things we tried include: data/solution analysis, local search, LNS, linearization, approximation, simplification, symmetry breaking. Let's look at each of those, in no particular order.</p>
<p><strong>Local Search</strong></p>
<p>We started from the stochastic search of the public kernel, but then moved to a search similar to what is used in max-flow algorithms: find a chain of family reassignments that keeps occupancy mostly unchanged and improves the cost. For instance, comparing two solutions found early in our endeavor, we saw that they differ in only a few places:</p>
<pre><code>Family: 261 83 - 67
Family: 779 67 - 7
Family: 798 35 - 45
Family: 2926 1 - 35
Family: 3215 25 - 83
Family: 4716 45 - 1
Same: 4994
</code></pre>
<p>If we look carefully, we see that moves can be chained:</p>
<pre><code>35-45-1-35
25-83-67-7
</code></pre>
<p>We have one 3 cycle and one 3 path that capture all changes.</p>
<p>We coded a systematic search for chains up to a given length. This was way more effective than a brute force over possible family swaps.</p>
<p><strong>data/solution analysis</strong></p>
<p>After our first few solutions we found that the distribution of choices ranks was highly skewed. Most families had one of their first 4 choices. A first consequence is to limit model complexity by only considering choices up to 4, or 6, depending on the runs. Unless mistaken, all families got one of their top 6 choices in our optimal solution. We relaxed this at the end when we proved optimality. </p>
<p>Another example of data analysis was to look at <code>gap(d)</code>, the absolute difference in occupancy between day d and the next day. Here is a plot of gap(d) as a function of the occupancy of the day, for a solution of cost 69158.xxx</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2F6081dcbb4d5e819d0978db28c2bd5a32%2Fimage.png?generation=1579180045541292&alt=media" alt=""></p>
<p>We see there is a simplex convex hull, which can suggest additional conditional constraints. For instance:
if (number(d) >= 126), then (number(d)-number(d-1)) <= a-b*number(d) </p>
<p>where <code>number(d)</code> is the occupancy on day <code>d</code>, <code>a</code>and <code>b</code>two parameters we set for each run.</p>
<p><strong>Cost approximation</strong></p>
<p>The accounting cost function is non convex, which makes it tricky to optimize. Here is a log plot of it capped by a high value (100,000 I think)..</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2F021ea4250ad15952825847a03721168c%2Fcost%20(1).png?generation=1579180264664895&alt=media" alt=""></p>
<p>Even though it is non-convex, it has some clear properties. It increases more than linearly with the gap. This led to the idea of minimizing the sum of squared gaps. Unfortunately this was not very effective.</p>
<p>Other approximations were based on conditional constraints of the form:</p>
<pre><code>if number(d) &gt;= a, then cost(d) &gt;= piecwise(gap(d))
</code></pre>
<p>i.e. approximating the cost by piecewise functions that lower-bound the actual cost.</p>
<p>When we approached the end of our odyssey we switched to an exact representation of the cost via the now well known 3M variable model, first shared by <a href="/hengck23">@hengck23</a> . We actually tried this model on the first day we entered the competition, but solving it from scratch did not seem feasible at the time.</p>
<p><strong>Large Neighborhood Search</strong></p>
<p>Local search is powerful, but it does not exploit the flow structure present in the model and it does not allow for massive changes in family assignments. We used another form of search that could lead to large changes in family assignment. The idea was to start from a solution, keep its occupancy per day, then constrain occupancy to be close to that initial occupancy, and solve the problem as a MIP. This is very effective to explore a large neighborhood of the initial solution, hence its name. It led us to find the deep local optimum at 68910.94. But this could not lead us to an optimal solution. The reason for that is clear when we look at occupancy per day for our optimal solution and for that deep local optimum:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2Fd6f10bd83c5ca4ac0e3f5dcf86169954%2Fimage%20(10).png?generation=1579181603445237&alt=media" alt=""></p>
<p>We see that the optimal solution has one extra dip to 125 compared to the other solution. There is no way our local search or our large neighborhood search would have found it.</p>
<p><strong>Exact model</strong></p>
<p>After a few days we switched to an exact model (see description below) and used various subsets of it. Subsets can mean: limiting family choices to the top K (K = 4 or 6 in practice), or limiting the max value of the gap. These limits were implemented by setting variables to 0 before solving the problem. The model was always initialized with a solution (MIP start). Same for variables used to represent the cost: those with a large coefficient were set to 0.</p>
<p>Then all the tricks above were used to get new solutions quickly from known solutions. Another trick was to fix the occupancy of some days to 125. At one point, starting from a solution of cost 68914.2801, limiting to the best 6 choices, and fixing the occupancy of 4 days to 125, we found an optimal solution overnight.</p>
<p><strong>Optimality Proof</strong></p>
<p>Finding a solution of optimal cost is not the same as proving there is no better solution. We had a slight hope that none of the teams in front of us on the LB found the actual optimum. Our run showed they did find the optimum. The model is very similar to the 3M model shared publicly.</p>
<p>A binary variable x for each pair (family,day)
A binary variable y for each pair (day,occupancy of the day)
A binary variable z for each triple (day, occupancy of the day, occupancy of next day)</p>
<p>The constraints are straightforward except one that was disclosed by <a href="/hengck23">@hengck23</a> : summing the variables z along one of the last two axis is equal to one of the variable y. </p>
<p>Code for a cplex implementation of the full model is available at <a href="https://github.com/jfpuget/Kaggle_Santa_2019">https://github.com/jfpuget/Kaggle_Santa_2019</a>. We ran this on a machine with 20 cores at 2.6GHz . It uses 16 GB. It proves optimality in less than 3 hours, when mip start is our optimal solution.</p>
<p>Before that run we tried to add symmetry breaking constraints in the hope of speeding up the proof. Indeed, once families start not getting their preferred choices, a lot of family assignments yield the same cost. Symmetry breaking was effective in a way, as it halved the number of nodes for the proof, but the running time was a bit larger.</p>
<p>Edit: Our full model is exactly the same as the one described by <a href="/frankfisk">@frankfisk</a> : <a href="https://www.kaggle.com/c/santa-2019-revenge-of-the-accountants/discussion/126380">https://www.kaggle.com/c/santa-2019-revenge-of-the-accountants/discussion/126380</a></p> |
Google QUEST Q&A Labeling | 4th place solution overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>At first, I want to say thanks for all of my teammates :)</p>
<h2>Our final submission is an ensemble of 2 models</h2>
<h3>About Model</h3>
<p>Model1 takes 3 texts as input and has 3 BERTs in it:</p>
<ul>
<li>Bert1: input is question_title + question_body; it predicts only the columns that are relevant to the question (the first 21 columns)</li>
<li>Bert2: input is question_title + answer; it predicts only the columns that are relevant to the answer (the last 9 columns)</li>
<li>Bert3: input is question_body + answer; it predicts all columns</li>
</ul>
<p>On top of these there is 1 linear layer whose input is concat(bert1_out, bert2_out, bert3_out). We calculate BCE as:</p>
<ul>
<li>loss1 for the bert1 prediction & columns[:21]</li>
<li>loss2 for the bert2 prediction & columns[-9:]</li>
<li>loss3 for the bert3 prediction & all columns</li>
<li>loss4 for the last linear layer prediction & all columns</li>
</ul>
<p>and backward each loss. A rough sketch of this layout is given below.</p>
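<p>A rough PyTorch sketch of this three-encoder layout (the pooling choice, hidden sizes and head shapes here are assumptions for illustration, not the exact competition code):</p>
<pre><code>import torch
import torch.nn as nn
from transformers import BertModel

class ThreeBertModel(nn.Module):
    def __init__(self, name="bert-base-uncased", n_q=21, n_a=9, n_all=30):
        super().__init__()
        self.bert_q  = BertModel.from_pretrained(name)   # question_title + question_body
        self.bert_a  = BertModel.from_pretrained(name)   # question_title + answer
        self.bert_qa = BertModel.from_pretrained(name)   # question_body + answer
        h = self.bert_q.config.hidden_size
        self.head_q   = nn.Linear(h, n_q)        # 21 question targets
        self.head_a   = nn.Linear(h, n_a)        # 9 answer targets
        self.head_qa  = nn.Linear(h, n_all)      # all 30 targets
        self.head_all = nn.Linear(3 * h, n_all)  # final layer on concatenated features

    def _pool(self, bert, ids, mask):
        hidden = bert(input_ids=ids, attention_mask=mask)[0]   # (B, L, H)
        m = mask.unsqueeze(-1).float()
        return (hidden * m).sum(1) / m.sum(1).clamp(min=1.0)   # masked mean pooling

    def forward(self, ids_q, mask_q, ids_a, mask_a, ids_qa, mask_qa):
        pq  = self._pool(self.bert_q,  ids_q,  mask_q)
        pa  = self._pool(self.bert_a,  ids_a,  mask_a)
        pqa = self._pool(self.bert_qa, ids_qa, mask_qa)
        out_q, out_a, out_qa = self.head_q(pq), self.head_a(pa), self.head_qa(pqa)
        out_all = self.head_all(torch.cat([pq, pa, pqa], dim=1))
        return out_q, out_a, out_qa, out_all
</code></pre>
<p>Each of the four outputs then gets its own BCE loss against its slice of the 30 targets, matching loss1-loss4 above.</p>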
<p>model2 is just xlnet version of it.</p>
<h3>About Training</h3>
<p>English -> Spanish -> English back-translation augmentation</p>
<p>Flexible max-length allocation when encoding 2 texts
(e.g. if len(text1) < max_len1 & len(text2) > max_len2, then max_len2 = max_len2 + (max_len1 - len(text1)))</p>
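<p>A small sketch of that length-allocation rule (the symmetric branch is an assumption; only one direction is stated above):</p>
<pre><code>def allocate_lengths(len1, len2, max1, max2):
    # text1 under budget and text2 over budget: give the spare tokens to text2
    if len1 < max1 and len2 > max2:
        max2 += max1 - len1
        max1 = len1
    # and the mirror case the other way round
    elif len2 < max2 and len1 > max1:
        max1 += max2 - len2
        max2 = len2
    return max1, max2
</code></pre>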
<h3>About Post Processing</h3>
<p>Baba shared the idea here ↓
<a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129831">https://www.kaggle.com/c/google-quest-challenge/discussion/129831</a></p>
<p>This competition was really fun, thanks all and let's compete at other competition again !!!</p> |
2019 Data Science Bowl | 16th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to everybody for a great competition. Congratulations to the prize winners. I really enjoyed this one even if dealing with QWK was infuriating at times.</p>
<p>The outline of my solution is as follows:</p>
<p>Feature engineering was fun. I used an LGB model to assess new features, working pretty much on minimising the MSE. In the end I developed a lot of features, then culled them using CV to reduce the number to a final 158 features.</p>
<p>The top features were those based on the previous performances in the title that we wanted to predict as well as those in other assessment type activities. Counting occurrences of certain words in previous activities (like “misses”, “rounds”) also proved helpful if split by the title in which they occurred. Features based on the amount of game time spent on each event code in each title also produced some good features. (For example, event code 4070 in activity 12 was particularly helpful.)</p>
<p>Having settled on a feature set, I then used this in a standard lgb model using MSE as the objective, ran it through a NN as well as augmenting the data with the unused test set assessments for a third model. They all produced similar results in CV. An ensemble of these three models produced my final model. In common with many, I used a repeated random selection of the 3614 installation ids, truncated, to estimate a QWK. For the third model above I used a classification objective. I then optimised each class probability estimate using the truncated CV setup. This produced an optimal output of 1.62p1+1.74p2+2.64p3. (A standard output of 1p1+2p2+3p3 scored well but not quite optimally.) </p>
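<p>A sketch of that truncated-CV idea (the column names <code>installation_id</code>, <code>accuracy_group</code> and <code>pred</code> are illustrative, not necessarily the ones used here):</p>
<pre><code>import numpy as np
from sklearn.metrics import cohen_kappa_score

def truncated_qwk(df, n_rounds=100, seed=0):
    """Repeatedly keep one random assessment per installation_id (mimicking the
    truncated test set) and average the resulting QWK scores."""
    rng = np.random.RandomState(seed)
    scores = []
    for _ in range(n_rounds):
        sampled = df.groupby("installation_id", group_keys=False).apply(
            lambda g: g.sample(1, random_state=rng))
        y_pred = sampled["pred"].round().clip(0, 3)
        scores.append(cohen_kappa_score(sampled["accuracy_group"], y_pred,
                                        weights="quadratic"))
    return float(np.mean(scores)), float(np.std(scores))
</code></pre>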
<p>Blending and thresholding were tricky but the truncated CV setup seems to work ok to optimise QWK. I was least sure about this step though it appears to have been reasonably accurate with regards to the private lb. I pretty much ignored the public lb scores but was still pleased to survive the shake-up!</p> |
Google QUEST Q&A Labeling | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you Kaggle and hosts for providing us with this challenging competition and congratulations to all winners! Also I'd like to thank my teammates <a href="/aerdem4">@aerdem4</a>, <a href="/zhesun">@zhesun</a> and <a href="/jingqinnl">@jingqinnl</a> for their hard work and great insights. This was my first time teaming up and definitely made the competition process all the more enjoyable for me.</p>
<p>In short, our final submission is based on a weighted blend of 4 Siamese/double transformer architectures and one USE + feature engineering model coupled with a rounding based post-processing approach.</p>
<h3>1. Post-processing/Ensembling</h3>
<p>I'll start off by describing our post-processing strategy, given that as for many other teams this had a massive impact on our performance. In combination with weighted ensembling it improved our 10 fold GroupKFold CV by ~0.05. The general idea is based on rounding predictions downwards to a multiple of some fraction <code>1/d</code>:
<code>
def scale(x, d):
    if d:
        return (x//(1/d))/d
    return x
</code>
So if <code>d=4</code> and <code>x = [0.12, 0.3, 0.31, 0.24, 0.7]</code> these values will get rounded to <code>[0.0, 0.25, 0.25, 0.0, 0.5]</code>. For each target column we did a grid search for values of <code>d</code> in <code>[4, 8, 16, 32, 64, None]</code>. </p>
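<p>The per-column search itself can be as simple as the sketch below (reusing the <code>scale</code> function above; <code>oof</code> and <code>y_true</code> are assumed to be arrays of shape (n_rows, 30)):</p>
<pre><code>import numpy as np
from scipy.stats import spearmanr

def fit_rounding_params(oof, y_true, candidates=(4, 8, 16, 32, 64, None)):
    best_d = []
    for col in range(y_true.shape[1]):
        scored = []
        for d in candidates:
            rho = spearmanr(y_true[:, col], scale(oof[:, col], d)).correlation
            scored.append((-1.0 if np.isnan(rho) else rho, d))
        best_d.append(max(scored, key=lambda t: t[0])[1])
    return best_d
</code></pre>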
<p>In our ensembling we exploited this technique even further, applying the rounding first to individual model predictions and again after taking a linear combination of model predictions. In doing so we found that with a separate rounding parameter for each model, OOF score improvements would no longer translate to the LB. We addressed this by reducing the number of rounding parameters, using the same <code>d_local</code> across all models:
<code>
y_temp = 0
for pred, w in zip(model_preds, ws):
    y_temp += w * scale(pred, d_local) / sum(ws)
y_temp = scale(y_temp, d_global)
</code>
All ensembling parameters - 2 rounding parameters and 5 model weights - were set using a small grid search optimising the spearman rho metric on OOFs while ignoring question targets for rows with duplicate questions. For all these smart stacking and post-processing tricks the credit goes to <a href="/aerdem4">@aerdem4</a>. </p>
<h3>2. Models</h3>
<p>Our final ensemble consists of:
- Siamese Roberta base (CV 0.416)
- Siamese XLNet base (CV 0.414)
- Double Albert base V2 (CV 0.413)
- Siamese Bert base uncased (CV 0.410)
- USE + Feature Engineering model (CV 0.393)</p>
<p>Listed CV scores are 10 fold GroupKFold w/o post-processing. Although, the transformer models scored significantly higher in terms of CV, the USE + Feature Engineering still contributed significantly in the stack (about 0.005 boost on CV and LB).</p>
<p>All transformer models were implemented using Pytorch and used the pretrained models from the huggingface Transformers library as backbones. Transformer models were trained locally on one RTX 2080Ti. The USE + feature engineering model was implemented with Keras and trained using Kaggle kernels (code available here <a href="https://www.kaggle.com/aerdem4/qa-use-save-model-weights">https://www.kaggle.com/aerdem4/qa-use-save-model-weights</a>. As this model was developed by my teammates I will rely on them to provide more details regarding features, architecture and training in the comment section if needed. </p>
<p>Apart from the pretrained backbones all transformer architectures were very similar:
- <code>question_title</code> + <code>question_body</code> and <code>question_title</code> + <code>answer</code> are fed separately as input to a transformer. As for other top teams, this was easily the biggest difference maker in terms of architecture, adding up to 0.01 to CV scores.
- Average pooling. This improved CV for some models (~ 0.002), but was similar to CLS output for other models.
- Custom 2 layer deep regression head also taking one hot encoded category feature as input. Improved CV ~0.005 relative to simpler linear regression heads.</p>
<p>Only difference between the 4 transformer architectures is that Roberta, XLNet and Bert all used a Siamese design - i.e. the same transformer (shared weights) is used for both question and answer inputs. For Albert using a separate transformer (non-shared weights) worked better. </p>
<h3>3. Training</h3>
<p>The training followed the exact same format for all four transformers and consisted of 2 stages.</p>
First stage:
<ul>
<li>Train for 4 epochs with huggingface AdamW optimiser.</li>
<li>Binary cross-entropy loss.</li>
<li>One-cycle LR schedule. Uses cosine warmup, followed by cosine decay, whilst having a mirrored schedule for momentum (i.e. cosine decay followed by cosine warmup). </li>
<li>Max LR of 1e-3 for the regression head, max LR of 1e-5 for transformer backbones.</li>
<li>Accumulated batch size of 8</li>
</ul>
Second stage:
<p>Freeze transformer backbone and fine-tune the regression head for an additional 5 epochs with constant LR of 1e-5. Added about 0.002 to CV for most models.</p>
<h3>4. Pre-processing</h3>
<p>No special pre-processing on text inputs, just the default model specific tokenisers provided by huggingface. For target variables we did find a trick. First rank transform and follow up with min-max scaling. This made it so that target values were much more evenly distributed between 0 and 1, which played well with BCE loss. Gave 0.003 - 0.005 boost on CV.</p>
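<p>A minimal sketch of that target transform:</p>
<pre><code>import numpy as np
from scipy.stats import rankdata

def rank_minmax(y):
    """Rank-transform each target column, then min-max scale back to [0, 1]."""
    out = np.zeros_like(y, dtype=np.float64)
    for j in range(y.shape[1]):
        r = rankdata(y[:, j], method="average")
        span = r.max() - r.min()
        out[:, j] = (r - r.min()) / (span if span > 0 else 1.0)
    return out
</code></pre>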
<p>Code for training the transformer models and post-processing/ensembling is available here: <a href="https://github.com/robinniesert/kaggle-google-quest">https://github.com/robinniesert/kaggle-google-quest</a></p> |
Google QUEST Q&A Labeling | 15th solution with codes | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Congrats to all winners, and thanks to my teammates</strong>! It was a long journey to finish this competition and get a silver medal, and it was my first time working on an NLP project.</p>
<h2>Our solution could be summarized as:</h2>
<ol>
<li>Post processing, our idea is to group values to discrete values existing in training set. We chose columns to apply post processing by comparing oof. Post processing could be found in <a href="https://www.kaggle.com/jionie/models-with-optimization-v5/">my inference kernel</a></li>
<li>Ensemble, our final submission consists of 8 base models as you can see in the inference kernel, the best model is two part roberta base model which got 0.412 oof without post processing and 0.439 lb with post processing.</li>
<li>Modified pooler layer and multiple hidden_states , we used multiple hidden_states for the input of pooler layer inspired from <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/123770">this great post</a> we followed source codes in transformers for pooler class. This could also be found in my inference kernel. We chose hidden_state_layers based on validation results.</li>
<li>Differential learning rate, we used differential learning rate inspired by <a href="https://www.kaggle.com/melissarajaram/roberta-fastai-huggingface-transformers">this great kernel</a>. With differential learning rate, our models got higher oof and converged faster.</li>
<li>Two part model, inspired by discussions, we trained models in two styles. One uses title + question_body + answer as input for both question labels and answer labels; the other uses title + question_body for question labels and title + answer for answer labels. The second style gives higher oof and lb, I think because there is more information for each sub-problem. (Using max_len 768 for xlnet to get a higher oof follows the same idea.)</li>
<li><p>Data augmentation, we used nlpaug for some basic data augmentation, also we used google translate to back translate some content from other languages. I decided to do this at very early stage, so I'm sorry I can't tell how much it could boost. It won't be much I think.</p>
<h2>Things don't work for us:</h2></li>
<li><p>T5</p></li>
<li>Extra tokens</li>
<li>Large models: they gave similar oof compared with base models and were hard to use for inference, always slow and running out of resources. <br>
<h2>Our best single model:</h2></li>
</ol>
<p><a href="https://www.kaggle.com/leonshangguan/roberta-single-models-with-optimization">https://www.kaggle.com/leonshangguan/roberta-single-models-with-optimization</a></p>
<h2>Our codes:</h2>
<p><a href="https://github.com/jionie/Google-Quest-Answer">https://github.com/jionie/Google-Quest-Answer</a></p> |
Peking University/Baidu - Autonomous Driving | 37th place brief writeup | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Peking University/Baidu - Autonomous Driving
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the host and useful kernels and discussions.
I learned a lot during this competition.</p>
<p>My code is based on <a href="https://www.kaggle.com/phoenix9032/center-resnet-starter">Center-Resnet Starter</a>.
My public LB was 0.094 and private LB was 0.086.</p>
<h2>Model</h2>
<ul>
<li>model architecture is the same as Center-Resnet Starter Kernel except below</li>
<li>feed mask image and x,y position into model</li>
<li>predict heatmap with focal loss following CenterNet paper</li>
<li>change regression target from (x,y,z) to (u-diff, v-diff, z) following CenterNet paper</li>
<li>regress log(z) instead of z, because depth acts multiplicatively and the log(z) distribution is more balanced than the z distribution (see the sketch after this list)</li>
</ul>
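<p>A small sketch of how such targets can be built from a car's world position (the intrinsics below are the ones used in public kernels for this competition and should be treated as an assumption here; <code>u_center</code>/<code>v_center</code> stand for the image position of the assigned heatmap cell):</p>
<pre><code>import numpy as np

fx, fy, cx, cy = 2304.5479, 2305.8757, 1686.2379, 1354.9849  # camera intrinsics (assumed)

def make_targets(x, y, z, u_center, v_center):
    """Project a world position to image coordinates and build the regression
    targets described above: (u-diff, v-diff, log(z))."""
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u - u_center, v - v_center, np.log(z)
</code></pre>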
<h2>Data Augmentation</h2>
<ul>
<li>(x/z, y/z) position jittering</li>
<li>slight gauss noise</li>
<li>slight randomContrastBrightness</li>
</ul>
<h2>Preprocessing / Post Processing</h2>
<ul>
<li>masking prediction using given masks</li>
<li>restore color-distorted test images: for each image and each channel, stretch [0, 95th-percentile value] to [0, 255] (see the sketch after this list)
<ul><li>probably no effect on LB, pointed out by <a href="https://www.kaggle.com/c/pku-autonomous-driving/discussion/127060">this discussion</a></li></ul></li>
</ul>
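<p>A numpy sketch of the per-channel stretch used to restore the test images:</p>
<pre><code>import numpy as np

def stretch_channels(img, pct=95):
    """Map [0, 95th percentile] to [0, 255] independently for each channel."""
    out = np.zeros_like(img, dtype=np.float32)
    for c in range(img.shape[2]):
        hi = max(float(np.percentile(img[..., c], pct)), 1e-6)
        out[..., c] = np.clip(img[..., c].astype(np.float32) / hi * 255.0, 0, 255)
    return out.astype(np.uint8)
</code></pre>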
<h2>Others</h2>
<ul>
<li>replacing confidence by Y-position has no effect. At this point, I doubted the evaluation metric.</li>
<li>removed 5 corrupted train images</li>
<li>adaptive heatmap threshold to predict at least one car per image; discarded it since the LB did not change</li>
<li>increase epochs and change scheduling to ReduceLROnPlateau</li>
<li>2x weights to regression targets to balance two types of losses. It improved LB</li>
<li>add (x,y) position info as head input and add two 1x1 convs to head.
better localCV and private LB (my final sub score + 0.002)
I discarded it since public LB was bad (my final sub score - 0.007)</li>
</ul>
<h2>What did not work for me</h2>
<ul>
<li>smaller input size (w,h = 1536,512)</li>
<li>larger input size + grad accumulation (accumlation_step=2)</li>
<li>deformable convolution V2 (maybe because of my poor modeling skill)</li>
<li>bins with in-bins regression for pitch (bins=4) following CenterNet paper</li>
<li>predict pitch from camera view following CenterNet paper</li>
</ul>
<h2>What I should have tried</h2>
<ul>
<li>improve predictions of large cars. Below may be relevant:
9th solution : different models for cars at different positions
5th solution : FPN network</li>
<li>ensemble</li>
<li>other backbones (DLA34 or resnet34 or efficientnet-b0)</li>
<li>change the each-car distance threshold</li>
<li>flip augmentation</li>
<li>use pretrained model (and use mask info for loss calculation)</li>
</ul>
<hr>
<p>code is <a href="https://github.com/lisosia/kaggle-pku-autonomous-driving">here</a></p> |
Google QUEST Q&A Labeling | 19th solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thank you for holding this great competition google and kaggle team.
I think the balance between needing the latest NLP knowledge and ideas tailored to this specific task is what made this competition exciting! I could not get a gold medal, but I enjoyed it and learned a lot.</p>
<p>I want to share the insights I gained from more than 150 experiments. My <a href="https://github.com/guchio3/kaggle-google-quest">codes</a> and <a href="https://www.kaggle.com/guchio3/refactored-bert2-robert1-xlnet1-blend-pseudo1">kernels</a> are publicly available now.</p>
<h1>preprocess</h1>
<ul>
<li>add special token to separate title and question.
<ul><li>ex. [CLS] title [NEW_SEP] question [SEP] answer [SEP]</li></ul></li>
<li>add category info to the head of the title.
<ul><li>ex. [CLS] CAT-TECHNOLOGY title [NEW_SEP] question [SEP] answer [SEP]</li>
<li>This method should be better than the case just concatenating the category to the last classification layer because the hugging-face model can use the category info while it retrieves the texts.</li></ul></li>
<li>trim the head and tail parts of the texts <a href="https://github.com/guchio3/kaggle-google-quest/blob/decffc69d5657f5114970eb2ea226df8ec8cfaf6/scripts/refactor/datasets.py#L137-L157">like this</a> (see the sketch after this list).
<ul><li>ex. I have a dream that one day on the red hills of Georgia, the sons of former slaves and the sons of former slave owners will be able to sit down together at the table of brotherhood.
-> I have a dream that + at the table of brotherhood.</li></ul></li>
</ul>
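<p>A small sketch of these preprocessing steps (the head/tail split ratio below is an assumption, not the exact one in my code):</p>
<pre><code>from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# register the extra separator used between title and question
tokenizer.add_special_tokens({"additional_special_tokens": ["[NEW_SEP]"]})
# model.resize_token_embeddings(len(tokenizer))  # required after adding tokens

def trim_head_tail(tokens, max_len):
    """Keep the head and the tail of a long token list and drop the middle."""
    if len(tokens) <= max_len:
        return tokens
    head = max_len // 2
    tail = max_len - head
    return tokens[:head] + tokens[-tail:]
</code></pre>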
<h1>modeling (used pytorch hugging-face library)</h1>
<ul>
<li>GKF using question body</li>
<li>AVG pool final layer seq output and use it for the input 1 linear layer</li>
<li>use 2 separate models for question type label and answer type label
<ul><li>(title-max-len, question-max-len, answer-max-len) : (30, 239*2, 0) and (30, 0, 239*2) for each (if each part is less than the max-len, the other parts use the rest len).</li></ul></li>
<li>Ensemble of four models (1 : 0.5 : 0.5 : 1)
<ul><li>bert-base-uncased (best snapshot)</li>
<li>robert-base (best snapshot, second-best snapshot)</li>
<li>xlnet-base-cased (best smapshot)</li></ul></li>
<li>batch size 8, 6 epochs (freezing is crucial)
<ul><li>freeze only hugging-face model for the first 1 epoch, and unfreeze it the rest 5 epochs</li></ul></li>
<li>BCEWithLogitsLoss</li>
<li>Adam w/ cosine scheduler (3e-5 -> 1e-5)</li>
<li>1 cycle pseudo labeling
<ul><li>concat soft target, hard target, and half hard target. Half hard target is the target where the threshold is applied to the col only if the application improve the oof score</li></ul></li>
</ul>
<h1>postprocess</h1>
<ul>
<li>Search and apply thresholds
<ul><li>Get optimized thresholds using scipy.optimize.minimize for oofs. The initial value is calculated from the percentile point of the oofs. (<a href="https://github.com/guchio3/kaggle-google-quest/blob/master/scripts/get_optR3.py">ref</a>)</li>
<li>Each threshold is applied to a column only if it improves the oof score.</li>
<li>Each threshold is applied to a column only if the cardinality of that column is more than one for the public/private part of the test set.</li></ul></li>
<li>Set 0 or 0.5 to question-type-spelling based on the host (kernel based)</li>
</ul>
<h1>What did not work for me</h1>
<ul>
<li>Larger batch size</li>
<li>Longer sequence size
<ul><li>Longer position embedding matrix than 512</li>
<li>Cyclic position ids (0 - 29 for title, 0 - 511 for quesiton, 0 - 511 for answer)</li></ul></li>
<li>Shorter sequence size</li>
<li>Using meta features except for category
<ul><li>host, word counts, tokenized word counts, etc …</li></ul></li>
<li>Using last N hidden outputs of the hugging-face models</li>
<li>Augmentations</li>
<li>Manual text preprocessings</li>
<li>Changing loss functions
<ul><li>MSE, Pair type loss, Focal loss</li></ul></li>
<li>Changing optimizers and schedulers
<ul><li>AdamW, RMSProp, SGD, Step type schedulers</li></ul></li>
<li>Using SWA</li>
<li>Expert models for difficult or imbalanced classes</li>
<li>Oversampling for the minor values of imbalanced classes</li>
<li>Using the other type of models for ensemble
<ul><li>RNN based models, GPT2, XLM</li></ul></li>
</ul>
<p>P.S.
Like some people have said, I spent a long time trying to reproduce the result of the <a href="https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer">great TF kernel</a> in pytorch. At first, I concluded that the gap was caused by pytorch's nature <a href="https://discuss.pytorch.org/t/suboptimal-convergence-when-compared-with-tensorflow-model/5099/18">like this</a>. However, I noticed that the kernel uses snapshot ensembling in its callback function, and this should explain the difference in performance. I think pytorch users are not so familiar with callback functions, so it is easy to overlook.</p>
Santa's Workshop Tour 2019 | My optimal solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa's Workshop Tour 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I used Gurobi 9.0 with an unrestricted evaluation license; thanks Gurobi for giving me the evaluation license. SCIP in ortools could only get me to 68910.94.
- I fixed the occupancy of days at this array and ran the MIP model within +/-5 of each day's value (a small gurobipy sketch of this windowing is given at the end of this write-up).
<code>estimate_occupancies = np.array([300, 287, 300, 300, 286, 262, 250, 250, 271, 296, 300, 300, 279,
264, 256, 273, 294, 282, 259, 228, 196, 164, 125, 300, 300, 295,
278, 263, 257, 253, 279, 276, 252, 219, 188, 156, 125, 283, 271,
248, 216, 184, 159, 125, 300, 282, 257, 226, 194, 161, 125, 286,
264, 236, 201, 168, 137, 125, 266, 241, 207, 166, 125, 125, 125,
253, 225, 190, 147, 125, 125, 125, 227, 207, 175, 129, 125, 125,
125, 235, 220, 189, 147, 125, 125, 125, 256, 234, 202, 161, 125,
125, 125, 234, 214, 181, 136, 125, 125, 125])</code>
With this scope, the MIP model ran in under 10 minutes on my macbook.
- Where did I get those estimated occupancies? I ran an LP model on 5000x10 preference-cost CONTINUOUS variables (from 0 to 1) + 100x176x176 accounting-cost BINARY variables, just like the guide from <a href="https://www.kaggle.com/c/santa-workshop-tour-2019/discussion/120764">https://www.kaggle.com/c/santa-workshop-tour-2019/discussion/120764</a>, and added these quad constraints (zs are my 100x176x176 accounting cost variables):
<code>
for i in range(0, N_DAYS, 2):      ## only need this for every two days
    for u in range(0, 176, 2):     # every two occupancies
        for v in range(0, 176):
            two_vars = zs[i][u, v] + zs[i][u+1, v]
            m.addConstr(two_vars*two_vars == two_vars)
</code>
These quad constraints are needed for the LP only; the MIP model doesn't need them. In the LP model, they force the relaxation to use two adjacent occupancies for a day. This way, the LP model is not optimistic about the accounting cost in the relaxation. You don't need to run the LP model to the end, just stop it when a 67xxx solution has been found, about 25 minutes on my macbook.
- The solution sounds easy and fast, but it took me months to figure out. Congratulations and thank you everyone; I couldn't have reached this score without referring to the discussions in this great competition. </p>
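<p>A gurobipy sketch of the occupancy windowing described above (only the bounding constraints, not the full model; <code>occupancy_vars</code> is assumed to hold one integer variable or linear expression per day):</p>
<pre><code>import gurobipy as gp

def add_occupancy_window(m, occupancy_vars, estimate_occupancies, delta=5):
    # restrict each day's occupancy to +/- delta around the estimate, inside [125, 300]
    for d, est in enumerate(estimate_occupancies):
        m.addConstr(occupancy_vars[d] >= max(125, int(est) - delta), name=f"occ_lb_{d}")
        m.addConstr(occupancy_vars[d] <= min(300, int(est) + delta), name=f"occ_ub_{d}")

# m = gp.Model("santa")
# ... build occupancy_vars from the family assignment variables ...
# add_occupancy_window(m, occupancy_vars, estimate_occupancies)
</code></pre>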
Google QUEST Q&A Labeling | 22nd place solution(NLP beginner) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks kaggle for holding a great competition!</p>
<p>I'm a novice at NLP, so I learned a lot from great kernels about preprocessing, how to build a Bert model, etc...
My code is based on <a href="https://www.kaggle.com/idv2005/pytorch-bert-baseline">this kernel</a> by @idv2005 and <a href="https://www.kaggle.com/adityaecdrid/pytorch-bert-end-to-end-with-cv">this kernel</a> by @adityaecdrid ! Thank you very much!</p>
<p>Here is the summary of my work.(Sorry for poor English.)</p>
<h1>Solution</h1>
<ul>
<li>bert-base-cased, bert-base-uncased, xlnet-base-uncased and Twin models(XLNet, Bert).</li>
<li>Twin model idea is from <a href="https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer">https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer</a>. thanks a lot!. Twin models have Bert for Question and Bert for Answer and treat sequence output with 3 custom layers.
<ol><li>Average pooling for QBert/ABert => concatenate(shape=768*2) => dense(30)</li>
<li>Average pooling for QBert => dense(22), Average pooling for ABert => dense(8), concatenate QBert dense output and ABert dense output.</li>
<li>concatenate QBert/ABert sequence output(shape=512, 768*2) => Transformers => average pooling => dense(30)</li></ol></li>
<li>Final output: a * 0.25 + b * 0.25 + c * 0.5</li>
<li>postprocess (thank you for <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/118724">this discussion</a>)
```
best_bin_dict = {'question_type_spelling': 4, 'question_type_instructions': 8, 'question_type_entity': 9, 'question_type_definition': 8, 'question_type_consequence': 4, 'question_type_compare': 8, 'question_type_choice': 8, 'question_opinion_seeking': 50, 'question_not_really_a_question': 4, 'question_multi_intent': 9, 'question_interestingness_self': 9, 'question_interestingness_others': 100, 'question_has_commonly_accepted_answer': 5, 'question_fact_seeking': 10, 'question_expect_short_answer': 50, 'question_conversational': 8, 'question_body_critical': 100, 'question_asker_intent_understanding': 100, 'answer_well_written': 9, 'answer_type_procedure': 50, 'answer_type_instructions': 9, 'answer_relevance': 100, 'answer_level_of_information': 50}</li>
</ul>
<p>df_sub = pd.read_csv("../input/google-quest-challenge/sample_submission.csv")
pred_final = np.array(preds).sum(axis=0)
for i, col in enumerate(df_sub.columns[1:]):
df_sub[col] = pred_final[:, i]
not_type_spelling_idx = df_test.query("host not in ['ell.stackexchange.com', 'english.stackexchange.com']").index</p>
<p>for col in df_sub.columns[1:]:
if col in best_bin_dict:
n_bins = best_bin_dict[col]
binned = pd.cut(df_sub[col].values, n_bins, retbins=True, labels=np.arange(n_bins)/(n_bins-1))[0]
if col == "question_type_spelling":
binned[not_type_spelling_idx] = 0
df_sub[col] = binned
df_sub.to_csv("submission.csv", index=False)
```</p>
<h1>Not worked</h1>
<ul>
<li>DistilBert</li>
<li>bert-large-cased, bert-large-uncased, bert-large-uncased-whole-masking</li>
<li>Categories/hosts Embedding</li>
<li>L1loss, L2Loss, Focal Loss (Finally I use BCELoss... but I can't understand why BCELoss is best unless it is not symmetric in this case... refer=> <a href="https://www.kaggle.com/mavillan/understanding-the-binary-cross-entropy">https://www.kaggle.com/mavillan/understanding-the-binary-cross-entropy</a>)</li>
<li>Subtask: predict categories.</li>
<li>Concat CLS token sequence and average pooling.</li>
</ul>
<h1>Others</h1>
<ul>
<li>I consumed about $200 for this competition.</li>
</ul> |
Deepfake Detection Challenge | 1st Place Removed Solution - All Faces Are Real Team | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Kaggle, Facebook Host Team and Fellow Competitors:</strong></p>
<p>First of all, we want to put on record our gratitude for Kaggle and the Facebook host team for putting the effort into creating the dataset and hosting this competition, and we give our congratulations to all eventual prize winners.</p>
<p>We'd like to use this statement to further explain the circumstances which led to our winning solution being voided, and our position on the LB being moved with accordance to our second solution. </p>
<p>In anticipation of shake-up of the competition on the private LB, we prepared our two solutions which finished with private LB scores 0.42320 and 0.44531 respectively. For the 0.44531 solution, which scored better on the public LB, we used competition data only and an unweighted mean of 12 models: this is the solution that enabled us to retain our 7th position on the LB. For our original winning solution (0.42320) we mixed 6 models trained using competition data with 9 models trained with some additional external data (our more adventurous submission).</p>
<p>For our original winning model, we used the following additional data:</p>
<ul>
<li><p><strong>The flickrface dataset</strong>: we used a <a href="https://www.kaggle.com/xhlulu/flickrfaceshq-dataset-nvidia-resized-256px">resized version</a> of this dataset. A few of these images had licenses which didn't allow commercial use, so in line with clarifications from Kaggle in the external data thread, we used the license information available from the original github to select and train <strong>only</strong> on images with license types that are acceptable for this competition (<a href="https://creativecommons.org/licenses/by/2.0/">CC-BY</a>, <a href="https://creativecommons.org/publicdomain/mark/1.0/">Public Domain Mark 1.0</a>, <a href="https://creativecommons.org/publicdomain/zero/1.0/">Public Domain CC0 1.0</a>, or <a href="http://www.usa.gov/copyright.shtml">U.S. Government Works</a>)</p></li>
<li><p><strong>Youtube videos images</strong>: we manually created a face image dataset from a handful of youtube videos with <a href="https://support.google.com/youtube/answer/2797468?hl=en-GB">CC-BY</a> license, which explicitly <a href="https://creativecommons.org/licenses/by/3.0/">allows for commercial use</a>.</p></li>
</ul>
<p>We chose these data sources with the belief that they met the rules on external data, specifically that external data must be <em>"available to use by all participants of the competition for purposes of the competition at no cost to the other participants"</em>, and the additional statements in the external data thread that they must be available for commercial use and not restricted to academics etc.</p>
<p>However, in our discussions with Facebook and Kaggle, we were told that despite fulfilling this we were contravening the rules on Winning Submission Documentation:</p>
<blockquote>
<p>WINNING SUBMISSION DOCUMENTATION (Section 4 of the Competition-Specific Rules)
In addition to compliance with the Kaggle Documentation Guidelines at <a href="https://www.kaggle.com/WinningModelDocumentationGuidelines">https://www.kaggle.com/WinningModelDocumentationGuidelines</a>, the winning submission documentation must conform with the following guidelines:</p>
<p>A. If any part of the submission documentation depicts, identifies, or includes any person that is not an individual participant or Team member, you must have all permissions and rights from the individual depicted, identified, or included and you agree to provide Competition Sponsor and PAI with written confirmation of those permissions and rights upon request.</p>
<p>B. Submission documentation must not infringe, misappropriate, or violate any rights of any third party including, without limitation, copyright (including moral rights), trademark, trade secret, patent or rights of privacy or publicity.</p>
</blockquote>
<p><strong>Specifically, we were asked to provide "additional permissions or licenses from individuals appearing in [our] external dataset"</strong>. Unfortunately, since the data was from public datasets, we didn't have specific written permission from each individual appearing in them, nor did we have any way of identifying these individuals. We didn't realise while competing that external data in this competition falls under 'documentation' as well as the external data rules, so we did not secure these permissions from individuals depicted above and beyond the licensing requirements. </p>
<p>We suspect that most competitors also did not realise these additional restrictions existed - we are unable to find any data posted in the External Data Thread which meets this threshold with a brief scan. During the competition, the rules on external data were repeatedly clarified, so this leaves us wondering why Kaggle never took the opportunity to clarify that external data must additionally follow the more restrictive rules for winning submission documentation.</p>
<p>An additional concern brought to us was that <strong>Facebook felt some of our external data "clearly appears to infringe third party rights" despite being labelled as CC-BY</strong> (it's not clear what data they were referring to specifically). Even if this were the case, it seems unreasonable to us that a Kaggle team should have to trace and verify that someone who publishes a dataset themselves has the rights to do so, and that we should have to engage rights clearance services in order to make a competition submission - it was suggested to us that we could have run our external data past our lawyer before making our submissions.</p>
<p><strong>While we feel that these extra rules could have been made clear during the competition, and we hope that Kaggle will begin to clarify these rules in future competitions, we understand that there is little we can do in this instance.</strong> We have had a constructive call with both Kaggle and Facebook which we thank them for. After this call, it was agreed that because we did not knowingly seek to undermine any rules, that our submission that did not use any external data should be allowed to remain and only the winning submission is to be disqualified.</p>
<p>That being said, we are very disappointed by this outcome after spending so many months on the competition. <strong>Successful Kaggle competitions rely on a trust between competitors and Kaggle that the rules will be fairly explained and applied, and this trust has been damaged.</strong> We welcome any thoughts from the community on this matter.</p>
<p>Giba, Mikel, Yifan, Gary and Qishen <br>
All Faces are Real</p> |
Deepfake Detection Challenge | 67th Place Solution: Kaggle Kernals Only + Code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is a brief write-up, if you want to you can read my full ramblings here along with my code: <a href="https://github.com/GreatGameDota/Deepfake-Detection">https://github.com/GreatGameDota/Deepfake-Detection</a></p>
<h1>TLDR: Ensemble for the win!</h1>
<p>Once again an amazing Kaggle competition that I had a ton of pleasure participating in! I would like to thank Kaggle and the hosts for putting together this competition and working hard to make sure it went as smoothly as it did!</p>
<p>I would also like to shout out some great people who helped me a lot in the competition:</p>
<p>Thanks to <a href="/phunghieu">@phunghieu</a> Hieu Phung for his amazing dataset that I used which can be found here: <a href="https://www.kaggle.com/c/deepfake-detection-challenge/discussion/128954#736780">https://www.kaggle.com/c/deepfake-detection-challenge/discussion/128954#736780</a></p>
<p>Thanks to <a href="/unkownhihi">@unkownhihi</a> Shangqiu Li and team for all of the things they shared over the course of the competition. Most importantly the Mobilenet face extractor which can be found here: <a href="https://www.kaggle.com/unkownhihi/mobilenet-face-extractor-helper-code">https://www.kaggle.com/unkownhihi/mobilenet-face-extractor-helper-code</a></p>
<p>Thanks to <a href="/humananalog">@humananalog</a> for his starter code notebook: <a href="https://www.kaggle.com/humananalog/binary-image-classifier-training-demo">https://www.kaggle.com/humananalog/binary-image-classifier-training-demo</a></p>
<h2>Models</h2>
<ul>
<li>Xception</li>
<li>EfficientNet B1</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3543139%2Fdf0bb9e27b1053f0638883ba4d0422a5%2Fmodel.png?generation=1587741613868657&alt=media" alt=""></p>
<h2>Dataset</h2>
<p>Image resolution: 150x150
Mobilenet was used to extract faces for training and inference
The dataset I used is public and can be found here: <a href="https://www.kaggle.com/c/deepfake-detection-challenge/discussion/134420">https://www.kaggle.com/c/deepfake-detection-challenge/discussion/134420</a></p>
<h2>Augmentation</h2>
<ul>
<li>Horizontal flip</li>
<li>Jpeg Compression</li>
<li>Downscale</li>
</ul>
<h2>Training</h2>
<ul>
<li>Adam optimizer</li>
<li>20 epochs</li>
<li>Binary CrossEntropy Loss function</li>
<li>ReduceLROnPlateau</li>
<li>Save checkpoint based on validation LogLoss</li>
<li>Folder-wise Validation split: 0-40 train, 41-49 validation</li>
<li>Simple undersampling was used to balance the data</li>
<li>1 frame was randomly picked from every video while training</li>
</ul>
<h2>Ensembling/Blending</h2>
<p>I ensembled all the models using the geometric mean of their probabilities.</p>
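<p>A tiny sketch of that blend (<code>probs</code> being an array of shape (n_models, n_videos)):</p>
<pre><code>import numpy as np

def geometric_mean(probs, eps=1e-7):
    return np.exp(np.mean(np.log(np.clip(probs, eps, 1 - eps)), axis=0))
</code></pre>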
<h2>Final Submission</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3543139%2F63e59bfe9c96f5014bd4c42bab52849c%2Fscores.png?generation=1587741641643531&alt=media" alt=""></p>
<h2>What didn't work</h2>
<ul>
<li>LRCN (CNN + RNN)</li>
<li>External data</li>
<li>Post-processing/Clipping</li>
</ul>
<h2>Final Thoughts</h2>
<p>This was another great Kaggle competition! I'm so happy I achieved such a great performance on a competition that a lot of people just skipped due to the amount of data. I'm also surprised I performed so well with only Kaggle kernels!</p>
<p>I'm so excited to get my second ever competition medal and I'm even more excited because it is my first silver medal! I now graduate to Kaggle competition expert!</p>
<h2>Some helpful Links</h2>
<p>My baseline kernel: <a href="https://www.kaggle.com/greatgamedota/xception-classifier-w-ffhq-training-lb-537">https://www.kaggle.com/greatgamedota/xception-classifier-w-ffhq-training-lb-537</a>
My submission kernel: <a href="https://www.kaggle.com/greatgamedota/69th-place-solution-mobilenet-inference/">https://www.kaggle.com/greatgamedota/69th-place-solution-mobilenet-inference/</a>
Image dataset kernel: <a href="https://www.kaggle.com/greatgamedota/deepfake-detection-full-data">https://www.kaggle.com/greatgamedota/deepfake-detection-full-data</a></p>
Deepfake Detection Challenge | 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Keep it simple</h1>
<p>I used a frame-by-frame classification approach as many other competitors did.
Tried a lot of other complex things but in the end it was better to just use a classifier.</p>
<h3>Data preparation</h3>
<ul>
<li>extracted boxes and landmarks with MTCNN and saved them as json</li>
<li>extracted crops in original size and saved them as png</li>
<li>extracted SSIM masks with difference between real and fake and saved them as png</li>
</ul>
<h3>Face-Detector</h3>
<p>I used simple MTCNN detector.
Input size for face detector was calculated for each video depending on video resolution.
- 2x resize for videos with less than 300 pixels wider side
- no resize for videos with wider side between 300 and 1000
- 0.5x resize for videos with wider side > 1000 pixels
- 0.33x resize for videos with wider side > 1900 pixels</p>
<h3>Input size</h3>
<p>As soon as I discovered that EfficientNets significantly outperform other encoders I used only them in my solution.
As I started with B4 I decided to use "native" size for that network (380x380).
Due to memory constraints I did not increase input size even for B7 encoder.</p>
<h3>Margin</h3>
<p>When I generated crops for training I added 30% of face crop size from each side and used only this setting during the competition.</p>
<h3>Encoders</h3>
<p>I tried multiple variants of efficient nets at the beginning:</p>
<ul>
<li>solo B3 (300x300) - 0.29 public</li>
<li>solo B4 (380x380) - 0.27 public</li>
<li>solo B5 (380x380) - 0.25 public</li>
<li>solo B6 (380x380) - 0.27 public (surprisingly it was worse than B5, and I did not try B7 until the last week of the competition)</li>
<li>solo B7 (380x380) - 0.24 public </li>
</ul>
<p>In the end I used two submits with:
- 15xB5 (different seeds) with a heuristic overfitted for the Public LB, trained with standard augmentations - 10th place on private
- 7xB7 (different seeds) with a more conservative averaging heuristic, trained with hardcore augmentations - 3rd place on private </p>
<h3>Averaging predictions:</h3>
<p>I used 32 frames for each video.
For each model's output, instead of simple averaging I used the following heuristic, which worked quite well on the public leaderboard (0.25 -> 0.22 solo B5).</p>
<p><code>
def confident_strategy(pred, t=0.87):
    pred = np.array(pred)
    size = len(pred)
    fakes = np.count_nonzero(pred &gt; t)
    if fakes &gt; size // 3 and fakes &gt; 11:
        return np.mean(pred[pred &gt; t])
    elif np.count_nonzero(pred &lt; 0.2) &gt; 0.6 * size:
        return np.mean(pred[pred &lt; 0.2])
    else:
        return np.mean(pred)
</code></p>
<p>I.e. I used only confident predictions for averaging if they passed some thresholds.
Though it worked well on the public leaderboard, to be safe for the second submit (3rd private) I used more conservative thresholds for real videos. </p>
<h3>Validation strategy</h3>
<p>I used 0-2 folders as holdout at the beginning.
But 400 videos from the public test had more correlation with Public LB and I switched to this approach.</p>
<p>I tracked two log loss metrics (on averaged probabilities per video)
- logloss on real videos
- logloss on fake videos </p>
<p>From the validation I can say logloss on FAKE videos was much lower than on REAL. I.e. fakes were too easy to spot.
Which was not encouraging. I guess public leaderboard also has this characteristic as my validation had strong correlation with public LB.
It is not the case on private set as it would contain new/different FaceSwap methods. </p>
<h3>Augmentations</h3>
<p>I used image compression, noise, blur, resize with different interpolations, color jittering, scaling and rotations
<code>
def create_train_transforms(size=380):
    return Compose([
        ImageCompression(quality_lower=60, quality_upper=100, p=0.5),
        GaussNoise(p=0.1),
        GaussianBlur(blur_limit=3, p=0.05),
        HorizontalFlip(),
        IsotropicResize(max_side=size),
        PadIfNeeded(min_height=size, min_width=size, border_mode=cv2.BORDER_CONSTANT),
        OneOf([RandomBrightnessContrast(), FancyPCA(), HueSaturationValue()], p=0.7),
        ToGray(p=0.2),
        ShiftScaleRotate(shift_limit=0.1, scale_limit=0.2, rotate_limit=10, border_mode=cv2.BORDER_CONSTANT, p=0.5),
]
)
</code></p>
<h3>Generalization approach</h3>
<p>I expected it wouldn't be enough for generalization and spent a few weeks working on domain-specific augmentations. </p>
<p>I wanted to push models to learn the following properties:
- visual artifacts (models learn that easily even without any augmentations)
- different encoding of face from other part of the image. Big margin helps with that.
- face warping artifacts. Big marging helps with that as well. Related article <a href="https://arxiv.org/abs/1811.00656">https://arxiv.org/abs/1811.00656</a> to catch FWA
- blending artifacts - here we need either to predict blending mask (<a href="https://arxiv.org/abs/1912.13458">https://arxiv.org/abs/1912.13458</a>) but it's not possible to obtain ground truth mask as there is no big difference in pixels/SSIM on the edge due to blurring and other techniques used to reduce blending artifacts or come up with augmentations that <strong>destroy</strong> visual artifacts.</p>
<p>To catch face blending artefacts:
1. removed half of the face horizontally or vertically. Used dlib face convex hulls.
2. blacked out landmarks (eyes, nose or mouth). Used MTCNN landmarks for that.
3. blacked out half of the image. To be safe I checked that it will not delete highly confident difference from masks generated with SSIM. </p>
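<p>A rough sketch of the landmark dropout idea from point 2 (box size and landmark format are assumptions made for illustration):</p>
<pre><code>import numpy as np

def blackout_landmark(image, landmarks, size=0.15, rng=np.random):
    """Black out a square patch around one randomly chosen landmark point;
    `landmarks` is an (N, 2) array of (x, y) coordinates."""
    img = image.copy()
    h, w = img.shape[:2]
    x, y = landmarks[rng.randint(len(landmarks))]
    half = int(size * min(h, w) / 2)
    x0, x1 = max(int(x) - half, 0), min(int(x) + half, w)
    y0, y1 = max(int(y) - half, 0), min(int(y) + half, h)
    img[y0:y1, x0:x1] = 0
    return img
</code></pre>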
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F534152%2Fcd745d9fb666f2316ac699165d629c4f%2Faugmentations.jpg?generation=1587717153025245&alt=media" alt=""></p>
<p>I don't really know if it helped on private leaderboard though.</p>
<h3>Training schedule</h3>
<ul>
<li>sampled fake crops based on number of real crops i.e. <code>fakes.sample(n=num_real, replace=False, random_state=seed)</code></li>
<li>2500 iterations per epoch</li>
<li>SGD, momentum=0.9, weight decay=1e-4</li>
<li>PolyLR with 0.01 starting LR</li>
<li>75k iterations</li>
<li>used Apex with mixed precision</li>
<li>trained on 4 GPUs with SyncBN and DDP. Batch size 16x4 for B5, 12x4 for B7.</li>
<li>label smoothing - for me label smoothing with 0.01 eps was optimal on public leaderboard. Also it allowed not to use clipping at all. </li>
</ul>
<h3>Hardware</h3>
<p>Unfortunately I did not ask for AWS credits from the organizers; I hoped that my home workstations would be enough - I was wrong!
I have two devboxes: one with 2xTitan Vs, the other with 4xTitan Vs</p>
<p>Huge thanks to hostkey provider (<a href="https://www.hostkey.com/">https://www.hostkey.com/</a>) that gave me a grant <a href="http://landing.hostkey.com/grants?_ga=2.3307699.2051560741.1587714719-1038061670.1587714719">http://landing.hostkey.com/grants?_ga=2.3307699.2051560741.1587714719-1038061670.1587714719</a> !
I got a devbox with 4x2080Ti for two months! Additional 4 GPUs helped me a lot to iterate faster as my models required all of them for training a single model. </p>
<h3>Things that I tried but did not work well enough</h3>
<ul>
<li>Metric learning - was worse than a simple classificator</li>
<li>UNet with SSIM difference prediction - got the same LB score.</li>
<li>Self supervision with blending real faces. Extracted convex hull for real faces warped/resized/compressed face and then blended it back and changed label to fake. Models learned that quickly but that did not give any boost on validation/public leaderboard.</li>
<li>Self supervision with rotations like here <a href="https://stanford-cs221.github.io/autumn2019-extra/posters/110.pdf">https://stanford-cs221.github.io/autumn2019-extra/posters/110.pdf</a> - did not improve my score</li>
<li>Temperature scaling - risky, no big difference. Decided not to use it. </li>
<li>Using steganalysis. Did not imporve validation score.</li>
<li>RNN - I guess it would work, but due to CPU bottleneck it would be hard to extract multiple chunks in kernel. My first model with RNN + Resnet34 scored around 0.4 on public and a simple classifier was better. </li>
</ul>
<h3>No External data</h3>
<p>That was a huge concern for me because nothing is basically allowed. I decided to be safe and did not use any external data.</p>
<h3>Github</h3>
<p><a href="https://github.com/selimsef/dfdc_deepfake_challenge">https://github.com/selimsef/dfdc_deepfake_challenge</a></p> |
Deepfake Detection Challenge | Our solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, we congratulate the winners of the Deepfake Detection competition and also would like to thank every competitor who worked hard to solve the deepfakes problem. We learned a lot throughout this amazing journey with you all, and we hope to see you again in the upcoming Deepfake challenges.</p>
<p>Before experimenting intensely with deep neural network models, we decided to inspect the deepfake data with established computer vision techniques, to see if there was a simple cue that could reveal a deepfake, keeping these features in mind for later feature-engineering purposes. Initially, we could observe indicators of deepfakes when inspecting frames, but such small artifacts were too hard to generalize into a solution. Although these experiments did not result in great success, they gave us a better understanding of deepfakes. Later on, we started experimenting with deep learning based models, which showed a significant performance increase in detecting fake faces.</p>
<h2>Face Extraction</h2>
<p>For face extraction and processing purposes, we used FaceNet-Pytorch:</p>
<ul>
<li>Extracted landmarks and bounding boxes via MTCNN, then faces were cropped and saved in PNG format for high-quality image representation.</li>
<li><p>Also, we used InceptionResNet features to differentiate individuals in the same frame from each other.</p></li>
</ul>
<h2>Data Preparation</h2>
<ul>
<li><p><strong>Margin</strong>: After square-cropping faces with respect to the bounding boxes, we scaled each face by a factor of 1.2, determined by experimentation (see the extraction sketch after this list).</p></li>
<li><strong>Resize</strong>: Scaled faces resized to 224x224, with BICUBIC sampling.</li>
<li><strong>Augmentations</strong>: We’ve experimented with “kornia” and “albumentations”.</li>
<li>Used techniques such as adaptive histogram equalization and object detection to increase detection robustness.</li>
<li>Fine-tuned MTCNN for robust and rapid face detection.</li>
<li>Used specific methods such as Compression, Edge-detection, Landmark Extraction, and Low-Quality to generate several datasets.</li>
<li>Additionally extracted features such as quality metrics, optical flow, FFT, DoG, GLCM, and LBP</li>
</ul>
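<p>A minimal sketch of this face-extraction pipeline (illustrative only: the function and variable names are assumptions, and only the facenet-pytorch MTCNN detector, the 1.2 margin and the 224x224 BICUBIC resize come from the description above):</p>
<pre><code>from facenet_pytorch import MTCNN
from PIL import Image

mtcnn = MTCNN(select_largest=False, post_process=False, device='cuda')

def extract_face(frame, scale=1.2, size=224):
    # Detect the face box, enlarge it around its centre by `scale`, square-crop and resize.
    img = Image.fromarray(frame)
    boxes, _ = mtcnn.detect(img)
    if boxes is None:
        return None
    x0, y0, x1, y1 = boxes[0]
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    half = max(x1 - x0, y1 - y0) * scale / 2
    crop = img.crop((int(cx - half), int(cy - half), int(cx + half), int(cy + half)))
    return crop.resize((size, size), Image.BICUBIC)
</code></pre>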
<h2>Augmentations and generalization</h2>
<p>During our experiments, we used different levels of augmentation to increase robustness towards the uncertainty of videos in the wild.</p>
<pre><code>import albumentations as A

def chain_aug(img):
    # Heavy chain: flip, JPEG compression, blur, then sharpen.
    return A.Compose([A.HorizontalFlip(p=1),
                      A.JpegCompression(quality_lower=70, quality_upper=100, p=1.),
                      A.Blur(blur_limit=7, p=1.),
                      A.IAASharpen(alpha=(0.2, 0.5), lightness=(0.5, 1.0), p=1.)
                      ], p=1)(image=img)['image']
</code></pre>
<hr>
<pre><code>import albumentations as A

def light_aug(img):
    # Light chain: flip plus brightness/contrast, gamma and CLAHE adjustments.
    return A.Compose([A.HorizontalFlip(p=1),
                      A.RandomBrightnessContrast(p=1),
                      A.RandomGamma(p=1),
                      A.CLAHE(p=1)], p=1)(image=img)['image']
</code></pre>
<hr>
<pre><code>import numpy as np
from PIL import Image, ImageDraw

def draw_ellipse(img):
    # Black out a random elliptical region of the face crop to simulate occlusion.
    img = Image.fromarray(img)
    draw = ImageDraw.Draw(img)
    w, h = img.size
    cx, cy = w//2, h//2
    x0 = np.random.randint(0, cx)
    y0 = np.random.randint(0, cy)
    x1 = np.random.randint(cx, w)
    y1 = np.random.randint(cy, h)
    draw.ellipse((x0, y0, x1, y1), fill='black')
    return np.asarray(img)
</code></pre>
<h2>Models</h2>
<p>Early studies preferred backbones such as <strong>ResNet</strong>, <strong>Xception</strong>, and <strong>VGG</strong>, whereas we settled on <strong>EfficientNet</strong>. The main idea with the backbone network was to train a simple binary classifier to discriminate between real and fake faces (a minimal sketch of such a classifier follows after the lists below). Experiments with binary-classification models started from EfficientNet-b0 and scaled up to EfficientNet-b5. Here are a couple of things we tried:
- Low res effnet-b3 + Effnet-b5 (224x224) 0.30656
- Effnet-b4 0.38793 (224x224)
- Effnet-b2 0.43068 (224x224)
- Triplet loss models (224x224)
- An ensemble of XGB, LGB, CATBoost, KNN models</p>
<p>For final submissions:
- An ensemble of EfficientNets: 0.31499 [<strong>FAILED</strong>] (Although the scripts worked in Kaggle submissions as well as in tests, sadly, our best-performing model failed during private leaderboard scoring.)
- Ensembles of EfficientNets + Other models mentioned above: 0.33552</p>
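<p>For reference, a minimal sketch of an EfficientNet-based binary classifier of the kind described above (this uses the timm library as one possible implementation; it is not the team's actual code):</p>
<pre><code>import timm
import torch.nn as nn

class DeepfakeClassifier(nn.Module):
    def __init__(self, backbone='tf_efficientnet_b4', pretrained=True):
        super().__init__()
        # EfficientNet backbone with a single-logit head for real-vs-fake classification.
        self.model = timm.create_model(backbone, pretrained=pretrained, num_classes=1)

    def forward(self, x):
        return self.model(x)  # raw logit; apply a sigmoid at inference time
</code></pre>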
<h2>Training schedule</h2>
<p>In our development environment, we used Nvidia-docker, with a 4x 2080TI Lambda workstation.</p>
<ul>
<li>A balanced sampling of reals and fakes from the dataset.</li>
<li>Batch size of 16 per GPU (4x), 625 iterations per epoch. </li>
<li>12,500 iterations in total.</li>
<li>Adam with weight_decay=10e-6</li>
<li>OneCycleLR, with lr set to min 3e-6 and max 3e-4</li>
<li>FP16 training (a minimal setup sketch follows after this list)</li>
</ul>
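<p>A minimal sketch of this schedule (illustrative only: <code>model</code> and <code>loader</code> are assumed to be defined elsewhere, and torch.cuda.amp stands in for whatever FP16 implementation was actually used):</p>
<pre><code>import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import OneCycleLR

optimizer = Adam(model.parameters(), lr=3e-4, weight_decay=10e-6)
# 625 iterations per epoch x 20 epochs = 12,500 steps; div_factor=100 starts the LR near 3e-6.
scheduler = OneCycleLR(optimizer, max_lr=3e-4, total_steps=12_500, div_factor=100)
scaler = torch.cuda.amp.GradScaler()  # mixed-precision training

for images, labels in loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        logits = model(images).squeeze(1)
        loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, labels.float())
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    scheduler.step()
</code></pre>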
<h2>Validation</h2>
<p>At first, like many competitors, we measured our performance with the 400 public videos; after some time, we switched to folders 0-1-2. Later on, our team developed a validation split strategy based on similarity scores.</p>
<p>During the competition, we were tracking the following (a small per-class log-loss sketch follows after this list):
- <strong>Log loss</strong> score for each class
- Accuracy, confusion matrix scores
- Error, gradient map analysis with custom debug scripts</p>
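<p>As a small illustration of the per-class tracking above (a generic sketch, not the team's debug scripts), the standard log loss can simply be split by true label:</p>
<pre><code>import numpy as np

def per_class_log_loss(y_true, y_pred, eps=1e-7):
    # y_true: 0 for real, 1 for fake; y_pred: predicted probability of fake.
    y_true = np.asarray(y_true)
    y_pred = np.clip(np.asarray(y_pred), eps, 1 - eps)
    losses = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    return {'real': losses[y_true == 0].mean(), 'fake': losses[y_true == 1].mean()}
</code></pre>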
<h2>Things that didn’t work</h2>
<ul>
<li>Experimented with masks with different models such as U-Net.</li>
<li>SVM and decision-tree algorithms on EfficientNet bottleneck features as well as on flattened face images, but we didn't see any improvement in results.</li>
<li>LightGBM with Quality Metrics dataset.</li>
<li>SVM with DFT features, as described in "Unmasking DeepFakes with simple Features"</li>
<li>LightGBM with DoG + GLCM + LBP</li>
<li>Temporal Model with LSTM, for capturing temporal inconsistency</li>
<li>Optical flow features; however, the DFDC dataset was too noisy for robust optical flow estimation.</li>
<li>We experimented with an idea where we subtract frame n from frame n+1.</li>
</ul>
<p>Through the DFDC journey, we have had many challenges of generalizing our models to deepfakes in the wild. Training with only the DFDC dataset, our models resulted in high performance with "face swap" auto-encoder based generation techniques. Still, they failed to generalize other variations of deepfakes and generation techniques. We are very excited for future contributions, and we are happy to announce a deployed version of our solution at <a href="https://deepware.ai">https://deepware.ai</a>. Once again, we would like to thank the amazing Kaggle community and especially the Facebook team and everyone who worked hard on the Deepfakes problem to provide novel solutions.</p> |
Deepfake Detection Challenge | 3rd place solution and code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Summary</h1>
<p>My solution consists of three EfficientNet-B7 models (I used the Noisy Student pre-trained weights). I did not use external data, except for pre-trained weights. One model runs on frame sequences (a 3D convolution has been added to each EfficientNet-B7 block). The other two models work frame-by-frame and differ in the size of the face crop and in the augmentations used during training. To tackle the overfitting problem, I used a mixup technique on aligned real-fake pairs. In addition, I used the following augmentations: AutoAugment, Random Erasing, Random Crops, Random Flips, and various video compression parameters. Video compression augmentation was done on-the-fly. To do this, short cropped tracks (50 frames each) were saved in PNG format, and at each training iteration they were loaded and re-encoded with random parameters using ffmpeg. Due to the mixup, model predictions were “uncertain”, so at the inference stage, model confidence was strengthened by a simple transformation. The final prediction was obtained by averaging the predictions of models with weights proportional to confidence. The total training and preprocessing time is approximately 5 days on a DGX-1.</p>
<h1>Model and code</h1>
<p>You can use my <a href="https://github.com/NTech-Lab/deepfake-detection-challenge">code</a> to reproduce the result.</p>
<h1>Key ingredients</h1>
<h2>Mixup on aligned real-fake pairs</h2>
<p>One of the main difficulties of this competition is severe overfitting. Initially, all models overfitted in 2-3 epochs (the validation loss started to increase). The idea, which helped a lot with the overfitting, is to train the model on a mix of real and fake faces: for each fake face, we take the corresponding real face from the original video (with the same box coordinates and the same frame number) and do a linear combination of them. In terms of tensors it is <br>
<code>python
input_tensor = (1.0 - target) * real_input_tensor + target * fake_input_tensor
</code>
where target is drawn from a Beta distribution with parameters alpha=beta=0.5. With these parameters, there is a very high probability of picking values close to 0 or 1 (pure real or pure fake face). You can see the examples below: <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F409174%2Fb2c49a9ba741584da43e34306ea65505%2Fmixup_example.jpg?generation=1592040431827143&alt=media" alt="">
Due to the fact that real and fake samples are aligned, the background remains almost unchanged on interpolated samples, which reduces overfitting and makes the model pay more attention to the face.</p>
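<p>A minimal sketch of this aligned mixup (illustrative; the exact implementation is in the linked repository):</p>
<pre><code>import torch

def aligned_mixup(real_faces, fake_faces, alpha=0.5):
    # real_faces/fake_faces: batches of aligned crops (same video, frame and box coordinates).
    beta = torch.distributions.Beta(alpha, alpha)
    target = beta.sample((real_faces.size(0),)).to(real_faces.device)  # mostly near 0 or 1
    t = target.view(-1, 1, 1, 1)
    mixed = (1.0 - t) * real_faces + t * fake_faces
    return mixed, target  # target doubles as the soft "fakeness" label
</code></pre>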
<h2>Video compression augmentation</h2>
<p>In the paper [1] it was pointed out that augmentations close to degradations seen in real-life video distributions were applied to the test data. Specifically, these augmentations were (1) reduce the FPS of the video to 15; (2) reduce the resolution of the video to 1/4 of its original size; and (3) reduce the overall encoding quality. In order to make the model resistant to various parameters of video compression, I added augmentations with random parameters of video encoding to training. It would be infeasible to apply such augmentations to the original videos on-the-fly during training, so instead of the original videos, cropped (1.5x areas around the face) short (50 frames) clips were used. Each clip was saved as separate frames in png format. An example of a clip is given below: <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F409174%2F621e97ea700ccd2e6930ca18835936f0%2Fclip_example.jpg?generation=1592040594526237&alt=media" alt="">
For on-the-fly augmentation, ffmpeg-python was used. At each iteration, the following parameters were randomly sampled (see [2]; a rough re-encoding sketch follows after the list):
- FPS (15 to 30)
- scale (0.25 to 1.0)
- CRF (17 to 40)
- random tuning option</p>
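<p>A rough sketch of what this re-encoding step could look like with ffmpeg-python (parameter ranges taken from the list above; the file handling, the set of tuning options and the exact filter calls are simplified assumptions, not the author's code):</p>
<pre><code>import random
import ffmpeg

def reencode_clip(in_path, out_path):
    fps = random.randint(15, 30)
    scale = random.uniform(0.25, 1.0)
    crf = random.randint(17, 40)
    tune = random.choice(['film', 'grain', 'fastdecode', 'zerolatency'])  # random tuning option
    (
        ffmpeg
        .input(in_path)
        .filter('fps', fps=fps)
        .filter('scale', f'iw*{scale:.2f}', f'ih*{scale:.2f}')
        .output(out_path, vcodec='libx264', crf=crf, tune=tune)
        .overwrite_output()
        .run(quiet=True)
    )
</code></pre>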
<h2>Model architecture</h2>
<p>As a result of the experiments, I found out that the EfficientNet models work better than others (I checked ResNet, ResNeXt, SE-ResNeXt). The best model was EfficientNet-B7 with Noisy Student pre-trained weights [3]. The size of the input image is 224x192 (most of the faces in the training dataset are smaller). The final ensemble consists of three models, two of which are frame-by-frame, and the third works on sequence.</p>
<h3>Frame-by-frame models</h3>
<p>Frame-by-frame models work quite well. They differ in the size of the area around the face and augmentations during training. Below are examples of input images for each of the models: <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F409174%2F6b85a72e94d2d33b9f92393ff1ab702a%2Ffirst_and_second_model_inputs.jpg?generation=1592040647615387&alt=media" alt=""></p>
<h3>Sequence-based model</h3>
<p>Temporal dependencies can probably be useful for detecting fakes. Therefore, I added a 3D convolution to each block of the EfficientNet model. This model worked slightly better than a similar frame-by-frame model. The length of the input sequence is 7 frames, and the step between frames is 1/15 of a second. An example of an input sequence is given below: <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F409174%2Fb92e6241ef295c6f719fd4987144c58f%2Fthird_model_input.jpg?generation=1592040677053674&alt=media" alt=""></p>
<h2>Image augmentations</h2>
<p>To improve model generalization, I used the following augmentations: AutoAugment [4], Random Erasing, Random Crops, Random Horizontal Flips. Since I used mixup, it was important to augment real-fake pairs the same way (see example). For a sequence-based model, it was important to augment frames that belong to the same clip in the same way. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F409174%2F1a8e42e972ea9a755cd29ef49217ffb2%2Faugmented_mixup.jpg?generation=1592040717314961&alt=media" alt=""></p>
<h2>Inference post-processing</h2>
<p>Due to mixup, the predictions of the models were uncertain, which was not optimal for the logloss. To increase confidence, I applied the following transformation: <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F409174%2F747b96390d72602fec15438d627e92ed%2Fpred_transform.jpg?generation=1592040745015626&alt=media" alt="">
Due to computational limitations, predictions are made on a subsample of frames. Half of the frames were horizontally flipped. The prediction for the video is obtained by averaging all the predictions with weights proportional to the confidence (the closer the prediction to 0.5, the lower its weight). Such averaging works like attention, because the model gives predictions close to 0.5 on poor quality frames (profile faces, blur, etc.). </p>
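<p>The exact confidence-strengthening transform is the formula shown above (see the repository); the confidence-weighted averaging itself can be sketched as follows (illustrative, not the author's code):</p>
<pre><code>import numpy as np

def aggregate_video_prediction(frame_preds):
    # Weight each per-frame prediction by its distance from 0.5 (its "confidence").
    preds = np.asarray(frame_preds, dtype=np.float64)
    weights = np.abs(preds - 0.5)
    if weights.sum() == 0:
        return 0.5
    return float(np.sum(weights * preds) / weights.sum())
</code></pre>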
<h2>References</h2>
<p>[1] Brian Dolhansky, Russ Howes, Ben Pflaum, Nicole Baram, Cristian Canton Ferrer, “The Deepfake Detection Challenge (DFDC) Preview Dataset” <br>
[2] <a href="https://trac.ffmpeg.org/wiki/Encode/H.264">https://trac.ffmpeg.org/wiki/Encode/H.264</a> <br>
[3] Qizhe Xie, Minh-Thang Luong, Eduard Hovy, Quoc V. Le, “Self-training with Noisy Student improves ImageNet classification” <br>
[4] Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, Quoc V. Le, “AutoAugment: Learning Augmentation Policies from Data”</p> |
Deepfake Detection Challenge | 43rd place private LB solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Here is a brief description of the solution that led us to the 43rd place on the private LB!
The core idea is described in our paper "Video Face Manipulation Detection Through Ensemble of CNNs" available on <a href="https://arxiv.org/pdf/2004.07676.pdf">arXiv</a>. To reproduce the paper results, please refer to our <a href="https://github.com/polimi-ispl/icpr2020dfdc">GitHub repository</a>. Kaggle notebook for inference is also now available <a href="https://www.kaggle.com/nicobonne/43-rank-ispl-ensamble-10-model">here</a>.</p>
<h1>Model</h1>
<p>We started from EfficientNet-B4 and tweaked it a bit by adding an attention mechanism in the middle of its chain of convolutional blocks. We call it EfficientNetB4Att.</p>
<h1>Training</h1>
<p>We resorted to two different training paradigms, end-to-end and Siamese training with a triplet loss, both considering frames as samples.</p>
<p>We then trained 5 instances of each model in a 5-fold strategy based on folders. For each fold, we selected 40 consecutive folders for training and the remaining 10 for validation. There was no overlap between the folds.
By doing this we ended up with 10 models, 5 trained in an end-to-end fashion and 5 trained in a Siamese fashion.</p>
<p>During training, we considered only frames with one face, keeping the best face found by Blazeface.
As augmentation we used additive noise, changes of saturation and brightness, downscaling and JPEG
compression. We trained using Adam for at most 20k iterations (more details on this in the <a href="https://arxiv.org/pdf/2004.07676.pdf">paper</a>).</p>
<h1>Inference</h1>
<p>At inference time, we considered 72 frames per video and looked at all the faces found by Blazeface, keeping only those with a score above a certain threshold. In case a frame had more than one face above the threshold but with discordant scores, we took the maximum score among them. The rationale is that if we have multiple faces and just one face is fake, we want to classify the frame as fake. We then averaged the scores of all the networks and of all the frames of the video, and computed the sigmoid.
You can look at the inference code in detail <a href="https://www.kaggle.com/nicobonne/43-rank-ispl-ensamble-10-model">here</a>.</p>
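<p>A simplified sketch of this aggregation (illustrative; the full inference code is in the linked notebook):</p>
<pre><code>import numpy as np
from scipy.special import expit  # sigmoid

def video_score(frame_face_scores):
    # frame_face_scores: per-frame lists of raw scores, already averaged over the networks.
    frame_scores = [max(scores) for scores in frame_face_scores if scores]  # most "fake" face per frame
    return float(expit(np.mean(frame_scores)))
</code></pre>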
<p>What a journey! Big big thanks to all my teammates from <a href="http://ispl.deib.polimi.it/">Image and Sound Processing Lab (ISPL)</a> of Politecnico di Milano.
@unklb197 @edoardodanielecannas @saramandelli @bestagini</p> |