Columns: Title of Competition (string, 168 distinct values) · Title of Writeup (string, 9-139 characters) · User (string, 1 distinct value) · Writeup (string, 419-61.5k characters)
M5 Forecasting - Accuracy
59th place solution - there is nothing special
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to express my sincere gratitude to Kaggle, the University of Nicosia and Walmart for hosting the competition. This competition was really amazing and fun and I learned a lot.</p> <h2>Feature Engineering</h2> <p>I used the <a href="https://www.kaggle.com/kyakovlev/m5-simple-fe">M5 - Simple FE</a> datasets as a baseline and added the features below.</p> <ul> <li>the day before a holiday</li> <li>the day after a holiday</li> <li>lag features of sales - latest 28 days </li> <li>rolling window aggregation features of sales - sum, max, min and mean with rolling windows [7, 14, 30, 60]</li> <li>average sales of the same day of the week over the last 4 weeks (going back every 7 days)</li> <li>average sales of the same day of the week over the last 8 weeks (going back every 7 days)</li> <li>average sales of the same day of the week over the last 12 weeks (going back every 7 days)</li> </ul> <p>That's all. Quite simple.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1110061%2Ffcdc0559ad488bee843d3d3f683309f7%2Ffeature_importance.png?generation=1593670446657269&amp;alt=media" alt=""></p> <h2>Model</h2> <p>A single LightGBM model only.</p> <p>My PC is an ordinary machine with 16GB memory and an Intel i5-3470 3.2GHz CPU (no GPU). Because memory was insufficient I couldn't use the whole dataset, so I trained on March, April, May and June of 2011-2015 plus all of the 2016 data. I created one model for each of the 28 prediction days and each of the 3 categories (foods, hobbies and household), so in total I created 28*3=84 models.</p> <p>The final trained models included the validation period data, 4/25 (d1914) to 5/22 (d1941).</p> <h2>Ensembling</h2> <p>No ensemble. (single LightGBM model)</p> <h2>Post-Processing</h2> <p>I tried out-of-stock prediction by extracting items whose sales had stayed at 0 in all 10 stores (or 9 stores) up to the day before the forecast. I couldn't determine how long an out-of-stock period continues, so I gave up on out-of-stock prediction in general. There were still some out-of-stock items; I decided that FOODS_2_242 was out of stock, filled its sales with 0 and created a submission.</p> <p>With this post-processing, the score improved slightly from 0.58026 to 0.57987.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1110061%2F4470decff7c1a1cbf51da08e38aebb35%2FFOODS_2_242_lack.png?generation=1593670444594244&amp;alt=media" alt=""></p> <p>As you can see from my solution, there is nothing really special or new. I just followed Konstantin's ( <a href="/kyakovlev">@kyakovlev</a> ) kernels and discussions. It was something like standing on the shoulders of giants and reaching out a bit more.</p> <p>Thank you and congrats to all the winners!</p> <p>...</p> <p><a href="https://qiita.com/takaaki_inada/items/3f822737cf306a7bbce9">Solution summary in Japanese</a> </p>
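A minimal pandas sketch of how the lag, rolling-window and same-weekday features listed above can be built. It assumes a long-format frame with one row per (item, day) and `id`, `dayofweek` (from the calendar file) and `sales` columns; all names are illustrative, not the author's actual code.

```python
import pandas as pd

def add_sales_features(df: pd.DataFrame) -> pd.DataFrame:
    """Lag / rolling features per item; nothing inside the 28-day horizon is used."""
    grp = df.groupby("id")["sales"]

    # Lag feature: sales 28 days earlier (the length of the prediction horizon).
    df["lag_28"] = grp.shift(28)

    # Rolling aggregations of the 28-day-shifted series, computed per item.
    for w in (7, 14, 30, 60):
        for agg in ("sum", "max", "min", "mean"):
            df[f"roll_{agg}_{w}"] = grp.transform(
                lambda s, w=w, agg=agg: getattr(s.shift(28).rolling(w), agg)()
            )

    # Same-weekday averages over the last 4 / 8 / 12 weeks: grouping by (id, dayofweek)
    # turns "going back every 7 days" into a plain rolling mean; shift(4) = 28 days here.
    dow_grp = df.groupby(["id", "dayofweek"])["sales"]
    for weeks in (4, 8, 12):
        df[f"dow_mean_{weeks}w"] = dow_grp.transform(
            lambda s, k=weeks: s.shift(4).rolling(k).mean()
        )
    return df
```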
M5 Forecasting - Accuracy
[109th place] No Dark Magic, Just ML
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Calendar</strong> 1) Uniting event features, count encode them with rerank, add <em>days to event</em> feature and <em>next event</em> indicator; 2) Dropping date features like <em>month</em>, <em>year</em> (they are inconsistent between train/test); 3) Adding only <em>day of the month</em> and <em>day of the week</em> features.</p> <p><strong>Sell prices</strong> 1) The only good <em>sell price</em> features were <em>sell price max pct</em> ((<em>sell price cumax</em> - <em>sell price</em>) / <em>sell price cumax</em>) and <em>sell price min pct</em> ((<em>sell price</em> - <em>sell price cumin</em>) / <em>sell price</em>), but I decide to not include them in final solution (my other kernel with them did a little bit better on private in the end); 2) No other features were used.</p> <p><strong>Demand</strong> 1) <em>Cumulative (mean and low median)</em>, <em>rolling (mean week/month/two months)</em> and <em>exponentially weighted (mean)</em> features with 29 day shift; 2) Uniting all <em>snap</em> features into one bool feature; 3) CatBoost encoding for <em>dept</em> (w rerank), <em>store</em> (w rerank), <em>item</em>, OHE for <em>state</em> and <em>category</em>; 4) Adding <em>CPI</em> values (external data, used census region values) with 2 month shift (CPI is usually updated in the middle of the next month); 5) <em>Expected item revenue</em> (CatBoost encoded <em>item</em> multiplied by <em>sell price</em>) was the only revenue feature that worked for me; 6) Cutting first year from the dataframe (to save RAM and get rid of nans from shifts).</p> <p><strong>Models</strong> 1) Basic Time Train-Test split (full year of sales in valid right before test months); 2) <em>PassiveAgressiveRegressor</em> to get that yearly increasing trend and add ability to extrapolate. Trained with MSE, tuned on validation, then weights were updated with validation df; 3) <em>CatBoost</em> with a lot of tuned hyper-parameters and optimizations for time series. Trained with RMSE, then trained model was updated with validation df; 4) Weighted mean of "relu activated" predictions in the submission (to get rid of negative values that could come from LR).</p> <p><strong>What did not work for me</strong> 1) Sell price features (noisy, different distribution in train and test, except maybe two, see above); 2) Revenue features (noisy, except one, see above); 3) Moon age/index (tried 5 different approaches and after checking results with historical data find a lot of mistakes), other date features (different distribution in train and test); 4) Bunch of basic statistics calculated on rolling/ewm/cumulative demand (min, max, q1, q3, std and so on); 5) Day-by-day prediction - you will always have some error in your prediction, and with this approach it will only grow with each prediction since your new prediction rely on previous. I guess it could work but with some serious validation, postprocessing and feature penalties. 
6) Tagging new product or counting days of product being in sale (different distributions between train and test, also there are no new items in test); 7) Encoding year/quarter/month trends with mean/max demand; 8) Tagging and counting duplicates (it works but with good feature engineering you don't need it), tagging nans and outliers; 9) Gasoline price (raw and normalized by US price), number of employees (raw and normalized by state population). Bimodal (TX &amp; CA vs WI ) and inconsistent between train/test.</p> <p>Overall 108 kernel versions, 27 submissions, min private 0.59492, final solution private 0.60446, no multipliers or other cheeky stuff.</p> <p><strong>A bit altered code is <a href="https://www.kaggle.com/altprof/109th-place-code-only">here</a></strong></p>
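A small pandas sketch of the two sell-price features defined above (drawdown from the running maximum and rise above the running minimum). The frame layout and column names (`id`, `sell_price`) are assumptions, not the author's code.

```python
import pandas as pd

def add_price_pct_features(df: pd.DataFrame) -> pd.DataFrame:
    grp = df.groupby("id")["sell_price"]
    cumax = grp.cummax()   # highest price seen so far for this item
    cumin = grp.cummin()   # lowest price seen so far for this item
    # How far the current price sits below its historical maximum ...
    df["sell_price_max_pct"] = (cumax - df["sell_price"]) / cumax
    # ... and above its historical minimum.
    df["sell_price_min_pct"] = (df["sell_price"] - cumin) / df["sell_price"]
    return df
```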
Herbarium 2020 - FGVC7
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Herbarium 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Firstly, I would like to thank the host team and Kaggle for hosting such an interesting competition as well as my teammates (<a href="/dattran2346">@dattran2346</a>, <a href="/moewie94">@moewie94</a>) for their hard work and contributions.</p> <p><strong>Model Architecture</strong></p> <p>We used EfficientNet B3/4/5 (noisy student version) with original image sizes (320x320, 380x380, 456x456), Inception V4 and SEResNeXt50 with image size 448x448 as our backbones. For the EfficientNets, we first extracted feature maps from block 4, 5, and 6, forwarded through one-squeeze multi-excitation modules (<a href="https://eccv2018.org/openaccess/content_ECCV_2018/papers/Ming_Sun_Multi-Attention_Multi-Class_Constraint_ECCV_2018_paper.pdf">OSME</a>), before pooling and concatenating them into a "local" features vector. The ArcFace(s=16, m=0.1) loss + BNNeck combination was then used with "head" block features as input to help our models learn better "global" representation. This pipeline was trained jointly end-to-end with Cross Entropy losses (no label smoothing) and <a href="https://papers.nips.cc/paper/7344-maximum-entropy-fine-grained-classification.pdf">entropy losses</a>; at inference time, we averaged the 3 classifiers' logits (local classifier, ArcFace and BNNeck).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1134433%2Fb488481298423a5b4b3ecdbf549b1ad2%2Fmodel.png?generation=1590638315354457&amp;alt=media" alt=""></p> <p><strong>Training</strong></p> <p>We employed a two-stage training which is similar to 4th ranked team. In the first stage, only samples from the top 5000 most frequent categories were used. After obtaining a good model (measured in terms of validation f1-score), we employed the weight imprinting trick to initialize the classifiers' weights for the remaining 27093 categories then re-trained the entire network until convergence. Below are our training configs. + Stage 1: 75 epochs, batch size = 2048 (gradient accumulation), cosine scheduler with linear warmup and AdamW optimizer, balanced sampling (randomly sample 75 images per category per epoch). + Stage 2: Increase number of epochs to 105, balanced sampling (randomly sample 25 images per category per epoch). + Augmentations: <code>ResizeShorterSide(random.randint(img_size + 32, img_size + 128)), RandomCrop(img_size), RandAugment(num_ops=2, magnitude=random.randint(1, 11), RandomErasing()</code></p> <p><strong>Scores</strong> We reported some single model scores without any post-processing here + Inception V4 - Public 0.79712 - Private 0.77781 + B3 - Public 0.80570 - Private 0.78182 + B4 - Public 0.83014 - Private 0.81488 + B5 - Public 0.84774 - Private 0.83309</p> <p><strong>Ensemble</strong> We took the code and idea in this<a href="https://www.kaggle.com/mathormad/5th-place-solution-stacking-pipeline"> notebook</a> from last year RSNA challenge, modified them to build a simple 2-layer stacking network to combine predictions from our single models. The stacking ensemble obtained public score 0.85505 and private score 0.84180. 
</p> <pre><code>from collections import OrderedDict

import torch.nn as nn
import torch.nn.functional as F
from torch.cuda.amp import autocast

KERNEL_SIZE = 9
NUM_CHANNELS = 128

class StackingCNN(nn.Module):
    def __init__(self, num_models, num_channels=NUM_CHANNELS):
        super(StackingCNN, self).__init__()
        self.base = nn.Sequential(OrderedDict([
            ('conv1', nn.Conv2d(1, num_channels, kernel_size=(num_models, 1),
                                padding=(int((num_models - 1) / 2), 0))),
            ('relu1', nn.ReLU(inplace=True)),
            ('conv2', nn.Conv2d(num_channels, num_channels, kernel_size=(1, KERNEL_SIZE),
                                padding=(0, int((KERNEL_SIZE - 1) / 2)))),
            ('relu2', nn.ReLU(inplace=True))
        ]))

    def forward(self, images, labels=None):
        with autocast():
            x = self.base(images)
            x = x.permute(0, 3, 1, 2)
            x = fast_global_avg_pool_2d(x)   # global average pooling helper from our codebase
            if self.training:
                losses = {"avg_loss": F.cross_entropy(x, labels)}
                return losses
            else:
                outputs = {"logits": x}
                return outputs
</code></pre> <p><strong>Postprocessing</strong> We exploited the fact that the test set has the number of examples per species capped at a maximum of 10. For each species, we first sorted the predictions based on their top 1 confidence scores. For predictions with confidence scores outside the top 10, we changed them to the top 2 category instead. This simple trick gave a pretty consistent boost (0.2 - 0.3%) on the public LB for every single submission file. Our final submission scored 0.85797 on public and 0.84484 on private.</p>
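The stage-1 training above reaches a batch size of 2048 via gradient accumulation. Below is a minimal, generic accumulation loop showing the idea; `model`, `optimizer`, `loader`, `criterion` and the micro-batch size are placeholders, not the team's actual code.

```python
import torch

def train_one_epoch(model, optimizer, loader, criterion, device, accum_steps=16):
    # With a micro-batch of 128 and accum_steps=16, the effective batch size is 2048.
    model.train()
    optimizer.zero_grad()
    for step, (images, labels) in enumerate(loader):
        images, labels = images.to(device), labels.to(device)
        loss = criterion(model(images), labels) / accum_steps  # scale so gradients average correctly
        loss.backward()                                        # gradients accumulate across micro-batches
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```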
M5 Forecasting - Accuracy
[84th place] No multiplier
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners! Looking forward to the write up.</p> <p>In this competition, I relied solely on my CV and my best score private LB score corresponds to my best CV - no regrets, playing it conservatively with no multiplier paid off somewhat as it got me my second silver medal and expert status :)</p> <h2>CV setup</h2> <p>6 Time Series folds: 250 weeks of training + 4 weeks validation</p> <h2>Model</h2> <ul> <li>LGBM</li> <li>6x4 models: 6 folds x 1 model per 7 days so that for the first 7 days I have 6 folds, the next 7 days another 6 folds etc</li> <li>Tweedie loss (I tried to build my own tweedie loss with different rho for each product but could not make it work due to how LGBM computes the <a href="https://github.com/microsoft/LightGBM/issues/3155" target="_blank">log link</a>)</li> <li>Gradient using weights equal to the level 12 aggregation weights</li> <li>Early stopping using WRMSSE loss. Thanks <a href="https://www.kaggle.com/sibmike" target="_blank">@sibmike</a> for your <a href="https://www.kaggle.com/sibmike/fast-clear-wrmsse-18ms" target="_blank">notebook</a>!</li> </ul> <h2>Feature engineering</h2> <h3>Common ideas - can be seen in most public notebooks</h3> <ul> <li>Calendar features: day of week, month, events, snap day as 0/1</li> <li>Price: normalized price (normalization done for each product separately), price momentums (ratio of today's price and yesterday's price + ratio of today's price over mean of last 7 days)</li> <li>Rolling averages of sales over last 7/14/28/52 days</li> <li>Rolling averages of sales over last 3/5 same day of week</li> </ul> <h3>Other ideas</h3> <ul> <li>Forward and backward lags on events and snap days so that for every row, I have not only the event that happens in the current day but as well the events that happened or will happen over a period of 3 days =&gt; significant boost in CV (0.05)</li> <li>Out of stock handling: I created a small algorithm to identify if a zero is due to out of stock or if it is just a regular no sales day. The model was then trained with this information (all rows with a out of stock flag would have zero sales). The inference is done by assuming no out of stock day which obviously creates a bias that is corrected in the post processing, see below =&gt; 0.15 boost in CV. <a href="https://www.kaggle.com/ismailbbm/out-of-stock-detection" target="_blank">Notebook for reference.</a></li> </ul> <h3>Discarded ideas - no improvements on my CV</h3> <ul> <li>Standard deviation, kurt, skew of sales</li> <li>Mean encoding on item_id, dept_id…</li> <li>Store wise models</li> </ul> <h2>Post-processing</h2> <ul> <li>Out of stock correction: to correct for the bias introduced by this feature, I have to try to predict out of stock zeroes in the inference. I am doing so by looking at the patterns of consecutive zeroes of each id. For example if a product has almost never zero sales and then for 2 consecutive days, it has a zero, we can assume that this is due to out of stock and that it is likely that the next days are zeros as well. 
<a href="https://www.kaggle.com/ismailbbm/out-of-stock-detection" target="_blank">Notebook</a></li> <li>Very small predictions (less than 0.1) are transformed to zero. To compensate, predictions between 0.9 and 1 are rounded to 1 =&gt; 0.03 boost in CV</li> <li>Did not use any kind of arbitrary multiplier as none improved my CV</li> </ul> <p>Anyway, I enjoyed working on this time series problem and I learned a lot from the community! Hopefully I can give back a bit.</p>
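A minimal sketch of the threshold post-processing described above: very small predictions are zeroed and near-one predictions are rounded up to compensate. The thresholds follow the write-up; the function name is illustrative.

```python
import numpy as np

def round_predictions(preds: np.ndarray) -> np.ndarray:
    out = preds.copy()
    out[out < 0.1] = 0.0                      # treat tiny forecasts as "no sale"
    out[(out >= 0.9) & (out <= 1.0)] = 1.0    # compensate by rounding near-one values up
    return out
```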
M5 Forecasting - Accuracy
[112th Place] How I missed Rank 2 of M5-Accuracy by a factor!!!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, congratulations to all the Kagglers and toppers of the M5-Accuracy challenge. It was a wonderful journey where I got to learn a lot from fellow colleagues and Kagglers. The topics discussed and kernels shared gave a good understanding of the problem statement and the solutions proposed.</p> <p>From the start, it was clear that LGB was one of the best approaches to solve the problem. However, things were unclear regarding the use of WRMSSE, hierarchical modeling, and the need for a stable CV for the time series in order to get to the top of the LB. Hence, keeping the complexity of the problem in mind and, at the same time, trying to avoid overfitting, we used a standard and stable approach. We used LGB, with CV on future data (1.5 yrs), and retrained on the entire dataset with the learnt hyperparameters.</p> <p>It was surprising to see that this led us to Rank 112 in M5-Accuracy. However, soon after that, we learnt from the <a href="https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/163047">post</a> of <a href="/kyakovlev">@kyakovlev</a> that a scaling factor of 0.955 made all the difference on the private LB. After we applied this calibration to our submission file, it was even more shocking to learn that it gave a private LB score of 0.52123 ….. Rank 2.</p> <p>Well, to sum it up, the challenge reminded me once again that the learning is never over.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F649545%2F2036f33e81d9f2bf45576300bc5b4e41%2Fkaggle.PNG?generation=1593622286701781&amp;alt=media" alt=""></p>
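The calibration the author refers to is just a constant multiplier applied to every forecast in the submission. A minimal sketch, assuming the competition's standard submission layout (an `id` column plus `F1`-`F28` forecast columns); file names are illustrative.

```python
import pandas as pd

factor = 0.955                                               # scaling factor from the linked discussion
sub = pd.read_csv("submission.csv")
value_cols = [c for c in sub.columns if c.startswith("F")]   # F1 ... F28 forecast columns
sub[value_cols] = sub[value_cols] * factor
sub.to_csv("submission_scaled.csv", index=False)
```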
M5 Forecasting - Accuracy
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to appreciate the participants and organizers of the competition very much. I have learned a lot from this competition. My solution may not seem to be of much value, but I have gained a lot of insight from the community so I felt obligated to share the solution with you. (I especially learned a lot from @kyakovlev kernel. Thank you.)</p> <p>[Solution] - Model - LightGBM (single) - objective = tweedie - Validation - 5 holdout (d1578-d1605, d1830-d1857, d1858-d1885, d1886-d1913, d1914-d1941) - no early stopping - Model split - for each store - for each week - model w1 predicts F01, F02, ..., F07 - model w2 predicts F08, F09, ..., F14 - model w3 predicts F15, F16, ..., F21 - model w4 predicts F22, F23, ..., F28 - Features - General time-series features - General price features - General calendar features - No recursive features</p> <p>When the competition began, I first understood the data and evaluation metrics and created a baseline model. Then I realized that the validation scores varied significantly over the period of time. (I can't build a proper validation.) I was strongly convinced that this competition would be very difficult and that it would be impossible to build a model with high accuracy.</p> <p>So I decided to give up on trying to get a high ranking in the competition. Instead of it, I decided to try to build a "practical" solution. My strategy is as follows: - Not to use post processing, multiplier, and leaks - In practice, it is not possible to utilize such information, so I have decided that it should not be used. - Trust CV but Not care about official evaluation metrics (WRMSSE) - WRMSSE is evaluation metrics by the competition organizer, but in practice, I think WRMSSE is not always reasonable. Therefore I didn't make a custom loss function to avoid overfit this competition task itself. - Not to use complex and recursive features - Just use general features that can be applied to any practical tasks. - Recursive features lead to error accumulation - Not to use computation resources too much - Single model (LightGBM only, no stacking/blending) - Memory efficiency (model for each store and week)</p> <p>As I mentioned earlier, I was not aiming for the high ranking, so I was very surprised by this result (4th place). As you can see from the solution above, I'm not doing anything special. What was valuable to me was learning a lot about building a simple solution that can be widely used in practice. The ranking itself means absolutely nothing to me. (However, I feel very sorry for those of you who put in a great deal of effort to get to the top...)</p> <p>Either way, I have learned a lot from this competition and community. Thank you to all the kaggle community members.</p>
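A minimal sketch of the "one LightGBM per store per weekly horizon" split described above. `datasets[(store, week)]` is assumed to be a pre-built `lgb.Dataset` whose features only use information available at that horizon (no recursive features); the parameter values are illustrative, not the author's exact configuration.

```python
import lightgbm as lgb

PARAMS = {"objective": "tweedie", "tweedie_variance_power": 1.1,
          "metric": "rmse", "learning_rate": 0.03, "num_leaves": 127}

def train_store_week_models(datasets: dict) -> dict:
    models = {}
    for (store, week), dtrain in datasets.items():
        # week 1 predicts F01-F07, week 2 predicts F08-F14, and so on.
        models[(store, week)] = lgb.train(PARAMS, dtrain, num_boost_round=1000)
    return models
```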
M5 Forecasting - Accuracy
An article about my solution for this competition.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I am happy to share a few words about how I chose to handle the M5 competition by Kaggle.<br> It was fun to take part in this competition, and it was also fun to write about it :) ~!</p> <p>Also, I want to thank Kaggle for this competition :)<br> <a href="https://avivlevi-815.medium.com/my-solution-for-the-m5-kaggle-competition-top-14-8f004c92a51" target="_blank">https://avivlevi-815.medium.com/my-solution-for-the-m5-kaggle-competition-top-14-8f004c92a51</a></p>
M5 Forecasting - Accuracy
54th place solution [No lottery - just a baseline]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all I want to thank the whole Kaggle community: besides being able to learn lots of new stuff here in almost every competition, the community is very welcoming. </p> <p>In this post I want to share my approach; for me the result was not surprising.</p> <p>When the competition was first announced I was really motivated to invest a lot of time into it in order to get a high place, but I faced some challenges at work, so my start was postponed and I began working on the competition about 1 month before the deadline, when the public LB had already been published.</p> <p>Based on my CV and the many top-scoring public notebooks with custom multipliers, I decided to just build a good baseline. </p> <p><strong>Validation</strong> In my opinion it was the most important part.</p> <p>I validated my score on 3 folds:</p> <ul> <li>(2016-03-27;2016-04-24]</li> <li>(2016-04-24;2016-05-22]</li> <li>(2015-05-17;2015-06-14]</li> </ul> <p>In my 3rd validation fold I tried to choose approximately the same period of the year as the final evaluation part.</p> <p><strong>Pre-processing</strong> There were some outliers and spikes in each individual time series, so I did some smoothing: I used the MAD (median absolute deviation) of the last 180 days of an item to find outliers and replaced them with the value 7 days earlier - without this smoothing, rolling mean features which included holidays (like Christmas) didn't work properly.</p> <p><strong>Models</strong></p> <ul> <li><p>Standard LGBM with Tweedie loss (1 model for all dates) <code>params = { 'boosting_type': 'gbdt', 'objective': 'tweedie', 'tweedie_variance_power': 1.1, 'metric': 'rmse', 'subsample': 0.75, 'subsample_freq': 1, 'learning_rate': 0.03, 'num_leaves': 2**11-1, 'min_data_in_leaf': 2**12-1, 'feature_fraction': 0.7, 'max_bin': 100, 'n_estimators': 1400, 'boost_from_average': False, 'verbose': -1, }</code></p></li> <li><p>Wavenet with a GRU part</p></li> </ul> <p>The final submission was a weighted average of the two models with coefficients 0.8 and 0.2.</p> <p><strong>Target</strong> For my LGBM I didn't use <strong>Demand</strong> as a target - instead I used the <strong><em>difference</em></strong> between Demand and the Demand of 28 days ago - for a tree-based model it's a nice trick for working with trends. </p> <p><strong>Post-processing</strong> No custom coefficients :)</p> <p><strong>Features</strong> Pretty standard, most of them can be found in public notebooks. Ratios of rolling mean and median worked pretty well in my CV.</p> <p><code>data['ratio_mean_60'] = data['rolling_mean_t60'] / data['rolling_median_t60']</code></p> <p><strong>What didn't work</strong> - custom loss with WRMSSE gradient - out-of-stock prediction (spent lots of time on it - didn't find the right way) - mean encodings - hts method with upper-level correction - and lots of other ideas from previous competitions and public notebooks</p> <p>It was a really great journey working on this competition - I learned lots of tricks as usual; somehow this competition reminded me of the LANL competition with its huge shake-up and custom multipliers. </p> <p>Big congratulations to the winners and big thanks to @kyakovlev, @girmdshinsei and @sibmike, whose notebooks and ideas I included in my solution.</p>
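A small sketch of the MAD-based smoothing described in the pre-processing step above: values far from the rolling median (in MAD units) of the last 180 days are replaced with the value from 7 days earlier. The 3.5 cut-off is an assumption; the write-up does not give the exact threshold.

```python
import numpy as np
import pandas as pd

def smooth_series(sales: pd.Series, window: int = 180, k: float = 3.5) -> pd.Series:
    s = sales.copy()
    med = s.rolling(window, min_periods=30).median()
    mad = (s - med).abs().rolling(window, min_periods=30).median()
    outlier = (s - med).abs() > k * mad.replace(0, np.nan)   # avoid flagging when MAD is 0
    s[outlier] = s.shift(7)[outlier]                         # fall back to the same weekday a week earlier
    return s
```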
University of Liverpool - Ion Switching
12th solution: single model wavenet + lstm
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Firstly thanks a lot to my teammates and the great community, to all people who keep sharing, we have all learned a lot during this competition. (I haven't even heard wavenet before this, neither have I imagined to solve time-series problem using CNN)</p> <p>Yesterday at 2am the private LB was revealed and after that I was too excited. I just slept for 2 hours this morning even though today I have to work, that's for our one month's hard work and finally I got my first gold medal.</p> <h2>Some thoughts</h2> <ul> <li>We were basically focusing on doing extra work on the famous 0.944 wavenet RFC kernel.</li> <li>We found groupKFold CV of group size 4000 was consistent to the public LB, but not perfectly consistent. Then we found that the class 10 count in test set was very informative, we only submit when test class 10 count is over 7000, ideally about 7020. By doing this our CV strategy became quite consistent, this allowed us to do all kinds of experiments in local securely. Different batch size or group size didn't help.</li> <li>Our final two submissions are this one (highest CV score) and a vote of three submissions (highest LB score), this submission is finally also the highest on private LB, while the vote dropped violently. This in return supports the fact that our CV strategy worked well.</li> <li>Flipping helped, but adding gaussian noise didn't, we didn't try augmentation at test time.</li> <li>We didn't spend time on HMM modeling. </li> <li>Our best LGB model was 0.941 on public LB, as someone mentions in a post that RFC leaks because of the different CV split strategy, we tried to replace it by LGB/RFC/LSTM using the same groupKFold, but no luck.</li> </ul> <h2>Preprocessing</h2> <ul> <li>We used drift cleaned and kalman filtered signal</li> <li>We dropped 200,000 rows from 3,640,000 to 3,840,000 , as to remove the spike noises. That made no difference on public LB score.</li> </ul> <h2>Features</h2> <p>We tried a lot features but few of them work, then we decided to stick with model structure tuning, hoping it to find useful features by itself. Still, except the RFC probabilities, some of features worked, even not bringing big boost. 
So we used these features in this model: - RFC Probabilities - signal ** 2 - lag range 3 and -11, -15: [-15, -11, -3, -2, -1, 1, 2, 3] All features are created after standard scaling signal.</p> <h2>Model structure (Pytorch)</h2> <p>`class Wave_Block(nn.Module):</p> <pre><code>def __init__(self, in_channels, out_channels, dilation_rates, kernel_size): super(Wave_Block, self).__init__() self.num_rates = dilation_rates self.convs = nn.ModuleList() self.filter_convs = nn.ModuleList() self.gate_convs = nn.ModuleList() self.convs.append(nn.Conv1d(in_channels, out_channels, kernel_size=1)) dilation_rates = [2 ** i for i in range(dilation_rates)] for dilation_rate in dilation_rates: self.filter_convs.append( nn.Conv1d(out_channels, out_channels, kernel_size=kernel_size, padding=int((dilation_rate*(kernel_size-1))/2), dilation=dilation_rate, padding_mode='replicate')) self.gate_convs.append( nn.Conv1d(out_channels, out_channels, kernel_size=kernel_size, padding=int((dilation_rate*(kernel_size-1))/2), dilation=dilation_rate, padding_mode='replicate')) self.convs.append(nn.Conv1d(out_channels, out_channels, kernel_size=1)) def forward(self, x): x = self.convs[0](x) res = x for i in range(self.num_rates): x = torch.tanh(self.filter_convs[i](x)) * torch.sigmoid(self.gate_convs[i](x)) x = self.convs[i + 1](x) res = res + x return res </code></pre> <p>class SEModule(nn.Module):</p> <pre><code>def __init__(self, in_channels, reduction=2): super(SEModule, self).__init__() self.conv = nn.Conv1d(in_channels, in_channels, kernel_size=1, padding=0) def forward(self, x): s = F.adaptive_avg_pool1d(x, 1) s = self.conv(s) x *= torch.sigmoid(s) return x </code></pre> <p>class Classifier(nn.Module):</p> <pre><code>def __init__(self, inch=8, kernel_size=3): super().__init__() dropout_rate = 0.1 self.conv1d_1 = nn.Conv1d(inch, 32, kernel_size=1, stride=1, dilation=1, padding=0, padding_mode='replicate') self.batch_norm_conv_1 = nn.BatchNorm1d(32) self.dropout_conv_1 = nn.Dropout(dropout_rate) self.conv1d_2 = nn.Conv1d(inch+16+32+64+128, 32, kernel_size=1, stride=1, dilation=1, padding=0, padding_mode='replicate') self.batch_norm_conv_2 = nn.BatchNorm1d(32) self.dropout_conv_2 = nn.Dropout(dropout_rate) self.wave_block1 = Wave_Block(32, 16, 12, kernel_size) self.wave_block2 = Wave_Block(inch+16, 32, 8, kernel_size) self.wave_block3 = Wave_Block(inch+16+32, 64, 4, kernel_size) self.wave_block4 = Wave_Block(inch+16+32+64, 128, 1, kernel_size) self.se_module1 = SEModule(16) self.se_module2 = SEModule(32) self.se_module3 = SEModule(64) self.se_module4 = SEModule(128) self.batch_norm_1 = nn.BatchNorm1d(16) self.batch_norm_2 = nn.BatchNorm1d(32) self.batch_norm_3 = nn.BatchNorm1d(64) self.batch_norm_4 = nn.BatchNorm1d(128) self.dropout_1 = nn.Dropout(dropout_rate) self.dropout_2 = nn.Dropout(dropout_rate) self.dropout_3 = nn.Dropout(dropout_rate) self.dropout_4 = nn.Dropout(dropout_rate) self.lstm = nn.LSTM(inch+16+32+64, 32, 1, batch_first=True, bidirectional=True) self.fc0 = nn.Linear(64, 32) self.fc1 = nn.Linear(32, 11) self.fc2 = nn.Linear(32, 11) def forward(self, x): x = x.permute(0, 2, 1) x0 = self.conv1d_1(x) x0 = F.relu(x0) x0 = self.batch_norm_conv_1(x0) x0 = self.dropout_conv_1(x0) x1 = self.wave_block1(x0) x1 = self.batch_norm_1(x1) x1 = self.dropout_1(x1) x1 = self.se_module1(x1) x2_base = torch.cat([x1, x], dim=1) x2 = self.wave_block2(x2_base) x2 = self.batch_norm_2(x2) x2 = self.dropout_2(x2) x2 = self.se_module2(x2) x3_base = torch.cat([x2_base, x2], dim=1) x3 = self.wave_block3(x3_base) x3 = 
self.batch_norm_3(x3) x3 = self.dropout_3(x3) x3 = self.se_module3(x3) x4_base = torch.cat([x3_base, x3], dim=1) x4 = self.wave_block4(x4_base) x4 = self.batch_norm_4(x4) x4 = self.dropout_4(x4) x4 = self.se_module4(x4) x5_base = torch.cat([x4_base, x4], dim=1) x5 = self.conv1d_2(x5_base) x5 = F.relu(x5) x5 = self.batch_norm_conv_2(x5) x5 = self.dropout_conv_2(x5) lstm_out, _ = self.lstm(x4_base.permute(0, 2, 1)) out1 = self.fc0(lstm_out) out1 = self.fc1(out1) out2 = x5.permute(0, 2, 1) out2 = self.fc2(out2) return out1 * out2 ` </code></pre> <ul> <li>Wave block is the same in public kernel. The concatenation after se module in each block gave us a boost, so does the lstm layer.</li> <li>we've also tried multi-task learning, using open_channels both as classes and as continuous values, but no luck with this model. Interestingly we have another LSTM model without wavenet, using this on that model gave us a huge boost of 0.002 on public LB.</li> <li>tried 1d pooling encoder decoder, didn't help</li> </ul> <h2>About model training</h2> <p>We were using CrossEntropyLoss as loss function, we did try other functions like focal loss and ohem loss, but no luck. Learning rate is 0.001, tried weight_decay but didn't help, batch size is 16. This part I believe helped a lot, as CE loss is not quite consistent with macro f1 score : - save model weight each time macro F1 score improves - after 10 epochs that macro F1 doesn't improve, and CE loss doesn't reduce, learning rate /= 2, and load lastly saved model (best macro F1 so far) - after 20 epochs that macro F1 doesn't improve, and CE loss doesn't reduce, early stopping - either macro f1 improves or CE loss reduces, epoch counter = 0 We believe this will give the model more chances to optimize F1 score, and we retrain a fold if its score is relatively lower than before, until it can give us satisfying score (usually 1 to 2 times). 
Then the training procedure will become like this: (red means F1 score improves, green means CE loss) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F456596%2F330ffd69a2072c802a3f85ee9cf66735%2FWX20200526-205151.png?generation=1590519148879009&amp;alt=media" alt=""></p> <p>Finally this model got 0.94365 on local CV, 0.94545 on public LB and 0.94513 on private LB.</p> <p><strong>Edit 1:</strong> I've seen that some people are interested in the LSTM model, so I can show you this simple multi-task learning model, even though there's no magic in it.</p> <p>`class Classifier(nn.Module):</p> <pre><code>def __init__(self, inch=8): super().__init__() self.conv_7 = nn.Conv1d(inch, 12, kernel_size=7, stride=1, padding=3, padding_mode='replicate') self.conv_5 = nn.Conv1d(inch, 12, kernel_size=5, stride=1, padding=2, padding_mode='replicate') self.conv_3 = nn.Conv1d(inch, 12, kernel_size=3, stride=1, padding=1, padding_mode='replicate') self.lstm = nn.LSTM(48, 45, 2, batch_first=True, bidirectional=True) self.linear_1 = nn.Linear(90, 45) self.linear_2 = nn.Linear(45, 22) self.out_cls = nn.Linear(22, 11) self.out_reg = nn.Linear(22, 1) for l in [self.linear_1, self.linear_2, self.out_cls, self.out_reg]: nn.init.kaiming_normal_(l.weight.data) def forward(self, x): x = x.permute(0, 2, 1) conv_out_7 = self.conv_7(x) conv_out_5 = self.conv_5(x) conv_out_3 = self.conv_3(x) concated_out = torch.cat([x, conv_out_3, conv_out_5, conv_out_7], 1) lstm_out, _ = self.lstm(concated_out.permute(0, 2, 1)) out_tmp_1 = F.relu(self.linear_1(lstm_out)) out_tmp_2 = F.relu(self.linear_2(out_tmp_1)) y_pred_cls = self.out_cls(out_tmp_2) y_pred_reg = self.out_reg(out_tmp_2) return y_pred_cls, y_pred_reg` </code></pre> <p>I use CrossEntropyLoss for the classification output and MSELoss for the regression output. This model can achieve 0.94487 on the public LB, but only 0.94388 on the private LB</p>
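A sketch of the training control logic described above: checkpoint on every new best macro F1, halve the learning rate and reload the best checkpoint after 10 epochs with neither F1 nor CE loss improving, and stop after 20. `train_one_epoch` and `evaluate` (returning macro F1 and CE loss) are placeholders supplied by the caller, not the team's actual functions; this is only an illustrative reading of the steps.

```python
import copy

def fit(model, optimizer, train_one_epoch, evaluate, max_epochs=200):
    best_state = copy.deepcopy(model.state_dict())
    best_f1, best_loss, stale = 0.0, float("inf"), 0
    for _ in range(max_epochs):
        train_one_epoch(model, optimizer)
        f1, ce_loss = evaluate(model)
        improved = False
        if f1 > best_f1:                       # save weights every time macro F1 improves
            best_f1, best_state, improved = f1, copy.deepcopy(model.state_dict()), True
        if ce_loss < best_loss:
            best_loss, improved = ce_loss, True
        stale = 0 if improved else stale + 1   # either metric moving resets the counter
        if stale == 10:                        # plateau: halve LR, restart from best-F1 weights
            for group in optimizer.param_groups:
                group["lr"] /= 2
            model.load_state_dict(best_state)
        elif stale >= 20:                      # long plateau: early stopping
            break
    model.load_state_dict(best_state)
    return model
```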
M5 Forecasting - Accuracy
Silver Medal Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This competition is a weird one. I have seen kernels that baffle me, such as picking random samples as validation, using magic multipliers, etc.. I said screw it and trained a LightGBM with these params:</p> <p><code> params = { 'objective': 'poisson', 'num_iterations': 700, 'num_leaves': 80, <br> 'min_data_in_leaf': 100, <br> 'metric': 'rmse', 'learning_rate': 0.1, 'feature_fraction': 0.65, 'bagging_fraction': 0.87, 'bagging_freq': 19, 'num_threads': 16, 'seed': 69 } </code></p> <p>I used all labelled data to train (without train-test split).</p> <p>That's it.</p> <p>Of course there are much better solutions and I can't wait to see them, but I think my solution shows that it is important to not overfit, and you definitely don't need to use any magic multipliers =).</p>
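A minimal sketch of training a single LightGBM model with the parameters quoted above on all labelled data, with no train-test split. `X` and `y` are placeholders for the prepared feature matrix and unit-sales target.

```python
import lightgbm as lgb

params = {
    "objective": "poisson", "num_iterations": 700, "num_leaves": 80,
    "min_data_in_leaf": 100, "metric": "rmse", "learning_rate": 0.1,
    "feature_fraction": 0.65, "bagging_fraction": 0.87, "bagging_freq": 19,
    "num_threads": 16, "seed": 69,
}

def train_full(X, y):
    dtrain = lgb.Dataset(X, label=y)
    # Pass the iteration count explicitly instead of leaving it inside params.
    return lgb.train({k: v for k, v in params.items() if k != "num_iterations"},
                     dtrain, num_boost_round=params["num_iterations"])
```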
Plant Pathology 2020 - FGVC7
7th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My architecture for this competition was <a href="https://github.com/mrT23/TResNet">TResNet</a>, which provides a very good speed-accuracy-batch_size tradeoff on GPUs. For example, I was able to train at input resolution 600x600 with a batch size of 64 and a training rate of 90 img/sec, which enabled me to experiment quickly and efficiently.</p> <ul> <li>As a loss function, I used <a href="https://arxiv.org/pdf/1910.07038.pdf">soft triplet loss</a> + cross entropy.</li> <li>I used squish cropping with <a href="https://docs.fast.ai/vision.transform.html#_dihedral">8-rotation</a> augmentation, both for training and for inference.</li> <li>I used class balancing to give more weight to class 1. Without class balancing, I could not learn that specific class properly, and the competition loss metric would punish me greatly.</li> <li>I used pseudo-labeling to enrich the ultra-small train set a bit (added about 60% of the test set to the train set).</li> <li>I used EMA to shorten the training even further and provide better results.</li> <li>My ensemble was composed of different input resolutions and different augmentation regimes.</li> </ul> <p>P.S. 1: "Do not trust the LB, trust your CV"? Everyone on every Kaggle forum says that, but in hindsight there was a strong correlation for me between private and public scores. If I had chosen the top public score as my submission, I would have ended up in 2nd place :-)</p> <p>P.S. 2: the test set was way too small, and just injected randomness and luck into the final positions.</p>
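A minimal PyTorch sketch of the class balancing mentioned above: weight the cross-entropy loss by inverse class frequency so the rare class is not ignored. It assumes every class appears at least once in `labels`; this is a generic illustration, not the author's implementation (which combined it with a soft triplet loss).

```python
import torch
import torch.nn as nn

def balanced_cross_entropy(labels: torch.Tensor) -> nn.CrossEntropyLoss:
    counts = torch.bincount(labels).float()            # per-class sample counts from the train labels
    weights = counts.sum() / (len(counts) * counts)    # inverse-frequency weights, mean weight ~ 1
    return nn.CrossEntropyLoss(weight=weights)
```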
Plant Pathology 2020 - FGVC7
very simple 46th solution + potential 17th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Solution</strong></p> <p><strong>Model 1 (Tensorflow TPU)</strong>: <a href="https://www.kaggle.com/apthagowda/plant-pathology-2020-tenserflow-tpu">Kernel</a> EfficientNet B7 (imagenet), 800x800, tf.image (Hflip, Vflip, Rot90), cosine schedule with warmup (warmup = 15, epochs = 40, lr = 0.0008), TTA 4 [ran the same notebook 3 times and ensembled the output]</p> <p><strong>Model 2 (Pytorch GPU)</strong>: <a href="https://www.kaggle.com/apthagowda/plant-pathology-2020-pytorch-gpu">Kernel</a> EfficientNet B5 (imagenet), 520x520, albumentations (Hflip, Vflip, Rotate 90), cosine schedule with warmup (warmup = 15, epochs = 35, lr = 0.001), TTA 4</p> <p><strong>Model 3 (Tensorflow TPU)</strong>: <a href="https://www.kaggle.com/apthagowda/plant-pathology-2020-tenserflow-tpu?scriptVersionId=33622663">Kernel</a> EfficientNet B7 (noisy student), 800x800, tf.image (Hflip, Vflip, Rot90), cosine schedule with warmup (warmup = 15, epochs = 35, lr = 0.0008), no TTA</p> <p><strong>Ensemble</strong>:</p> <p>(Model1 + (Model2 + Model3)/2)/2: Public LB 0.984, Private LB 0.97541. (Model1 + Model2 + Model3)/3: Public LB 0.983, Private LB 0.97749 (I didn't select this one).</p> <p>Thanks everyone for the public kernels and discussions. Learnt a lot in this competition. Cheers</p>
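A tiny sketch of the two ensembles compared above, assuming each model's output is an (n_samples, 4) array of class probabilities; the nested average effectively weights the models 0.5 / 0.25 / 0.25.

```python
import numpy as np

def ensemble(p1: np.ndarray, p2: np.ndarray, p3: np.ndarray, nested: bool = True) -> np.ndarray:
    if nested:
        # (Model1 + (Model2 + Model3) / 2) / 2  -> the selected submission
        return (p1 + (p2 + p3) / 2) / 2
    # Plain average -> the variant that scored better on the private LB
    return (p1 + p2 + p3) / 3
```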
Abstraction and Reasoning Challenge
21st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Abstraction and Reasoning Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all thank you so much François Chollet and kaggle team for organizing this awesome competition. It has been really fun and creative competition for me. I have been getting score 1 for a month lol.</p> <p>My solution was similar to DSL methods we have been discussed. All the training code and test/evaluation code was processed by python methods which has the following interfaces.</p> <ul> <li><p>def converter(inp, objects, c) returns objects</p> <ul><li>inp: The original input from the dataset</li> <li>objects: processed input data representation with some metadata. eg) rectangle info</li> <li>c: Context object which is shared between training and testing. eg) Colors observed only in output. output shape information.</li></ul></li> <li><p>def merger(objects, c):</p> <ul><li>objects: processed input data</li> <li>c: the context object</li></ul></li> </ul> <p>I have more than 100 converter methods and 10 merger methods defined.</p> <p>For example one task might be solved by - [converter] identify_objects - Convert input array into rectangle pieces as objects - [converter] color_with_out_color - Color all the found objects with color which appears only in output - [merger] simple_merge - Merge all the object into one array.</p> <p>I have rotation, fill, flip, move, draw_line and many types of converters and or_merge, and_merge, select_biggest_object and more mergers.</p> <p>That's about it. I'm looking forward to seeing more creative ideas and solutions than mine :)</p> <p>One more note for the first month, CNN based image generation model. But it didn't work as we have so few training data.</p> <h2>Real code for identify objects</h2> <p>``` def identify_rec2(y, x, a, visited, c, rect):</p> <pre><code>if visited[y + 1][x + 1]: return visited[y + 1][x + 1] = True if a[y][x] == c.bg_color: return if y &lt; rect.y1: rect.y1 = y if y &gt; rect.y2: rect.y2 = y if x &lt; rect.x1: rect.x1 = x if x &gt; rect.x2: rect.x2 = x identify_rec2(y + 1, x + 1, a, visited, c, rect) identify_rec2(y + 1, x, a, visited, c, rect) identify_rec2(y, x + 1, a, visited, c, rect) identify_rec2(y - 1, x - 1, a, visited, c, rect) identify_rec2(y + 1, x - 1, a, visited, c, rect) identify_rec2(y - 1, x + 1, a, visited, c, rect) identify_rec2(y - 1, x, a, visited, c, rect) identify_rec2(y, x - 1, a, visited, c, rect) </code></pre> <p>def identify_greedy(inp, objects, c): if len(objects) != 1: raise SkipException <br> return objects a = objects[0].data height, width = a.shape if height &gt; 30 and width &gt; 30: raise SkipException <br> return objects visited = init_visited(height, width) objs = [] for y in range(height): for x in range(width): if visited[y + 1][x + 1]: continue if a[y][x] == c.bg_color: visited[y + 1][x + 1] = True continue if not visited[y + 1][x + 1]: rec = Rect(y,y, x, x) identify_rec2(y, x, a, visited, c, rec) obj = Object(a) obj.rectangle = (rec.y1, rec.y2, rec.x1, rec.x2) objs.append(obj) if len(objs) == 0: raise SkipException <br> return objects return objs </p> <p>```</p>
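A minimal sketch of how a converter/merger pipeline of the kind described above can be wired together: converters transform a list of objects (sharing a context of train-time observations), and a merger turns the final objects back into a grid. All names here are illustrative, not the author's actual DSL.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    bg_color: int = 0
    out_colors: set = field(default_factory=set)   # colours seen only in training outputs

def solve(inp, converters, merger, context):
    objects = [inp]                      # start from the raw grid as a single object
    for convert in converters:
        objects = convert(inp, objects, context)   # converter(inp, objects, c) -> objects
    return merger(objects, context)                # merger(objects, c) -> output grid

# Example pipeline for the task described above (names hypothetical):
#   solve(grid, [identify_objects, color_with_out_color], simple_merge, ctx)
```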
iWildCam 2020 - FGVC7
LB #4 Documentation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iWildCam 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our final solution is based on a simple InceptionV3 model pre-trained on Imagenet. This model was trained on the camera trap images whose animals were cropped out using the provided Megadetector bounding boxes as reference. Then, we choose the model that reached the best performance on a validation set, which was split from the training set taking into account the locations. The final prediction assigned for each image is obtained by using the majority voting among images within the same sequence. Due to the fact that the provided sequence data is noisy, we have used location and timestamp information to determine which images should belong to the same sequence.</p> <h3>Dataset splitting</h3> <p>Our validation set has approximately 11% of the original images of the training set. We have used a heuristic to try to keep both species and background diversity in the training and validation sets, described as follows. First, all the locations where the number of instances represents more than 1% of the total are added to the training set. This represents around 50% of the original training set. Then, for each class we randomly select locations to compose the validation set, one at once, until at least 5% of instances of the current class are added to the validation set. After that, the remaining locations available for that class are used to compose the new training set. Classes appearing on less than 6 locations or with less than 30 instances were ignored during the splitting process. The classes 'empty', 'start', 'end', 'unknown', and 'unidentifiable' were ignored as well. This heuristic does not produce splits following the original class distribution. However, we do not expect the test set to follow exactly the same distribution either. It also leads many classes not appearing in the validation set, especially those classes with fewer instances. Nonetheless, we have only removed classes that appear in the validation set but no in the training set. At the end, the training set was composed of 211 classes from 275 locations and the validation set was composed of 114 classes from 49 locations.</p> <h3>Megadetector bounding boxes</h3> <p>We have used the Megadetector results to crop images out before training. As the image resolutions vary along the dataset, we decided to use a square crop around the bounding box generated by Megadetector. The square size is the same as the largest dimension of the bounding box. Additionally, we have limited the square crop to have at least 450x450 of resolution and the maximum of the height of each image. In addition, the square crop will always keep the bounding box area centered unless the crop gets out the image. In those cases, we move the crop to get valid pixels and the animal is no more centered. For the images without a bounding box, we just crop a centered square with the image height. This procedure was applied on all images before training, including images from the test set.</p> <h3>Model training</h3> <p>We have used an InceptionV3 model pre-trained on Imagenet with input size 299x299, which was trained on our training subset for 48 epochs. 
The best accuracy model among all epochs was selected as the final model. Data augmentation was used during the training, using the following transformations applied to the input images: zoom, rotation, horizontal flip, shearing, horizontal and vertical shifts. The model was trained using the SGD optimizer with momentum of 0.9 and a simple learning rate schedule based on epochs, similar to the procedure employed in the paper “Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning”. The learning rate schedule was 1-5: 0.005, 6-10: 0.001, 11-20: 0.005, 21-27: 0.001, 28-35: 0.0005, 36-onwards: 0.0001.</p> <h3>Majority voting among images from the same sequence</h3> <p>To determine the final prediction for each image, we first check for empty images. If our model predicts “empty” and Megadetector does not provide bounding box, the class “empty” is assigned to the input image. Otherwise, majority vote is calculated considering all images from the same sequence, excluding the ones already marked as empty. Images are considered as belonging to the same sequence when they are from the same location and the difference between their timestamps is at most 30 minutes.</p> <h3>What we would like to have tested</h3> <p><strong>Big Transfer (BiT)</strong>. This new model seems promising for transfer learning, requiring fewer instances per class to reach high performance. Unfortunately, we were unaware of it until few days before the deadline of the competition. Consequently, we weren’t able to test it in time.</p> <p><strong>A model employing all images from the same sequence to identify animal species</strong>. Depending on the position of the animal on the scene, it may be very difficult even for experts to identify the species. However, if the same individual can be identified with high confidence on another image of the same capture event, this classification can be used for the whole event. An approach that uses this temporal information is described in the paper “Context R-CNN: Long Term Temporal Context for Per-Camera Object Detection”, but we didn’t try it.</p> <p><strong>MegadetectorV4 bounding boxes</strong>. We would like to have tested the new version of the Megadetector. We think this new version could have generated bounding boxes more accurately. It also includes a new vehicle class, which could have improved our results for this class.</p>
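A pandas sketch of the sequence reconstruction and majority vote described above. It assumes one row per image with `location`, `timestamp` (datetime), `pred` (the per-image class) and an `is_empty` flag from the empty-image check; column names and the gap-based grouping are an illustrative reading of the write-up, not the team's code.

```python
import pandas as pd

def sequence_vote(df: pd.DataFrame, gap_minutes: int = 30) -> pd.Series:
    df = df.sort_values(["location", "timestamp"]).copy()
    # A new sequence starts when the location changes or the time gap exceeds the threshold.
    gap = df.groupby("location")["timestamp"].diff() > pd.Timedelta(minutes=gap_minutes)
    new_loc = df["location"] != df["location"].shift()
    df["seq_id"] = (gap | new_loc).cumsum()

    # Majority vote over non-empty images of each sequence; empty images keep their label.
    majority = (df[~df["is_empty"]].groupby("seq_id")["pred"]
                  .agg(lambda s: s.mode().iloc[0]))
    voted = df["seq_id"].map(majority)
    return voted.where(~df["is_empty"] & voted.notna(), df["pred"])
```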
Herbarium 2020 - FGVC7
2nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Herbarium 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all special thanks to my teammate Tal :)</p> <h2>Backbone:</h2> <p>As a backbone, we used TResNet architecture (High Performance GPU-Dedicated Architecture) which allowed us to do fast training with large batches, while maintaining top scores along all resolutions (more details in our <a href="https://arxiv.org/pdf/2003.13630.pdf">paper</a> and code <a href="https://github.com/mrT23/TResNet">here</a>). Models used in this competition: TResNet-M, TResNet-L.</p> <h2>BottleNeck Head:</h2> <p>We avoided having a massive classification layer: 2048x32094 (65.72M params), by using a bottleneck head that reduces the embedding feature vector from 2048 dims to 512 dims. total parameters= 2048x512 + 512x32094 (17.48M params). This modification not only decreased the size of the network, but allowed us to use samples in larger batch sizes (Faster Training and helps the Soft-Triplet loss). <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1578028%2Fd861d950b9f015672631c7b59815e5a5%2FBasicBlcok_Tresnet%20(1" alt="">%20copy.png?generation=1590586933112476&amp;alt=media)</p> <h2>Loss function</h2> <p>We used Cross-Entropy loss with label smoothing 0.2, and in addition we added Soft-Triplet Loss with Soft Margin which emphasizes hard examples by focusing of the most difficult positives and negatives samples in the batch, for more details check out our ReID <a href="https://arxiv.org/pdf/1910.07038.pdf">paper</a> Section 3.1. Online-Hard Negative mining works best when training with large batch size (more negative examples); hence, we leveraged the high memory utilization of our Network and the bottleneck head. for example: with TResNet-M @ 448x448, we could reach maximum batch size of 128 on a single V100 16GB, significantly higher than other common networks like EfficientNet and ResNext.</p> <h2>Class Balancing</h2> <p>Herbarium 2020 dataset suffers from prominent class im-balance, therefore we used a simple loss weighting, multiplying by the inverse of class frequency. Note: We used class balancing only in the last 15~ epochs of the training, to enable the network to learn the basic features first. 
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1578028%2F332b1711fa4eefa0b2de9984d491742d%2FScreen%20Shot%202020-05-27%20at%2016.46.38.png?generation=1590587199078597&amp;alt=media" alt=""></p> <h2>Augmentations</h2> <p>• Squish instead of Crop • FlipLR • Cutout: 0.5</p> Special augmentation scheme (+~0.5% on public LB): <ul> <li>Random Zoom-in (p=0.4, zoom_factor= random between 1-1.15)</li> <li>Reduced affinity transformations: p_shear: 0.1, p_rotate=0.2, rotate_degrees=10</li> <li>Decolorize: p_decolorize: 0.4</li> <li>Rectangular resizing: small gains <h3>TTA</h3></li> </ul> <p>Averaging horizontal flip predictions</p> <h2>Our best single model:</h2> <p>Model: TResNet-L</p> <p><strong>Stage 1:</strong> 100 epochs at 448x448, CE+SoftTriplet (class-balancing starting from epoch 85) Public LB score (without post-processing): 0.84089 Public LB score (with post-processing P1+P2): 0.85093</p> <p><strong>Stage 2 (high resolution fine-tuning):</strong> 10 epoch fine-tune + class balancing on 544x416 Public LB score (without post-processing): 0.84727 Public LB score (with post-processing P1+P2): 0.85604</p> <h2>Post-Process (+~0.9% on the pubic LB)</h2> <p>The origanizers wrote in the description that: "Each category has at least 1 instance in both the training and test datasets."</p> <p>Based on this prior knowledge we used two post-processing algorithms that tries to fill some of the empty classes.</p> <p><strong>P1: Switching classes based on low top-1 confidence (~0.9% on the public LB)</strong> the idea is basically whenever the network is not confident about its predictions, and if one of the empty classes is close to top1 prediction, then switch predictions: 1) Go through all the top 2 predictions in the test set and if all these criteria are met switch between top1 and top2: • top2 &gt; top1 * m (m=0.3) • top2 class never appeared in the top1 predictions • top1 class appeared at least once as top2 prediction for another sample image in the test set. 2) Repeat the same switching algorithm above for top 3, 4 and 5 (performed sequentially, first perform all of top2 switches and then top3 switches and so on).</p> <p><strong>P2: Filling empty classes (~+0.1-0.2% on the public LB)</strong> 1) Go through the predictions after the first post-process, and for all classes that don't appear in the top1 predictions and find the image sample with the highest prediction from all of the top5 predictions of the test set.</p> <h2>Pre-trained model</h2> <p>We used pre-trains from ImageNet and iNaturalist 2018. we got almost similar scores, with iNat18 having slightly better scores on the public LB. </p> <h2>Ensemble (+0.8% on public LB)</h2> <p>We used an ensemble of 9 models trained from different pretrains (imagenet and iNaturalist2018) and different resolutions, final resolutions: 544x416 mostly and some 620x476. Public LB score (with post-processing P1+P2): 0.86310</p> <h2>What didn't work for us (relevant to this competition only):</h2> <ul> <li>Fixed cropping by 10-20%</li> <li>Pseudo-labeling</li> <li>Mixup</li> <li>Focal loss</li> </ul>
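A rough sketch of post-process P1 described above: when the model is unsure, switch the prediction to a nearby class that never appears as a top-1 prediction anywhere in the test set. `probs` is assumed to be an (n_samples, n_classes) score matrix; the margin m follows the write-up, and the bookkeeping is an illustrative reading of the steps, not the team's code.

```python
import numpy as np

def switch_empty_classes(probs: np.ndarray, m: float = 0.3, max_rank: int = 5) -> np.ndarray:
    preds = probs.argmax(1)
    order = np.argsort(-probs, axis=1)                   # classes sorted by score per sample
    for rank in range(1, max_rank):                      # rank 1 = top2, 2 = top3, ...
        top1_classes = set(preds.tolist())
        candidate_classes = set(order[:, rank].tolist())
        for i in range(len(probs)):
            c1, ck = preds[i], order[i, rank]
            if (probs[i, ck] > probs[i, c1] * m          # candidate is close enough to top1
                    and ck not in top1_classes           # candidate class is currently empty
                    and c1 in candidate_classes):        # current class shows up lower down elsewhere
                preds[i] = ck
                top1_classes.add(ck)                     # the class is no longer empty
    return preds
```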
COVID19 Local US-CA Forecasting (Week 1)
Top 4%-6th place solution Notebook
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Local US-CA Forecasting (Week 1) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi All, I am sharing this notebook in which I have been able to achieve (6th place/top 4%) after putting so much effort. Please check &amp; upvote if you find this notebook useful. Also please suggest your observations for further improvement. <a href="https://www.kaggle.com/mdmahmudferdous/covid-19-us-ca-forecasting-top-4-notebook-7th">https://www.kaggle.com/mdmahmudferdous/covid-19-us-ca-forecasting-top-4-notebook-7th</a> Thanks. </p>
Plant Pathology 2020 - FGVC7
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to everyone. I learned a lot and had fun. Thank you so much to Kaggle and the competition organizers for a really interesting and relevant problem. I attempted this Kaggle competition after titanic and mnist (though I read a lot of code from Flowers TPU).</p> <p>I think my solution is simple, much of the interesting stuff I tried didn't work sadly.</p> <p>Also would like to mention some Kagglers for helping me, my success is their success: <a href="https://www.kaggle.com/anyexiezouqu" target="_blank">@anyexiezouqu</a> for efficientnet b7 and incepresnetv2 ensemble at 800 image size. <a href="https://www.kaggle.com/ateplyuk" target="_blank">@ateplyuk</a> for efficientnet b7 on TPU. <a href="https://www.kaggle.com/mgornergoogle" target="_blank">@mgornergoogle</a> for multiple TPU tutorials and TPU troubleshooting posts I found during debugging.</p> <p>Submission 1: private score .978, public score .982 (placed 4th)<br> Submission 2: private score .980, public score .978 (hypothetically ~20th)</p> <p>I'll focus on the first submission here.<br> model1 = b7<br> model2 = (b6+incepresnetv2)/2<br> Final model = (model1 + model2) / 2</p> <h2><strong>What worked:</strong></h2> <ul> <li>Increasing the image size (I used 896 or 800 before memory limitations). If you use 896 you need to drop the batch size.</li> <li>Using noisystudent weights instead of imagenet</li> <li>drop val to train on more data (fairly standard practice)</li> <li>random flip LR augmentation</li> <li>random flip up/down augmentation</li> </ul> <h2><strong>No change or slight improvement:</strong></h2> <ul> <li>Use b6 instead of b7. With (b7 + incepresnetv2)/2 I got .975 LB, and also many LB scores lower than that. With (b6 + incepresnetv2)/2 I got .975 LB on the first try. So needs more data but maybe it improves. Though CV was not representative of LB for me, It was clear from model's confidence I was still overfitting. Also others used b5 and b6 successfully on LB so these 3 factors contributed to me bumping down to b6. I also wanted to try a b5 but not enough time.</li> </ul> <h2><strong>No change or slightly worse:</strong></h2> <ul> <li>Random cropping size 896 down to size 800. Not sure why this didn't improve the score, it's possible the pretrained networks are already quite translationally invariant.</li> <li>random rotation</li> <li>TTA with same, flip lr, flip down, flip lr flip down.</li> <li>Saturation (I might have done too much, but it didn't help)</li> </ul> <h2><strong>Implementation did not work:</strong></h2> <ul> <li>Gaussian blur on images (too slow), though I suspect it might have been useful if done carefully because of blurry/out of focus images.</li> <li>L2 regularization (seemed to make CV better but LB worse). After discovering this <a href="https://www.hds.utc.fr/~fdavoine/dokuwiki/_media/en/pr2020.pdf" target="_blank">I found a paper</a> saying, for transfer learning, to make weights approach imagenet initialization versus 1 (L2) or 0 (L1). Didn't try this yet.</li> </ul> <p>I think the theme of this competition for me was trying many techniques I thought would help, a few did but many didn't. 
So my model was quite simple in the end.</p> <p>Thanks, and I hope to see you all on Kaggle in the future.</p>
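<p>As a small illustration of the ensembling used for the first submission above, the final prediction is just a nested average of the per-model class probabilities. The sketch below uses random placeholder arrays; in practice each array would hold one model's softmax outputs on the test set.</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
n_images, n_classes = 100, 4   # toy sizes

# Placeholder class probabilities for the three trained networks
b7_probs = rng.random((n_images, n_classes))
b6_probs = rng.random((n_images, n_classes))
incepresnetv2_probs = rng.random((n_images, n_classes))

model1 = b7_probs
model2 = (b6_probs + incepresnetv2_probs) / 2.0
final = (model1 + model2) / 2.0        # the nested average described above
pred_labels = final.argmax(axis=1)
</code></pre>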
M5 Forecasting - Accuracy
[64th Place] A simple low computation model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hey Kagglers!</p> <p>First off, I want to thank the whole community for sharing their findings; this is a great platform to learn because of the people! </p> <p>Secondly, I was shocked to have gotten such a high placement; I know I don't have anywhere near the competencies of so much of the community. </p> <p>I wanted to share one part of my model, since I haven't seen or read anyone doing something similar, and it made the solution easier and quicker to compute. For predicting 28 days into the future, I've seen different options such as recursion (predicting day 1, then using that prediction to create the features for day 2, etc.) or having 28 different models, each predicting x days ahead. I used a simpler version which just repeated the last 28 days of the training data for the prediction period and built the features off that. </p> <p>More specifically, I used 3 years of data to make a training set with 2 years of data and up to 365-day rolling features. So my training set had data from d1183 to d1913 (when the item had enough sales data). Then, to predict d1914 to d1941, I took the sales from d1886 to d1913 and copied them in as the sales amounts for d1914 to d1941. Those copied sales amounts were used to create all the rolling and shifting features for d1914 to d1941. My entire model only had 3 training sessions: one for hobbies, one for household and one for foods. </p> <p>With my own WRMSSE calculation, I tried both a 28-day repeat and repeating the last 7 days 4 times, but got better results repeating the 28 days.</p> <p>I was quite surprised a simple method would get a good result. </p> <p>Thanks again to everyone out there.</p>
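<p>A minimal pandas sketch of the "repeat the last 28 days" trick described above. The long-format layout and column names here are illustrative assumptions, not the author's actual pipeline.</p> <pre><code>import pandas as pd

# Toy long-format sales frame: one row per (item, day)
train = pd.DataFrame({
    "id": ["ITEM_A"] * 56,
    "d": list(range(1858, 1914)),
    "sales": list(range(56)),
})

# Build the 28 prediction days (d1914-d1941) by copying each item's last 28 observed days
future = train[train["d"] &gt;= 1886].copy()
future["d"] = future["d"] + 28          # d1886 -&gt; d1914, ..., d1913 -&gt; d1941
future["is_future"] = True
train["is_future"] = False

full = pd.concat([train, future], ignore_index=True).sort_values(["id", "d"])

# Rolling/shift features can now be computed over train + copied rows in one pass
full["roll_mean_7"] = (
    full.groupby("id")["sales"].transform(lambda s: s.shift(1).rolling(7).mean())
)
</code></pre>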
M5 Forecasting - Accuracy
175 position in M5 Forecasting - Accuracy
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all winners and participants! This is my first medal on the Kaggle platform and I felt very motivated after getting it. I would like to share my approach and why I chose it. I used Deep State as my forecasting algorithm/estimator, with GluonTS as my framework. The reasons for choosing this model are as follows:</p> <ol> <li>DeepState is a state-space form of the Exponential Smoothing algorithm parameterized by a deep neural network. The idea of forecasting is leveraged to the maximum with a Hidden Markov Model approach, i.e., the previous day's forecast helps determine today's forecast.</li> <li>DeepState uses a joint modelling approach, where the covariates (regressors) are used to find the state-space parameters. These state-space parameters are latent variables like trend, level and seasonality.</li> <li>In basic terms, the current value of a latent state is a transformation of the previous latent state through transition and emission coefficients. So, using the covariates, the neural network is trained jointly to produce the latent variables, and with these latent variables we forecast the future.</li> <li>During forecasting/prediction, the network is unrolled to recover the latent variables and the forecast is produced from them.</li> </ol> <p>Because DeepState can train jointly, we can learn features from other timeseries, and the GluonTS framework's ability to train on many timeseries at once reduces a lot of training time as well. Also, the sales were hierarchical, and a product in one department might affect the sales of another product in the same department, so a joint modelling approach can benefit the forecasts.</p> <p><strong>Data Preparation/Training/Evaluation:</strong> The data provided by M5 was in (N,T) form, where N denotes the number of timeseries and T denotes the length of each timeseries. GluonTS's built-in notation follows the same layout, so I leveraged GluonTS's built-in data structures for modelling, and the preparation time was less than 2 minutes. For training, we need to specify the prediction length, the frequency of the timeseries, whether or not to add a trend for the DeepState estimator, and the neural network parameters such as batch_size and epochs, where the batch size is the number of timeseries that go into training at once (the log-loss is validated on them) and the number of epochs determines for how long the network parameters are updated. Training the 30490 timeseries took about 20 minutes thanks to GluonTS. For evaluation, as mentioned, the network is unrolled to get the latent variables back, and since we had 30490 timeseries, it took around 1 hour to forecast. So in total, the complete runtime was about 1.5 hours.</p>
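<p>For readers new to GluonTS, the sketch below shows roughly what a DeepState setup looks like: wrap the series in a <code>ListDataset</code>, build a <code>DeepStateEstimator</code>, train, and predict. It is only a sketch with toy data; the exact constructor arguments (e.g. <code>cardinality</code>, where <code>batch_size</code> lives) differ between GluonTS versions, so check the version you have installed.</p> <pre><code>import numpy as np
from gluonts.dataset.common import ListDataset
from gluonts.model.deepstate import DeepStateEstimator
from gluonts.mx.trainer import Trainer

freq, prediction_length = "D", 28

# Toy dataset with a handful of series; in the competition all 30490 series go in here
train_ds = ListDataset(
    [{"start": "2011-01-29", "target": np.random.poisson(3, size=500)} for _ in range(5)],
    freq=freq,
)

estimator = DeepStateEstimator(
    freq=freq,
    prediction_length=prediction_length,
    use_feat_static_cat=False,
    cardinality=[1],
    trainer=Trainer(epochs=5),
)
predictor = estimator.train(train_ds)
forecasts = list(predictor.predict(train_ds))
</code></pre>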
M5 Forecasting - Accuracy
From Public LB #3762 --> Private LB #80... How I got there
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>When I started, I did not harbour much hope of getting a good placing. Reasons: 1) I didn't fare well in past time series competitions (e.g. ASHRAE &amp; Liverpool ion switching); 2) Many public kernels suggested the "best solution" was LGBM + recursive prediction.</p> <p>What I did before 1 June: 1) Tried to replicate the results of the above kernels w/o much success (e.g. Three shades of Dark Magic); 2) Did not understand why the Public LB score changed drastically from one submission to another.</p> <p>What happened after 1 June (evaluation set released): a) Used a fast.ai Tabular model, each run taking only 1-2 hours (no more LGBM); loss fn = Poisson loss. Also tried MAE &amp; MSE but Poisson was still best.</p> <p>b) No more recursive prediction (needs too much run time; prediction errors can grow with time).</p> <p>c) Analysis --&gt; thanks to Vopani's kernel ( <a href="https://www.kaggle.com/rohanrao/m5-anatomy-of-the-public-lb">https://www.kaggle.com/rohanrao/m5-anatomy-of-the-public-lb</a> ), I was able to visualize where my model excels &amp; where it is poor --&gt; most errors came from the lowest level (due to demand spikes followed by long periods of '0'). I just optimized for the mean of these fluctuating sales while making sure there was not too much overfitting --&gt; my model's forecasts at the higher levels are quite good. </p> <p>d) Used different starting days for different runs: start_train=700, 950, 1200, 1350, 1500. Averaging over these runs helped to improve the score by about 0.1.</p> <p>e) Tried the Magic Number (multiplier) but found the best score was obtained w/o magic!</p> <p>f) Feature Engineering: lag, roll and window feature ideas from Konstantin, but with different lags / window sizes selected (since I don't need recursive predictions).</p> <p>Overall, I think the fast.ai Tabular model helped me avoid overfitting at the lowest level. Averaging over several runs (with the start day not too far from the forecast dates, ~1-3 years earlier being optimal) helps a lot!</p> <p>With this competition, I earned my 2nd Silver &amp; became an Expert in Competitions ;-)</p> <p>For those interested, my kernel: <a href="https://www.kaggle.com/sidneyng/k-m5-final-public/output?scriptVersionId=37866057">https://www.kaggle.com/sidneyng/k-m5-final-public/output?scriptVersionId=37866057</a></p>
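<p>The averaging over runs with different training start days (point d) is simple to sketch. The helper below is only a placeholder standing in for the whole "build features from start_day onward, fit the fast.ai tabular model with a Poisson loss, predict the 28 days" pipeline, which is not shown.</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
N_SERIES, HORIZON = 30490, 28

def train_and_predict(start_day):
    """Placeholder: build features from `start_day` onward, fit the tabular model
    with a Poisson loss, and return predictions for the 28 evaluation days."""
    return rng.random((N_SERIES, HORIZON))   # stand-in for real model predictions

start_days = [700, 950, 1200, 1350, 1500]
preds = [train_and_predict(d) for d in start_days]
final = np.mean(preds, axis=0)   # averaging the runs improved the score by ~0.1
</code></pre>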
Jigsaw Multilingual Toxic Comment Classification
8th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks a lot to Kaggle and Jigsaw to host this competition. I am very happy to achieve my second solo gold medal within a week :)</p> <p>My main goal when starting this competition was to learn how to use TPUs on Kaggle. I started with Pytorch, switched to Tensorflow, and then switched back to Pytorch. I think overall the usability is still very shaky, and 3 hour limit is also limiting things quite a bit. There is constant progress on it though, and I hope that they will be better usable in the future. I made a separate <a href="https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification/discussion/159723">topic</a> before.</p> <p>The whole test set was visible in this competition, so subbing was very easy and blending many models was very important. Unfortunately, I could not select my best sub because it included a blend with a <a href="https://www.kaggle.com/shonenkov/tpu-inference-super-fast-xlmroberta">kernel</a> where no training script was available. I am happy I followed my heart to not select it even though I would have ranked a few spots higher with it. I really would encourage people to stop posting inference only kernels, because logically people will use them if they score well, but the learning potential is quite limited imho.</p> <h3>Models</h3> <p>There is not much differences in the models I fit to those that have been posted throughout the competition. I could not get any better results compared to using XLM-Roberta-Large. I also tried uni-language models without any improvements over XLM-Roberta.</p> <p>I also do not believe that any advanced sampling technique, label smoothing, augmentation, or other stuff floating around was giving any improvements. </p> <p>The only real important thing was to translate the training data to the languages at hand. This is in line with the experiments reported in the original <a href="https://arxiv.org/pdf/1911.02116.pdf">XLM-Roberta paper</a>.</p> <p>I blended models by doing rank-averaged blending.</p> <h3>Data</h3> <p>I only used provided data and translations of it. I think the most untapped potential was to use talk pages from Wikipedia dumps to further train the models. I did not have time to explore this, but I hope someone else did and can report on it.</p> <p>I actually think one of the main issues of the provided English data was that it included barely any Bot comments, but the test data contained quite many of them, specifically Turkish which is also why I think the score is better there than for others.</p> <h3>Post-processing</h3> <p>After blending, post-processing was quite important for me to reach high scores. As the metric is a global AUC over six different languages, it became quite quickly obvious to me that there should be room to optimize the global score by shifting predictions of individual languages up or down. </p> <p>I tested this on CV with the three languages and managed to get very good boosts there. I further tested how a simulated public and private dataset would behave in terms of optimization, assuming a random split between public and private. 
So what I did was to repeatedly select random 30% of CV, optimize the thresholds for the 30% and apply them to the remaining 70% and check how far off they would be from the optimal thresholds of the 70%. Overall, I saw that there was not much difference, which is why I was quite confident that I can optimize the thresholds on public LB and they should work well for private LB. Optimizing on CV alone does not work because CV only has 3 languages. This is an example plot of such an analysis:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F37166%2F14ef07f17e3b1eda7b98ebac1f6cb46b%2Fdownload.png?generation=1592897946838419&amp;alt=media" alt=""></p> <p>The x-axis is the scaling factor for the language at hand, the y-axis is the improvement of the score over no scaling. The blue line is the improvement of the 30% sample and the orange of the 70% sample. Note that the improvement is not 0 for factor 1.0 in this case as another language has already been optimized.</p> <p>Additionally, I optimized a range of values for which I optimize the scaling factors. I found that not scaling the top predictions improved scores, so I only scaled predictions between 0.25 and 0.875 (rank average values). You can get some further minor improvements by averaging the predictions for similar texts.</p> <p>My whole PP routine boosted scores by roughly 0.004, so quite a lot.</p>
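<p>A toy illustration of the per-language scaling idea described above: because the metric is a single AUC computed over all languages together, multiplying one language's (rank-averaged) predictions by a factor changes the global score, and the factor can be tuned. This is a simplified sketch, not the author's code; in the actual solution the factors were validated on repeated 30%/70% splits and only a middle range of ranked predictions was scaled.</p> <pre><code>import numpy as np
from scipy.stats import rankdata
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 9000
langs = np.array(["es", "it", "tr"] * (n // 3))          # toy language labels
y_true = rng.integers(0, 2, size=n)
y_pred = rankdata(rng.random(n)) / n                      # rank-averaged blend

def score_with_factor(lang, factor):
    scaled = y_pred.copy()
    scaled[langs == lang] *= factor                       # shift one language up or down
    return roc_auc_score(y_true, scaled)

factors = np.linspace(0.7, 1.3, 25)
best_factor = max(factors, key=lambda f: score_with_factor("es", f))
</code></pre>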
Tweet Sentiment Extraction
Quick 1st place solution overview before the night
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h3>Update :</h3> <ul> <li>Training notebook for our 2nd level models : <a href="https://www.kaggle.com/theoviel/character-level-model-magic/">https://www.kaggle.com/theoviel/character-level-model-magic/</a></li> <li>More detailed write-up : <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159477">https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159477</a></li> </ul> <p>This is a short thread to rapidly present a solution, we will work on a more detailed one tomorrow. </p> <p>Huge thanks to my teammates <a href="/cl2ev1">@cl2ev1</a>, <a href="/aruchomu">@aruchomu</a> and <a href="/wochidadonggua">@wochidadonggua</a> for the great work, we definitely wouldn't have gone this far if we were not together. </p> <p>Our whole solution can be illustrated in the following pipeline.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2062758%2F9848f1052e0108f6257fa56f1233a9d1%2Fpipe.png?generation=1592353191974814&amp;alt=media" alt=""></p> <p>The idea is to use transformers to extract token level start and end probabilities. Using the offsets, we can retrieved the processed probabilities for the input text.</p> <p>We then feed these probabilities to a character level model. </p> <p>The tricky part is to concatenate everything correctly, such as explained <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159254">here</a> </p> <p>And then... TADAM ! No post-processing. Just modeling. </p> <p>We selected two models that scored public LB 0.734 / CV 0.736+. They use 4 different character level model each, with a big variety of transformers. Final private scores are 0.735 and 0.736 :)</p> <p>Thanks for reading ! </p>
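<p>A rough sketch of the token-to-character mapping that feeds the character-level models in the pipeline above. This is illustrative rather than the team's actual code: it simply spreads each token's start (or end) probability over the characters covered by that token, using the tokenizer offsets.</p> <pre><code>import numpy as np

def token_probs_to_char_probs(text, offsets, token_probs):
    """offsets[i] is the (start, end) character span of token i in `text`."""
    char_probs = np.zeros(len(text))
    for (start, end), p in zip(offsets, token_probs):
        char_probs[start:end] = p
    return char_probs

text = "it was soooo good"
offsets = [(0, 2), (3, 6), (7, 12), (13, 17)]     # toy offsets for 4 tokens
token_start_probs = [0.05, 0.10, 0.70, 0.15]
char_start_probs = token_probs_to_char_probs(text, offsets, token_start_probs)
# char_start_probs is then one of the inputs of the character-level model
</code></pre>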
Jigsaw Multilingual Toxic Comment Classification
11th place solution(Hui Qin's)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all winners, and thanks to Kaggle for hosting this fun competition. Thanks to my teammates @Yang Zhang @MaChaogong @Hikkiiiiiiiii @Morphy</p> <p>This is my first time solving a multi-lingual problem. Thanks for those great starter kernels: <a href="https://www.kaggle.com/miklgr500/jigsaw-tpu-bert-two-stage-training">https://www.kaggle.com/miklgr500/jigsaw-tpu-bert-two-stage-training</a> <a href="https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta">https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta</a> I followed these codes at the very beginning and quickly got a single XLM-R large model at LB 0.9415. </p> <p>After working on this problem for 3 months, I found some useful tricks. </p> <p>1) What is the best training policy? There are three training policies: one-stage training, two-stage training and three-stage training. a) One-stage training means that we always train on the same data. This policy always gets the lower scores, but we can get some diversity. b) Two-stage training means that we have two different datasets for training, e.g. we first train on the "Toxic Comment Classification" dataset + the "Unintended Bias in Toxicity" dataset, and after that we train on the 8k validation data. This policy always gets the higher scores. c) Three-stage training means that we have three different datasets for training, e.g. we first train on the "Toxic Comment Classification" dataset + the "Unintended Bias in Toxicity" dataset, secondly we train on 200k open subtitle data, and finally we train on the 8k validation data. This policy always gets similar scores to two-stage training, but we get higher private scores. For example, one of my XLM-R large models got LB 0.9407 and private score 0.9414; another XLM-R large model got LB 0.9420 and private score 0.9424. </p> <p>Therefore, three-stage training is the best training policy. I don't know why. <br></p> <p>2) How to use the data? </p> <p>At the very beginning, my teammate found that when we train on the "Toxic Comment Classification" dataset + the "Unintended Bias in Toxicity" dataset, the valid AUC is always lower than when training on the "Toxic Comment Classification" dataset only. So to get a higher valid AUC, we could not simply combine many different datasets for training. About 9 days ago, I found that if we use our best single model to predict the training data's target and use the predicted target as the label instead of the 0/1 label, then we can get a higher valid AUC score. E.g. we use our XLM-R large model (LB 0.9411) to predict the "Toxic Comment Classification" dataset and get a predicted target for each row, something like this:</p> <pre><code>id  toxic  preds
1   0      0.341
2   1      0.813
3   0      0.211
</code></pre> <p>Then we use the predicted probabilities as the label target to train. Using this trick, we always get a ~0.003 boost in valid AUC score. This trick can be used on any other dataset which has 0/1 labels. After using this trick on both the "Toxic Comment Classification" dataset and the "Unintended Bias in Toxicity" dataset, we can use them together to train our model. The more data, the higher the valid AUC score. 
</p> <p>Based on it, I trained a xlmr large model with 790k training data("Toxic Comment Classification" dataset and "Unintended Bias in Toxicity" dataset and open subtitle dataset) . it got a lb 0.9464 score and a 0.9445 private score. </p> <p>That's all . Hope these two tricks can help you. </p>
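<p>The soft pseudo-label trick above is easy to express in code: replace the hard 0/1 labels with the probabilities predicted by a strong model, and train with a loss that accepts soft targets. A minimal PyTorch sketch (toy tensors, not the team's pipeline):</p> <pre><code>import torch
import torch.nn as nn

# Logits from the model currently being trained (toy values)
logits = torch.randn(4, 1)

# Soft targets: the toxicity probabilities predicted by the strong "teacher" model,
# e.g. the preds column of the table above, used instead of the original 0/1 labels
soft_targets = torch.tensor([[0.341], [0.813], [0.211], [0.057]])

# BCEWithLogitsLoss happily takes probabilities as targets
loss = nn.BCEWithLogitsLoss()(logits, soft_targets)
</code></pre>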
Tweet Sentiment Extraction
7th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we want to thank Kaggle for hosting the competition. Thanks to my teammates <a href="/murphy89">@murphy89</a> <a href="/wuyhbb">@wuyhbb</a> for their hard work, and thanks to <a href="/abhishek">@abhishek</a> for providing a very solid baseline. This is my first NLP gold medal and I am extremely happy! </p> <h1>TLDR</h1> <p>Use the model to predict the <strong>ground truth</strong> start/end indices, and use post-processing to capture the <strong>noise</strong>. </p> <h1>Models</h1> <p>Our models are RoBERTa-base with customized headers. We have two model structures:</p> <ol> <li>Model-1 <ul> <li>Concat([last 2 hidden_layers from BERT]) -&gt; Conv1D -&gt; Linear</li> <li>End position depends on start (taken from <a href="https://www.kaggle.com/c/tensorflow2-question-answering/discussion/127545">here</a>), which looks like:</li> </ul> <pre><code># x_start, x_end are the results after Conv1D
logit_start = linear(x_start)
logit_end = linear(torch.cat([x_start, x_end], dim=1))
</code></pre> </li> <li>Model-2 <ul> <li>Concat([last 3 hidden_layers from BERT]) -&gt; Conv1D -&gt; Linear</li> <li>Auxiliary tasks for whether the prediction is the whole text (classification) and whether a token is in the selected text (segmentation).</li> </ul> </li> </ol> <h1>Preprocessing &amp; Tokenization</h1> <p>We used two methods to preprocess the text and selected_text.</p> <p><strong>Method-1</strong></p> <p>1) Clean text &amp; selected_text: <code>" ".join(text.split())</code>. </p> <p>2) Split punctuation if the sentiment is not "neutral": '...' ==&gt; '. . .' (applied 50% of the time if sentiment != 'neutral'). </p> <p>3) Correct wrong offsets for "�" (this is not needed if you are using tokenizers 0.7.0). </p> <p>4) A token is considered a target if <code>len(selected_text) &gt; 4</code> and <code>char_target_pct of the token &gt;= 0.5</code>. </p> <p>5) Max_len = 192.</p> <p><strong>Method-2</strong></p> <p>1) Clean the labels based on the post-processing method (discussed later). </p> <p>2) Use raw text &amp; selected_text, tokenize at word level, use "Ġ" for all spaces. </p> <p>3) Dynamic padding. </p> <p>We used two patterns to build the training samples for both methods: [sentiment][text] and [sentiment][raw sentiment][text], where the raw sentiment comes from the original full dataset. We didn't convert text to lower case during preprocessing; instead, we used lower-casing as an augmentation. Method-1 is used for Model-1, while Method-2 is used for Model-2. </p> <h1>Training related</h1> <ul> <li>Batch size: 32</li> <li>Optimizer: AdamW, weight decay 0.01</li> <li>Scheduler: cosine scheduler with warmup</li> <li>Loss: CrossEntropy with label smoothing</li> </ul> <p>We finetuned Model-1 (head only) for another 3 epochs using a linear scheduler with warmup (lr=1e-5). Model-2 is trained with the Fast Gradient Method (FGM). Our best single model on the public LB: [sentiment][raw sentiment][text] + Method-1 + Model-1, trained with a 10-fold setup (CV 0.7177, LB 0.719).</p> <h1>Postprocessing</h1> <p>There are two parts to the post-processing: setting "neutral" predictions to the full text (all CV scores are after this treatment) and processing the noise. The first part is straightforward, so we will focus on the second part here. 
Actually, many people noticed there are lots of mystery selected_texts during the competition. For example:</p> <pre><code>[text]          : [ hey mia! totally adore your music. when will your cd be out?]
[selected_text] : [y adore]
</code></pre> <p>Take Method-1 as an example: it throws the [y] away, leaving [ adore] in the target span. Therefore, the trained model is good at predicting [ adore] as the result (jaccard(decode_target, pred) = 1), but due to the noise [y], the final jaccard is only 0.5. In fact, for a model with a validation jaccard score around 0.717, the jaccard score between the prediction and the decoded target is around 0.724. Hence, if we can somehow add back the noise, we can boost model performance. For the above example, one may naturally think the original label is [ adore ], so the given label can be achieved by shifting the span to the left by one position. Later, we realized where these shifts come from, if we compare the text and the clean text:</p> <pre><code>[text]       : [ hey mia! totally adore your music. when will your cd be out?]
[clean text] : [hey mia! totally adore your music. when will your cd be out?]
</code></pre> <p>You will realize there are 3 extra spaces in the text, one leading and two in between. Ignoring the leading space and the one after the selected_text span, there is one extra space, which is exactly the number of positions to shift.</p> <pre><code>[text_no_leading]: [hey mia! totally adore your music. when will your cd be out?]
[selected text]  : [y adore]
[clean text]     : [hey mia! totally adore your music. when will your cd be out?]
[selected text]  : [ adore ]
</code></pre> <p>We think the story behind it is that the label provider only provided the indices of the targets to Kaggle ( <a href="/philculliton">@philculliton</a> mentioned it <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/140847#802761">here</a>). There might have been some miscommunication between Kaggle and the service provider regarding which texts they were using (Kaggle uses the raw text, the labeler uses the clean text). </p> <p>Based on this finding, we designed rules to post-process model predictions and clean the labels. As a result, both models could predict the <strong>ground truth</strong> start index accurately, while the starting noise is handled by post-processing. The post-processing rules are:</p> <pre><code>def _post_shift_new(text, pred):
    """
    text: raw text
    pred: prediction based on raw text, no leading space
    """
    clean_text = " ".join(text.split())
    start_clean = clean_text.find(" ".join(pred.split()))

    start = text.find(pred)
    end = start + len(pred)
    extra_space = start - start_clean
    if start &gt; extra_space and extra_space &gt; 0:
        if extra_space == 1:
            if text[start-1] in [',', '.', '?', '!'] and text[start-2] != ' ':
                start -= 1
        elif extra_space == 2:
            start -= extra_space
            if text[end-1] in [',', '.', '!', '?', '*']:
                end -= 1
        else:
            end -= (extra_space - 2)
            start -= extra_space
        pred = text[start:end]
    # handle single quotation mark
    if pred.count("'") == 1:
        if pred[0] == "'":
            if text.find(pred) + len(pred) &lt; len(text) and text[text.find(pred) + len(pred)] == "'":
                pred += "'"
        else:
            if text.find(pred) - 1 &gt;= 0 and text[text.find(pred) - 1] == "'":
                pred = "'" + pred
    return pred
</code></pre> <p>We applied post-processing to all the non-neutral samples; for all those modified samples, it has a winning rate (jac_post &gt; jac_no_post) of ~60% on the validation set. Our best single model achieves CV 0.7264, LB 0.728 after post-processing. So processing the noise brings a ~0.009 boost in both CV and LB. 
The winning rate and aligned boost in both CV &amp; LB make us feel comfortable to apply it. </p> <h1>Ensemble</h1> <p>To ensemble results from different models, we convert the token probability to char-level probability, ensemble the probabilities from different models and finally find the start/end index. By ensembling 4 models (2 patterns X 2 models) and applying post-processing, we achieved CV 0.7299, LB 0.723, which scores 0.730 on private leaderboard. </p> <h1>Other stuff</h1> <ul> <li>Actually we didn't select our best solution, which is an ensemble of 3 RoBERTa-base and a RoBERTa-large, it has a CV 0.730669, LB 0.721 and private LB 0.731. </li> <li>I checked all our submissions with private leaderboard score, CV aligns perfectly with private leaderboard. Never forget the golden rule: <strong>Trust your CV</strong> </li> </ul>
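<p>The write-up mentions training Model-2 with the Fast Gradient Method (FGM). For readers who haven't seen it, below is the commonly used PyTorch recipe for FGM on transformer word embeddings; it is a generic sketch rather than this team's exact code, and the embedding parameter name is an assumption that depends on the model.</p> <pre><code>import torch

class FGM:
    """Perturb the word-embedding weights along the gradient direction, run a second
    forward/backward pass on the perturbed model, then restore the original weights."""

    def __init__(self, model, epsilon=1.0, emb_name="word_embeddings"):
        self.model, self.epsilon, self.emb_name = model, epsilon, emb_name
        self.backup = {}

    def attack(self):
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0:
                    param.data.add_(self.epsilon * param.grad / norm)

    def restore(self):
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

# Inside the training loop (sketch):
#   loss.backward()
#   fgm.attack(); adv_loss = compute_loss(model, batch); adv_loss.backward(); fgm.restore()
#   optimizer.step(); optimizer.zero_grad()
</code></pre>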
Tweet Sentiment Extraction
22nd place solution with codes
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Congrats to all winners, and thanks to my teammates!</strong> And a silver medal again 😂.</p> <h2>Our solution could be summarized as:</h2> <ol> <li>Post-processing: as <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159260">Firas' kernel</a> showed, we applied the post-processing which helped us reverse-engineer some samples. This pp increased our single model from 0.716 to 0.724. Thanks to Firas for finding the idea of the post-processing!</li> <li>Ensemble: we trained 3 sets of roberta-base models with different random seeds, one albert-large model and one xlnet-base model. We calculate token-level offsets and word-level offsets. After we get the logits, we transform token-level logits into word-level logits (using the maximum probability of the tokens inside a word as the word probability), then we average the word-level logits of all models to get word-level bounding boxes; then we view roberta-base as the base model and further select start and end tokens from the start and end words.</li> <li>Pseudo-labeling: using the original dataset for pseudo-labeling, we trained 2 rounds of pseudo labels for roberta-base and albert-large. The CV of roberta-base increased from 0.722 to 0.724, and the CV of albert-large increased from 0.720 to 0.722. As for xlnet-base, since JianJian's server was down 😂, we didn't have time to train that.</li> </ol> <h2>Things we thought about but haven't tried:</h2> <ul> <li>CNN head for the model.</li> <li>Char-level model.</li> </ul> <h2>Our kernel:</h2> <p><a href="https://www.kaggle.com/jionie/preprocessing-new-pipeline-pseudo-model-ensemble">https://www.kaggle.com/jionie/preprocessing-new-pipeline-pseudo-model-ensemble</a></p> <h2>Our codes:</h2> <p><a href="https://github.com/jionie/Tweet">https://github.com/jionie/Tweet</a></p>
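<p>A rough sketch of the token-to-word aggregation described in point 2 of the ensemble above: each word's probability is taken as the maximum over its tokens, so word-level outputs from models with different tokenizers can be averaged. The word-id mapping below is a toy example; in practice it comes from the tokenizer offsets.</p> <pre><code>import numpy as np

def token_to_word_probs(token_probs, word_ids):
    """word_ids[i] is the index of the word that token i belongs to."""
    word_probs = np.zeros(max(word_ids) + 1)
    for p, w in zip(token_probs, word_ids):
        word_probs[w] = max(word_probs[w], p)
    return word_probs

# 6 tokens belonging to 4 words
token_probs = [0.05, 0.10, 0.60, 0.55, 0.20, 0.05]
word_ids = [0, 1, 2, 2, 3, 3]
print(token_to_word_probs(token_probs, word_ids))   # -&gt; [0.05 0.1  0.6  0.2 ]
</code></pre>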
Tweet Sentiment Extraction
10th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to my great teammate @wowfattie for another interesting competition journey. And congratulations to all the winners!</p> <p>In this post, I will briefly describe our path in this competition and then go straight to our final model description.</p> <h2>Motivation to join</h2> <p>I've joined this competition relatively late (about a month before the deadline). Probably, it was even an advantage, keeping in mind all the data problems in this competition.</p> <p>My motivation to join this competition was to make myself familiar with the recent NLP models and approaches. The last time I was working with the text data was pre-Transformers era. <strong>Spoiler: I haven't learnt a lot about recent NLP trends</strong></p> <p>I had read the basic theory of Tranformers and Bert-like models. Then it was very helpful to go through the code and Youtube videos shared by @abhishek. Thanks again! Basically I've built all my models based on this code.</p> <h2>Data property (a.k.a. Magic)</h2> <p>After some tweaking of the initial Roberta model, I've managed to get 0.714 on the Public Leaderboard. Then I've tried to delete the preprocessing that was doing <code>" ".join(text.split())</code> and unexpectedly Public LB score jumped to 0.716.</p> <p>It was at a similar time moment when the magic post has been shared. So, I decided to dig into what is hidden in the space distribution. And it occurred that the majority of the labels noise could be explained by the extra spaces in the text.</p> <p>First, thanks to @maxjon for creating a <a href="https://www.kaggle.com/maxjon/complete-tweet-sentiment-extraction-data">correspondence dataset</a> between the original and Kaggle datasets. I'm still not sure about the exact rules this data property follows, but here are our most frequent findings: - Extra spaces could come either from the initial tweet text or from the deleted @mentions from the original dataset - Start of the actual label is shifted left by the number of extra spaces - End of the actual label is also shifted left by the number of extra spaces, but then shifted right by 2 characters</p> <p>Seems like the initial labeling has been done with <code>" ".join(text.split())</code> preprocessing, and that caused the labels shift in the actual texts. Maybe someone else could shed more light on this. For example, (spaces are substituted with |):</p> <p>```</p> <h1>Original dataset text:</h1> <p>"@blueskiesxj||i|like|yours|too|||i|enjoy|your|photography.|=]"</p> <h1>Kaggle dataset text:</h1> <p>"||i|like|yours|too|||i|enjoy|your|photography.|=]" <code>`` So, we could observe 2 extra spaces in the beginning of the tweet, and 2 extra spaces in the middle, 4 in total. Probably, the actual selected text was</code>"enjoy"<code>, while in the dataset it is a shifted version:</code>"i enj"`.</p> <h2>Final model</h2> <p>After this finding problem is transformed from modeling into properly utilizing this data property. Firstly, we've built postprocessing that gave 0.721 Public LB for a single 5-fold Roberta.</p> <p>Then, we changed a problem a little bit. 
Firstly, try to inverse this labeling transformation on the train data to get cleaner labels. Then fit the model on clean data, and transform predictions back to the noisy labels with a postprocessing. Such an approach allowed us to get 0.723-0.724 Public LB scores with a 5-fold Roberta model. And the validation score was closely following the Leaderboard.</p> <p>Other parts of our solution include: - Using weighted sum of cross-entropy losses (start and end) and lovasz loss for a span prediction - Use pseudo-labels from the Sentiment140 dataset - Ensemble multiple models. Our best submission includes 18 5-fold models: 8 Robertas; 4 Electras; 6 Berts Large</p> <p>It allowed to get 0.731 Public LB and 0.728 Private LB.</p> <p>To conclude, it was a nice experience cracking the problem, while I haven't tried a lot of new NLP stuff.</p> <p>Also, I want to thank the <a href="https://www.hostkey.com/">hostkey</a> provider for giving me a <a href="http://landing.hostkey.com/grant_for_winners">grant</a> on using the machine with 4x1080TIs for this competition. It was a smooth experience, highly recommend it!</p>
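<p>To illustrate the label-cleaning step described above, here is a very simplified sketch of how the shift can be inverted. It assumes, following the rules listed earlier, that the noisy dataset indices effectively refer to the single-spaced version of the tweet, with the end extended by two characters; the real rules used by the team handle more edge cases than this.</p> <pre><code>def recover_clean_label(text, sel_start, sel_end):
    """Toy inversion of the shift: read the span off the single-spaced text,
    undo the +2 on the end, then locate the recovered string in the raw text."""
    clean = " ".join(text.split())
    candidate = clean[sel_start:max(sel_start + 1, sel_end - 2)]
    return candidate if candidate in text else text[sel_start:sel_end]

text = "  i like yours too   i enjoy your photography. =]"
print(text[19:26])                          # '  i enj'  (the shifted, noisy span)
print(recover_clean_label(text, 19, 26))    # 'enjoy'    (the recovered clean label)
</code></pre>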
Jigsaw Multilingual Toxic Comment Classification
19th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to <a href="/rafiko1">@rafiko1</a> and <a href="/leecming">@leecming</a> for winning the first place, besides being in the top of the public LB for the longest time, and <a href="https://jigsaw.google.com/">Google Jigsaw</a> for hosting the third edition of toxicity classification challenge.</p> <p>We decided to divide our models into 3 parts. Number is listed alongside each model to denote what part they belong to.</p> <h2>Models:</h2> <ul> <li>XLM_RoBERTa_Base with different architectures/epochs/learning_rate/augmentation (I)</li> <li>XLM_RoBERTa_Large with different architectures/epochs/learning_rate/augmentation (I)</li> <li>XLM_RoBERTa_Large_MLM_training (I)</li> <li>Multilingual_BERT (cased and uncased variants) (II)</li> <li>BERT (cased and uncased variants) (II)</li> <li>RNN: pooled Bidirectional GRU with fasttext aligned word vector embeddings on translated train (III)</li> <li>RNN: pooled Bidirectional LSTM + GRU + attention with fasttext + glove + paragram embedding averaged on translated test (III)</li> <li>WeakLearner: NB-SVM on translated train as well as translated test averaged (III)</li> <li>WeakLearner: SGD on translated train as well as translated test averaged (III)</li> </ul> <p>Overall, there were a combination of about 50 models with the base models belonging to one type (parent) averaged. So, all varieties of XLM_RoBERTa_Base models were averaged and so on.</p> <p><strong>First-Level Blending procedure:</strong></p> <p>&gt; (I) -&gt; (XLM_RoBERTa_Base * 0.1) + (XLM_RoBERTa_Large * 0.5) + (XLM_RoBERTa_Large_MLM_training * 0.4)</p> <p>&gt; (II) -&gt; (Multilingual_BERT * 0.2) + (BERT * 0.8)</p> <p>&gt; (III) -&gt; (RNN * 0.6) + (WeakLearner * 0.4)</p> <p><strong>Final Blending procedure:</strong></p> <p>&gt; <code>toxic</code> = (I) * 0.8 + (II) * 0.1 + (III) * 0.1</p> <h2>Post Processing: (0.0005 boost)</h2> <p>I think this is the major highlight of our approach. Just a day before the competition finish, <a href="/veryrobustperson">@veryrobustperson</a> discovered that most of our lower scoring submissions were over-predicting while compared to the higher scoring submissions. Based on this hypothesis, we tried to introduce a re-scaling factor for predictions &gt;0.8 as well as &lt;0.01 through the use of probabilistic random noise that introduces a small penalty. I am sure there was a lot to explore here, but due to shortage of submissions, and lack of time, we couldn't delve deeper into it and optimize further. For two different submissions that we tried, the score increased by about 0.0005, so we thought it would generalize well even on the private LB test data. Luckily, it did. 
</p> <h2>Things that didn't / couldn't make it to work:</h2> <ul> <li>pseudo labelling by making &lt; 0.2 -&gt; 0 and &gt; 0.8 -&gt; 1.</li> <li>training a gradient boosting algorithm on the 1024 laser embeddings.</li> <li>hard-coding test probabilities to ground truth in validation as we found about 1000 samples overlapping based on cosine similarity of laser embeddings.</li> <li>changing the optimal threshold of toxic probabilities in <code>jigsaw-unintended-bias-train.csv</code> to a value ranging from 0.2 to 0.3, as 0.5 (default rounding) looked extremely non-toxic centric.</li> <li>using external datasets for hatespeech, toxic word list for 6 languages.</li> <li>label smoothing. </li> <li>Wordbatch FM_FTRL.</li> <li>power averaging of base models.</li> </ul> <p>Lastly, I thank my team-mates <a href="/veryrobustperson">@veryrobustperson</a> and <a href="/ipythonx">@ipythonx</a> for the wonderful discussions we've had and coming up with newer ideas from time to time. We almost made it to the gold zone :)</p> <p>Hope you all had fun competing! </p> <p>cheers, Nickil</p>
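<p>Since the blending weights are spelled out above, here is a tiny sketch of the two-level blend, using random placeholder arrays in place of the real first-level predictions (each already averaged within its model family).</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
n = 1000   # toy number of test rows

xlmr_base, xlmr_large, xlmr_large_mlm = rng.random((3, n))
mbert, bert = rng.random((2, n))
rnn, weak_learners = rng.random((2, n))

part1 = 0.1 * xlmr_base + 0.5 * xlmr_large + 0.4 * xlmr_large_mlm
part2 = 0.2 * mbert + 0.8 * bert
part3 = 0.6 * rnn + 0.4 * weak_learners

toxic = 0.8 * part1 + 0.1 * part2 + 0.1 * part3   # final blend described above
</code></pre>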
Tweet Sentiment Extraction
60th place (silver medal) solution (+ GitHub)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hey everyone, I am very excited here, this will be my competition first silver medal. Like many others I have been here from the very beginning, I am very exhausted, and after the Jigsaw competition will take a few months away from NLP competitions 😄 . I have made almost 300 experiments (here is <a href="https://github.com/dimitreOliveira/Tweet-Sentiment-Extraction/tree/master/Model%20backlog">the list</a>) and there is a buch of extra stuff in this <a href="https://github.com/dimitreOliveira/Tweet-Sentiment-Extraction">Git repository </a>, again I would like to encourage others to do something similar, this will help other people learning and it also helps me a lot solving future similar problems.</p> <p>I will talk about my best model that achieved 0.720 at the private leaderboard but wasn't the one chosen 😓.</p> <h2>Quick summary</h2> <ul> <li>Model: 5-Fold <code>RoBERTa base</code>. <a href="https://github.com/dimitreOliveira/Tweet-Sentiment-Extraction/blob/master/Model%20backlog/Train/260-Tweet-Train-5Fold-roBERTa%20mask%20and%20span%20OneCycle2.ipynb">training notebook here</a></li> <li>Tasks: Predicted start/end indexes and mask span <code>(details below)</code></li> <li>Dataset: Removed about 30 noisy samples, and samples with Jaccard score between <code>text</code> and <code>selected_text</code> == 0, found that they were very noisy. <a href="https://github.com/dimitreOliveira/Tweet-Sentiment-Extraction/blob/master/Datasets/tweet-dataset-5fold-roberta-64-clean.ipynb">dataset creation here</a></li> <li>Post-process: Removed words that were not present in <code>text</code> field. <code>(details below)</code> <a href="https://github.com/dimitreOliveira/Tweet-Sentiment-Extraction/blob/master/Model%20backlog/Inference/260-tweet-inference-5fold-roberta-mask-and-span-on.ipynb">inference notebook here</a></li> <li>Framework: <code>Tensorflow</code> only GPUs</li> </ul> This is a metrics table that I used to compare my models (this one is from the 0.720 model). <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1182060%2Ffebe1288f70dc51136a094407ad6bfd3%2FScreenshot%20from%202020-06-16%2021-37-24.png?generation=1592354294790540&amp;alt=media" alt=""></p> <h2>Detailed summary</h2> <h3>Model &amp; training</h3> <ul> <li>5-Fold <code>RoBERTa base</code></li> <li>Public leaderboard: <code>0.715</code></li> <li>Private leaderboard: <code>0.720</code></li> <li>Sequence length: <code>64</code></li> <li>Batch size: <code>32</code></li> <li>Epochs: <code>2</code></li> <li>Learning rate: <code>1e-4</code></li> <li>Training schedule: One cycle (from 0 to 1e-4 then to 1e-6)</li> <li>Target: Start/end indexes and masked span (token level)</li> <li>Used the pooled <code>11th layer</code></li> <li>Dense layer without <code>bias</code></li> <li>Label smoothing: <code>0.2</code> (for all tasks)</li> <li>Losses: <code>CategoricalCrossentropy</code> for start/end and <code>BinaryCrossentropy</code> for maks (used logits)</li> </ul> <p>I got pretty good results using one cycle schedule for training, the first time I tried one cycle got me 0.718 on the public leaderboard. 
My model had 3 outputs: 1 for each span index (start and end) and an extra output for the masked span. This means that if we had a sequence of size 6, with the start span at index 2 and the end span at index 4, the masked span would be <code>0 0 1 1 1 0</code>, makes sense? I thought this would give the model a more general feeling for the target.</p> <h3>Dataset</h3> <ul> <li>Nothing fancy here: dropped <code>null</code> samples and removed about 30 samples that I thought were very noisy. After some study, I found that samples with a very low Jaccard score between <code>text</code> and <code>selected_text</code> were noisier, so I removed them.</li> <li>I would like to say here that at the beginning I made a lot of mistakes with the tokenizers, so for every dataset that I created I have added some tests, check it out!</li> <li>5-Fold split using <code>sklearn StratifiedKFold</code> and <code>sentiment</code> as the target, like many others.</li> <li>To save time, for each dataset I was creating a bunch of outputs.</li> <li>I have used <code>ByteLevelBPETokenizer</code> as the tokenizer for RoBERTa.</li> <li>Lowercased text.</li> </ul> <h3>Inference</h3> <ul> <li>Summed all 5 models' logits and then used <code>argmax</code> to get the start/end indexes.</li> <li>The post-process was very trivial: I just removed words that were not present in the <code>text</code> field, which helped to remove word-pieces that the tokenizer created and that got predicted.</li> <li>Empty predictions got filled with the whole <code>text</code> field.</li> </ul> <p><strong>This is what my model looked like:</strong></p> <pre><code>input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')

base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name='base_model')
_, _, hidden_states = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})

h11 = hidden_states[-2]

logits = layers.Dense(3, use_bias=False, name='qa_outputs')(h11)

start_logits, end_logits, mask_logits = tf.split(logits, 3, axis=-1)
start_logits = tf.squeeze(start_logits, axis=-1, name='y_start')
end_logits = tf.squeeze(end_logits, axis=-1, name='y_end')
mask_logits = tf.squeeze(mask_logits, axis=-1, name='y_mask')

model = Model(inputs=[input_ids, attention_mask], outputs=[start_logits, end_logits, mask_logits])
</code></pre>
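<p>The three targets are easy to picture with a tiny helper; the snippet below reproduces the example from the paragraph above (sequence of length 6, start index 2, end index 4) and is purely illustrative.</p> <pre><code>import numpy as np

def build_targets(seq_len, start_idx, end_idx):
    """One-hot start, one-hot end, and a binary mask covering the selected span."""
    start = np.zeros(seq_len); start[start_idx] = 1
    end = np.zeros(seq_len); end[end_idx] = 1
    mask = np.zeros(seq_len); mask[start_idx:end_idx + 1] = 1
    return start, end, mask

start, end, mask = build_targets(6, 2, 4)
print(mask)   # -&gt; [0. 0. 1. 1. 1. 0.], as in the example above
</code></pre>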
Tweet Sentiment Extraction
Quick 3rd place solution (no pp) + source code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My solution on github: <a href="https://github.com/suicao/tweet-extraction/">https://github.com/suicao/tweet-extraction/</a> Will update later when <a href="/christofhenkel">@christofhenkel</a> wakes up, his old body needs that sleep.</p> <p>The main ingredients for our solution are: - Character level model with GRU head (Dieter) - Normal model with beamsearch-like decoder (mine) - Diversity rules - RoBERTa base + RoBERTa large + BART large.</p> <p>We knew what the magic was but couldn't find a reliable post processing method for it. Soon we looked into the predictions and realized that the models did a decent job of overfitting the noise anyway, and focused on that direction. In the end, each of our method scored ~0.728 public LB and could've been in the gold zone.</p> <p>For my part: - I used fuzzywuzzy to fuzzy match the labels in case the it was split in half. - For modeling, I copied XLNet's decoder head for question answering to RoBERTa. Basically you predict the start index, get the <em>k</em> hidden states at the <em>top-k</em> indices. for each hidden state, concat it to the end index logits and predict the corresponding <em>top-k</em> end indices. - The best <em>k</em> is 3, for whatever reasons, which resulted in a 3x3 start-end pairs. I ranked them by taking the product of the two probs. - Nothing too special regarding training pipeline. I used the fastai style freeze-unfreeze scheme since the head is quite complicated. - Everything with RoBERTa byte-level BPE works best. BERTweet was quite a pain to work with (it also used og BERT's subword BPE for some reason), we didn't include it in the final ensemble. - Probably for the reason above, XLNet's performance was quite subpar, a shame since it's my favorite.</p> <p>Updated: added source code.</p>
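<p>The beamsearch-like decoding described above (top-k starts, then top-k ends conditioned on each start, ranked by the product of probabilities) can be sketched as follows. This is an illustration with toy arrays, not the author's code; the conditional end distributions would come from the XLNet-style head, and the <code>e &gt;= s</code> filter is an added assumption.</p> <pre><code>import numpy as np

def rank_spans(start_probs, end_probs_per_start, k=3):
    top_starts = np.argsort(start_probs)[::-1][:k]
    candidates = []
    for s in top_starts:
        end_probs = end_probs_per_start[s]            # end distribution given this start
        for e in np.argsort(end_probs)[::-1][:k]:
            if e &gt;= s:
                candidates.append((start_probs[s] * end_probs[e], s, e))
    return sorted(candidates, reverse=True)           # best (prob, start, end) first

rng = np.random.default_rng(0)
start_probs = np.array([0.1, 0.6, 0.2, 0.05, 0.05])   # toy 5-token sequence
end_probs_per_start = {i: rng.dirichlet(np.ones(5)) for i in range(5)}
print(rank_spans(start_probs, end_probs_per_start)[0])
</code></pre>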
Jigsaw Multilingual Toxic Comment Classification
75th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations everyone! First of all, my teammates @rftexas @tanulsingh077 and @ibtesama were really awesome and supportive.</p> <p><strong>Overview of our models</strong></p> <ol> <li><p>Fold CV with 250k translated samples and the 8k validation dataset using a custom pre-trained XLM-RoBERTa large version. (score: 0.9433) </p></li> <li><p>Single-fold normal training of XLM large with prediction via Test Time Augmentation (TTA): the test set translated to 6 languages and the predictions averaged. (score: 0.9427)</p></li> <li><p>Single-fold training on pre-trained XLM large with a multi-sample dropout architecture. (score: 0.9417)</p></li> <li><p>Three-stage training on the training data: (train on English -&gt; inference), (train on translated languages -&gt; inference), (train on validation -&gt; inference). Finally, a weighted average of these predictions gave us 0.9399.</p></li> <li><p>Two-stage XLM Base with 400k samples and TTA. (0.9363)</p></li> </ol> <p><strong>Tricks that worked</strong></p> <ul> <li>Token length = 224</li> <li>Balanced the dataset in language distribution.</li> <li>Balanced the dataset in target class distribution (125k for each class).</li> <li>Head + tail encoding.</li> <li>Learning rate scheduling.</li> <li>Label smoothing.</li> </ul> <p>Finally, blending using a <a href="https://www.kaggle.com/paulorzp/gmean-of-light-gbm-models-lb-0-951x">GMEAN blend.</a></p>
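<p>Head + tail encoding from the tricks list simply keeps the beginning and the end of a long comment and drops the middle, so the input stays within the token budget (224 here). The split point below is illustrative; the write-up does not state the exact head/tail lengths, and special tokens/padding are omitted.</p> <pre><code>def head_tail_encode(token_ids, max_len=224, head_len=128):
    """Keep the first `head_len` and the last `max_len - head_len` tokens of long inputs."""
    if len(token_ids) &lt;= max_len:
        return token_ids
    tail_len = max_len - head_len
    return token_ids[:head_len] + token_ids[-tail_len:]

print(head_tail_encode(list(range(300)), max_len=10, head_len=6))
# -&gt; [0, 1, 2, 3, 4, 5, 296, 297, 298, 299]
</code></pre>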
Tweet Sentiment Extraction
11th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks a lot to Kaggle for hosting this competition! It was quite a fun but very exhausting ride, and I am very happy with achieving my goal of solo gold!</p> <h3>Data</h3> <p>As we all have seen, there is quite some noise in the data. Many people found pre- or post-processing techniques for dealing with it. Others had really quite clever model adaptations to address it. I personally think there are actually two different facets in the data. Let us discuss the following example:</p> <p><code>That's awesome!!!</code> --&gt; <code>s awesome!</code></p> <ol> <li><p>Let us first focus on the second selected word <code>awesome!</code>. Here, we can see stark differences in how the end (special) characters of words are selected. I believe this is due to human annotators having different preferences, and it is real noise in the data. I personally would not directly know whether I would select all exclamation marks, only one, or none. So it seems natural to me that there are differences in the data.</p></li> <li><p>Unfortunately, there appears to also be a form of leak in the data (see the first word). When the data was first fixed, we learned that the labeling service only provided the indices back, but did some preprocessing on their own. We learned that they escaped HTML tags, and that caused a shift in the indices. Unfortunately, this was not the only thing they did; they also apparently replaced multiple spaces with single ones, which again led to wrong indices being used by Kaggle.</p></li> </ol> <p>I personally think the first case is interesting to model, and the second case is based on wrong data processing upfront. I identified the second issue quite early, but just could not find a reliable PP that works. I don't think it is a consistent error, as it is not always the case, as is for example evident from neutral data where you have a nice baseline of selecting the full text. Many times, if there is a space in the beginning, the selected text is still the full text. So maybe there are also two or more labeling processes in play, some correct, some not. But I just could not fully figure it out.</p> <h3>Tokenization</h3> <p>I decided to focus on tokenization to address some of these issues. With default Roberta tokenization, you can only ever predict <code>!!!</code> as those tokens are combined into a single one. This heavily limits your model's ability to learn the noise and differences here, and if the majority of people select a single exclamation mark, you won't be able to predict it unless you do some PP. What I did though is to modify the <code>merges.txt</code> file of the tokenization. This file has, for example, a line <code>!! !</code> which means that those subsequent character spans are always merged into a single token. I started removing all those lines with dots and exclamation marks, and got good boosts on CV right away. 
In the end, I removed all special character merges:</p> <pre><code>f = open("merges12.txt", 'w')
for line in open("merges.txt"):
    cols = line.strip().split()
    if cols[0] == "Ġ":
        if cols[1].isalpha() or len(cols[1]) == 1:
            f.write(line)
    elif cols[0].isalpha() and cols[1].isalpha():
        f.write(line)
</code></pre> <p>I wrote more about it in a separate thread: <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159364">https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159364</a> and feel free to try it out: <a href="https://www.kaggle.com/philippsinger/roberta-base-squad2">https://www.kaggle.com/philippsinger/roberta-base-squad2</a></p> <p>By doing so, I tried to increase the Jaccard score I can get on the original selected text with the given tokenization, which is not perfect out of the box. You can remove as many merges as you want and get a 1.0 Jaccard score, e.g. by further removing merges like <code>Ġg onna</code>, but in the end this didn't boost my scores, so I stuck to just the special characters.</p> <p>I believe my approach is very good at dealing with the real noise in the data (point 1 from above), but not as good at dealing with the artificial noise (point 2 from above). What helped with the artificial one is to keep all spaces intact, so the model learns it a bit. But top solutions handled this much better with either extra heads or post-processing.</p> <h3>Models</h3> <p>I tried quite a bunch of models, but, as for most of us, RoBERTa base worked well enough. I got a tiny boost by using a version pre-trained on Squad2. My head is pretty standard, with the exception that I added a trick I read about in the <a href="https://www.kaggle.com/c/tensorflow2-question-answering/discussion/127545">TF competition</a>: I concatenate the output of the start token representations to the input of the end token representations, so the end layer knows which start token was predicted.</p> <pre><code>start_logits = self.l1(o)
start_token = torch.gather(o, 1, start_logits.argmax(dim=1, keepdim=True).repeat(1, 1, o.size(2)))
o2 = torch.cat([o, start_token.repeat(1, o.size(1), 1)], dim=2)
end_logits = self.l2(o2)
</code></pre> <p>I fit models with RAdam or AdamW over 5-8 epochs. In epoch 1 I keep the learning rate constant, and after that I do cosine decay. My starting transformer LR is 1e-4.</p> <p>A "trick" I found is to fit on all data only in the first epoch, and then drop neutral data and train only on positive and negative. For neutral data I fit separate models, early stopping on the neutral eval only. In general, I do early stopping, but sometimes also fixed epochs, and different types of CVs.</p> <p>As I always do, I never looked at single-model OOF scores to make judgements, but always fitted each fold at least 5 times and blended the result. This is much more indicative of the test submissions you make (where you at least blend all folds) and also gave me quite a good understanding of the random range.</p> <h3>Pseudo tagging</h3> <p>I had high hopes in the beginning for pseudo tagging in this competition, but it was way less helpful than I thought it would be. At first I was hopeful when adding the public test set to training boosted me from 724 to 728 on the first attempt, but it was luck. The models are in general really, really good at memorizing here, so for example if you add 728-level pseudo labels to train and then predict on test again, you will always be in the 728 range.</p> <p>However, based on that I found out that you can use it well to compress models.
So for example, you can run 5-bag CV, blend them, predict, and then feed this together with the train data to another model, and that model is good at predicting unseen data with fewer fits.</p> <h3>Final subs</h3> <p>My final subs are heavy blends of many, many model fits. To bring diversity to my subs, one of them uses a blend of regular models, and the other only pseudo models. Both use the same models for neutral data. The first one also uses an adaptation of <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/158613">JEM</a> that was posted. All diversity efforts were quite useless in the end, as both subs scored basically the same and both are under the top 4 subs of my potential private LBs.</p> <h3>Things that didn't work</h3> <p>There are too many things, and I am too tired to even think of them, but to mention a few:</p> <ul> <li>changing the loss function</li> <li>replacing <code>****</code> with original swear words</li> <li>other models</li> <li>pretraining on twitter data or finetuning language models</li> <li>including the extra sentiment data</li> <li>any of my post-processing attempts</li> </ul>
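<p>As a small usage note on the tokenization trick above: once a filtered merges file like <code>merges12.txt</code> has been written, the tokenizer has to be rebuilt from it. A minimal sketch, assuming the original <code>vocab.json</code> and the edited merges file are available locally (the paths are placeholders):</p>
<pre><code>from transformers import RobertaTokenizer

# vocab.json is the unchanged RoBERTa vocab; merges12.txt is the filtered merges
# file produced by the snippet above (both paths are placeholders).
tokenizer = RobertaTokenizer(vocab_file="vocab.json", merges_file="merges12.txt")

# With the special-character merges removed, runs like '!!!' are no longer fused
# into a single token, so spans such as 'awesome!' become reachable for the QA head.
print(tokenizer.tokenize(" that's awesome!!!"))
</code></pre>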
Tweet Sentiment Extraction
A Noob's Story : Silver in very First Competition (My Magical Experience)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>When this competition first started on March 24th , I had never worked on NLP problem before , not even the basic ones using count-vectorizer and tf-idf . This competition was unique and it pulled me towards itself , at that time I was just getting started with Kaggle and my plan was to just write kernels and never have I ever dreamed of even participating in a Kaggle competition this early in my Kaggle Journey. </p> <p>I wrote a simple EDA notebook and dived right in , from there my learning curve began . I got an overwhelming support from community and my notebook became the most upvoted for the competition . This gave me the confidence to start thinking of joining the competition .</p> <p>I learned everything I currently know about NLP during the course of this competition from lovey kernels written by Grandmasters and masters , from discussion forums, other useful resources . One thing led to another and after three months I have done three different NLP projects involving different kinds of transformer based models , became a Kaggle notebooks master and today the most precious of it all got a silver medal in my first competition finishing in top 2 percent (44th place). I am just very happy right now and this medal means a lot to me, it's like a dream coming true . I have come a long way and I hope this learning curve continues</p> <p>From March 24th I have worked 9hours/day on Kaggle and machine learning in general and I can now see that if you are willing to learn , miracles do happen . People have been claiming to find magic in this competition , well I have found something entirely else , a magic for my life .</p> <p>A very heartily congratulations to all those who survived the shakeups and won a medal , I don't know whether the shakeup is fair or not as a lot of people have got lucky and few unlucky, but that's how life is .</p> <p>I would also want to thank few people without which this could not have been possible * <a href="/abhishek">@abhishek</a> for the wonderful tutorials , tihis is a result of your public notebooks * <a href="/cdeotte">@cdeotte</a> for wonderful insights and kernels * <a href="/adityaecdrid">@adityaecdrid</a> for wonderful thread of things which can help improve score * <a href="/cpmpml">@cpmpml</a> for posting the failure thread , it motivated me a lot , looking forward to learn more from you and congratulations</p> <p>Along the way I found the best team mate ever <a href="/rftexas">@rftexas</a> , who is now like a brother to me,thanks for everything , I have learned a lot from you and also @mehdi who was a valuable addition to the team</p> <p>As this noob continues to learn , i will try to keep writing new and informative kernels . I thank the community to be kind and let noobs like me flourish , also I am sharing what I feel , if you downvote please let me know what I did wrong</p>
Jigsaw Multilingual Toxic Comment Classification
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to kaggle for hosting such an interesting and challenging competition! Multilingual NLP is something very interesting yet difficult. In hindsight I (Dieter) wish I’d spend less time on the tweet sentiment extraction competition and more on this one. </p> <h1>Brief Summary</h1> <p>Our solution is a simple blend of several transformer models (mainly xlm-roberta-large) paired with some post-processing. The architecture was the same as in public available kernels with the classification head taking either max+mean pooling of hidden states or the hidden state of the CLS token. We used a 3-step approach for training our models, where starting from 7 languages we fine-tuned to 3 languages and finished at a single language. As this 3-step approach results in distortion of global predictions we shifted languages individually by a factor as a post-processing step. </p> <h1>Detailed Summary</h1> <p>We are still astonished by the great result in such a short time. While @aerdem4 worked a bit longer on this competition and already had a good understanding of specific challenges @cpmpml and @christofhenkel joined right after the tweet sentiment competition a few hours before team merger deadline. Two days ago we were still at a 100+ spot and were hoping for a silver medal at best. But with persistence and the right amount of intuition what ideas to proceed with, and of course some luck we managed to climb right to the upper gold position.</p> <h2>Preprocessing</h2> <p>None. That's NLP in 2020 :D</p> <h2>Speeding up training</h2> <p>We did two things. One was to downsample negative samples to get a more balanced dataset and a smaller dataset at the same time. If you only use as many negative as positive then dataset is reduced 5x roughly, same for time to run one epoch. Some of our models were trained that way.</p> <p>Another speedup came from padding by batch. The main idea is to limit the amount of padding to what is necessary for the current batch. Inputs are padded to the length of the longest input in the batch rather than a fixed length, say 512. This is now a well known technique, and it has been used in previous competitions. It accelerates training significantly. <br> We refined the idea by sorting the samples by their length so that the samples in a given batch have similar length. This reduces even further the need for padding. If all inputs in the batch have the same length then there is no padding at all. </p> <p>Given samples are sorted, we cannot shuffle them in training mode. We rather shuffled batches. This yields an extra 2x speedup compared to the original padding by batch. Training one epoch for xlm-roberta-large on the first train set takes about 17 minutes on a V100 GPU.</p> <h2>Cross-validation</h2> <p>As time was short and hence we did not have many submissions to spare for getting LB feedback we worked on getting a reliable cross-validation. We used a mix of Group-3fold per language and simple 3fold of the validation data to represent the unknown languages ru,pt,fr as well as the known languages it, es and tr. which results in a 6fold scheme. 
So fold 1, for example, represents the case where es is not in valid.csv, which should behave the same as pt not being in valid. Taking the mean AUC of all folds was a very good proxy for the Public and Private LB.</p> <h2>Architectures</h2> <p>Like most teams, we used xlm-roberta-large as the main backbone with a simple classification head that either uses max+mean pooling or the hidden states of the CLS token. In total we used the following backbones, with their approximate weighting in our final ensemble in brackets.</p> <ul> <li>5x xlm-roberta-large (85%)</li> <li>1x xlm-roberta-base (5%)</li> <li>1x mBart-large (10%)</li> </ul> <h2>Training strategy</h2> <p>One main ingredient of our good result is a stepwise finetuning towards single-language models for tr, it and es. Let us illustrate the whole approach before explaining:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1424766%2Fd03ded105462fb5ba610393ba97e4ad8%2Fjigsaw%20arch2.001.jpeg?generation=1592909197036996&amp;alt=media" alt=""></p> <p>As input data we used the English jigsaw-toxic-comment-train.csv and its six translations available as public datasets, which combined are roughly 1.4M comments. I realized that training a model directly on the combined dataset is not optimal, as each comment appears 7x in each epoch (although in different languages) and the model overfits on that. So I divided the combined dataset into 7 stratified parts, where each part contains a comment only once. For each fold we then finetuned a transformer in a 3-step manner:</p> <ul> <li>Step 1: finetune to all 7 languages for 2 epochs</li> <li>Step 2: finetune only to the full valid.csv, which only has tr, it and es</li> <li>Step 3: 3x finetune to each language in valid, resulting in 3 models</li> </ul> <p>We then use the step 1 model for predicting ru, the step 2 model for predicting pt and fr, and the respective step 3 models for tr, it and es. Using the step 2 model for pt and fr gave a significant boost compared to using the step 1 model for those two, most likely due to the language similarity between it, es, fr and pt.</p> <p>We mainly used a batch size of 32 with gradient accumulation and a learning rate of 3e-6 with linear decay, with the AdamW optimizer. One thing worth mentioning is that we train on a max sequence length of 512 thanks to the dataloader mentioned above, which sorts the data by length and then randomly serves batches of same-length comments. Only the huge speed-up, paired with the non-padding of shorter comments, made using a 512 max length reasonable.</p> <p>Apart from one model which was built on top of a public kernel, all models were trained using pytorch and GPU.</p> <h2>Ensembling</h2> <p>For ensembling, each team member first did their own ensembling, and we then combined the models of the team members and public kernels by a weighted sum of rank percentiles.</p> <h2>Post-processing</h2> <p>Another key ingredient, which we luckily found on the last day, is post-processing of the final predictions. Although the languages are predicted by individual models, the competition metric is sensitive to the global rank of all predictions and is not language specific. So we took care that the languages have a correct relation to each other, by shifting the predictions of each language individually.
In our final submission, for example, we used the following factors:</p> <pre><code>test_df.loc[test_df["lang"] == "es", "toxic"] *= 1.06
test_df.loc[test_df["lang"] == "fr", "toxic"] *= 1.04
test_df.loc[test_df["lang"] == "it", "toxic"] *= 0.97
test_df.loc[test_df["lang"] == "pt", "toxic"] *= 0.96
test_df.loc[test_df["lang"] == "tr", "toxic"] *= 0.98
</code></pre> <p>We derived the factors by matching the test prediction mean with the public LB mean for each language individually. We had already made 6 submissions for probing the test set by setting one language to 1 and the rest to 0. Then we made sure the means were aligned to give similar experimental AUC. That post-processing moved us from 14th place to 4th place on the public LB!</p> <p>While this post-processing might also help other teams, we think it specifically fixes the divergence of global prediction distributions introduced by having 5 models predict 6 languages, and might not help much if a team used a single-model approach.</p> <p>Thanks for reading. Questions welcome.</p>
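<p>As a footnote to the "padding by batch" speed-up described earlier: the idea is to sort samples by length, cut batches from the sorted order, and shuffle whole batches instead of individual samples. The sketch below is an illustrative re-implementation under those assumptions, not the team's actual dataloader; the pad id of 1 (the usual XLM-R pad token) is also an assumption here.</p>
<pre><code>import random

def make_length_sorted_batches(encoded_texts, batch_size=32, seed=0):
    """encoded_texts: list of token-id lists of varying length."""
    # Sort indices by sequence length so each batch needs minimal padding.
    order = sorted(range(len(encoded_texts)), key=lambda i: len(encoded_texts[i]))
    batches = [order[i:i + batch_size] for i in range(0, len(order), batch_size)]
    # Samples inside a batch stay sorted; shuffle the batches themselves instead.
    random.Random(seed).shuffle(batches)
    return batches

def pad_batch(batch_ids, encoded_texts, pad_id=1):
    """Pad only up to the longest sequence in this particular batch."""
    max_len = max(len(encoded_texts[i]) for i in batch_ids)
    return [encoded_texts[i] + [pad_id] * (max_len - len(encoded_texts[i]))
            for i in batch_ids]
</code></pre>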
Tweet Sentiment Extraction
5th place solution & magic
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all, first of all, congrats winners! We were a little bit behind you guys. We enjoyed this competition very much. Due to the magic, this competition was a little bit different from standard ML competitions. It made us feel like we were solving some kind of puzzle, although that was also fun for us.</p> <h1>The Magic</h1> <p>I guess that this is just a bug introduced when they created this task. Here is a representative example. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F436401%2F1bcbc6606aa7a3e112698c5ddfac12ad%2FScreen%20Shot%202020-06-16%20at%205.37.10%20PM.png?generation=1592354249239961&amp;alt=media" alt=""></p> <p>The given original annotation is “onna”, but that is too weird. The true annotation should be “miss” (this is a negative sentence). We think that the host applied a slice obtained on the normalized text (with consecutive spaces removed) to the original text, which has plenty of spaces, emojis, or emoticons. Thus, this competition's pipeline should be as follows.</p> <ul> <li>Recover the true annotation from the buggy annotation (pre-processing).</li> <li>Train the model with the true annotation.</li> <li>Predict the right annotation.</li> <li>Project back the right annotation to the buggy annotation (post-processing).</li> </ul> <p>We call the pre-processing and post-processing the magic. After we found that, our score jumped from 0.713 to 0.721. Maybe the 4 steps can also be done with an end-2-end model, as some people claimed that they did not use any postprocessing. From 0.721 to 0.731, we improved individual models, ensembled models, and improved the pre- and post-processing.</p> <h1>Model</h1> <ul> <li>We use RoBERTa and BERTweet.</li> <li>Ensembled all 5 members’ models at the char level.</li> <li>We do not do special things in our model training.</li> <li>We only use train.csv. (no sentiment140 and complete-tweet-sentiment-extraction-data)</li> </ul> <h1>Post-process improvement</h1> <p>Assuming the (token-level) model is perfect, we maximize the Jaccard score with the pre- and post-processing. This is an example: <a href="https://www.kaggle.com/tkm2261/pre-postprosessing-guc">https://www.kaggle.com/tkm2261/pre-postprosessing-guc</a></p> <pre><code>&gt;&gt;&gt;&gt; FOLD Jaccard all = 0.9798682269792468
&gt;&gt;&gt;&gt; FOLD Jaccard neutral = 0.9961648726550028
&gt;&gt;&gt;&gt; FOLD Jaccard positive = 0.969571077575057
&gt;&gt;&gt;&gt; FOLD Jaccard negative = 0.96793968688976
</code></pre> <p>Under the perfect-model assumption, this result can be interpreted as: we could achieve 0.9798682269792468 if the model were perfect. Then we apply this postprocessing to our model predictions. This pipeline worked pretty well. Any member can test their own postprocessing idea and share it with the team if it improves the score.</p> <p>In summary, to our knowledge, this competition is a competition that reproduces embedded human errors. When I found the magic, I was also a little bit disappointed. But maybe noticing such bugs in data should be one skill of a DS.
We should not just apply models but dive into the data carefully.</p> <p>If you have any questions, please feel free to post them in this thread.</p> <p>Thank you.</p>
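<p>For reference, the word-level Jaccard score that the pre- and post-processing maximizes is the official competition metric; the standard implementation is:</p>
<pre><code>def jaccard(str1, str2):
    """Word-level Jaccard similarity used as the competition metric."""
    a = set(str1.lower().split())
    b = set(str2.lower().split())
    c = a.intersection(b)
    return float(len(c)) / (len(a) + len(b) - len(c))

print(jaccard("is back home gonna miss every one", "miss"))  # 1/7
</code></pre>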
Jigsaw Multilingual Toxic Comment Classification
43rd place. Sampling with Adversarial Validation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and all kagglers who were generous enough to share their expertise. The learning was rich thanks to: Dezso Ribli's <a href="/riblidezso">@riblidezso</a> - <a href="https://www.kaggle.com/riblidezso/finetune-xlm-roberta-on-jigsaw-test-data-with-mlm">Finetune XLM-Roberta on Jigsaw test data with MLM</a>, Alex Shonenkov <a href="/shonenkov">@shonenkov</a> with Pytorch <a href="https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta">[TPU-Training] Super Fast XLMRoberta</a>, DimitreOliveira <a href="/dimitreoliveira">@dimitreoliveira</a> - <a href="https://www.kaggle.com/dimitreoliveira/jigsaw-tpu-optimized-training-loops">Jigsaw - TPU optimized training loops</a>, Abhishek <a href="/abhishek">@abhishek</a> with his invaluable 'real-time coding' youtube videos and <a href="https://www.kaggle.com/abhishek/i-like-clean-tpu-training-kernels-i-can-not-lie">I Like Clean TPU Training Kernels &amp; I Can Not Lie</a>, Xhlulu <a href="/xhlulu">@xhlulu</a> - <a href="https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta">Jigsaw TPU: XLM-Roberta</a>, Michael Kazachok’s <a href="/miklgr500">@miklgr500</a> translations <a href="https://www.kaggle.com/miklgr500/jigsaw-train-multilingual-coments-google-api">dataset</a> and <a href="https://www.kaggle.com/miklgr500/jigsaw-tpu-bert-with-huggingface-and-keras">Jigsaw TPU: BERT with Huggingface and Keras</a></p> <h3>Adversarial Validation.</h3> <p>The main difference of the solution is the way the training and validation sets were sampled. I used adversarial validation in this <a href="https://www.kaggle.com/isakev/jigsaw-adversarial-validation-folds-0-1">kernel (previous version)</a> to sample <a href="https://www.kaggle.com/miklgr500/jigsaw-train-multilingual-coments-google-api">translations</a> by Michael Kazachok to pick samples ‘most similar to test set’ (280k-480k samples for training and 4k samples for validation (+8k original validation.csv). Adversarial Validation, the idea successfully used often across Kaggle (e.g. <a href="https://www.kaggle.com/tunguz/quora-adversarial-validation">Quora Adversarial Validation </a> and <a href="https://www.kaggle.com/konradb/adversarial-validation">Adversarial validation</a></p> <ul> <li>selecting samples this way, at least, did not make the performance worse than lucky picks of random sampling</li> </ul> <h3>The rest.</h3> <p>The best submission is the blend of predictions of 6 models, all Roberta-XLM-large MLM, 3 of them are based on the MLM finetuned to test set by <a href="/riblidezso">@riblidezso</a> Dezso Ribli (link)[https://www.kaggle.com/riblidezso/finetune-xlm-roberta-on-jigsaw-test-data-with-mlm]. Diversity to models comes from different loss functions (apart from BCE, used focal loss and MSE (with soft labels)), 3 models with 4 top hidden layers’ outputs concatenated, 1 model sums those 4 layers. 
3 models use differential learning rates for the head and the transformer as in <a href="https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large">this kernel</a>, different max lengths of 192 and 256 (for samples that exceed max_len=192, concatenating the last 25%*max_len of the text to the beginning of the text), Alex Shonenkov’s <a href="https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta">kernel</a> with the addition of OpenSubtitles data, training with and without English text, and using training data from 200k to 480k samples filtered by similarity to the test set and/or similarity in language distribution and/or similarity in the ‘toxic’ target distribution.</p> <p>All were run in TensorFlow and Keras on TPU only. The GPU was used for predictions only, or, as in the case of Adversarial Validation, when the smaller RoBERTa-base model was employed.</p>
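<p>For readers unfamiliar with adversarial validation, here is a minimal sketch of the sampling idea described above: train a classifier to tell train comments from test comments, then keep the train samples that look most test-like. The TF-IDF features and logistic regression are deliberately simple illustrative choices (the linked kernel used a RoBERTa-base model), and <code>train_texts</code>/<code>test_texts</code> are assumed to be pandas Series of comments.</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# train_texts: candidate (translated) training comments, test_texts: test comments.
texts = pd.concat([train_texts, test_texts], ignore_index=True)
y = np.r_[np.zeros(len(train_texts)), np.ones(len(test_texts))]  # 1 = "comes from test"

X = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2)).fit_transform(texts)
clf = LogisticRegression(max_iter=1000)

# Out-of-fold probability that each comment looks like the test distribution.
p_test_like = cross_val_predict(clf, X, y, cv=3, method="predict_proba")[:, 1]

# Keep the most test-like training samples (the sample size is a tunable assumption).
keep = np.argsort(-p_test_like[: len(train_texts)])[:400_000]
sampled_train = train_texts.iloc[keep]
</code></pre>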
Tweet Sentiment Extraction
21st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone, first of all I would like to thank my great teammates <a href="/ryoya0902">@ryoya0902</a>, <a href="/mks2192">@mks2192</a>, <a href="/shahules">@shahules</a> and <a href="/kpriyanshu256">@kpriyanshu256</a> for such an interesting competition journey. And congratulations to all the winners!</p> <p>It's been a great competition; we spent a lot of time on it and are finally glad to share that all the hard work paid off.</p> <h2>Quick Summary of things that worked</h2> <ul> <li>RoBERTa base as our base model with MAX_LEN = 168</li> <li>Used preprocessing (this boosted our local CV from 0.711 to 0.717)</li> <li>Trained for 8 folds (8 folds gave a better CV in our case)</li> <li>At last we used the so-called magic (post-processing), which boosted our LB from 0.717 to 0.724</li> <li>Validating multiple times per epoch also improved our CV.</li> </ul> <h2>Things that didn't work</h2> <ul> <li>We tried to solve this problem as NER and as unsupervised text selection; they didn't work.</li> <li>We tried various model architectures, but none of them worked well except <code>roberta-base</code>.</li> <li>We tried training a separate model for each sentiment, but that didn't work either.</li> <li>We also tried a lot of preprocessing techniques like removing all the noisy samples, cleaning text, etc., but none worked.</li> <li>We also tried BERTweet; it didn't work for us either.</li> <li>We also tried augmentations like synonym replacement, etc.</li> <li>The augmentation kernel can be found <a href="https://www.kaggle.com/rohitsingh9990/data-augmentation-by-synonym-replacement">here</a></li> </ul> <h2>Preprocessing that worked</h2> <p>We know that there is a huge amount of noise in samples where some extra space is present between words, for example:</p> <pre><code>text: is back home gonna miss every one
selected_text: onna
after_preprocessing: miss
</code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3982638%2F374c9e6d4fa5521778f8268b26bb4f53%2FScreenshot%202020-06-17%20at%207.46.08%20AM.png?generation=1592360225973868&amp;alt=media" alt=""></p> <ul> <li>The catch was that you need to shift the start and end indices by the number of extra spaces.</li> <li>The preprocessing kernel can be found <a href="https://www.kaggle.com/rohitsingh9990/preprocessing-that-worked?scriptVersionId=36556534">here</a></li> </ul> <h2>Training</h2> <ul> <li>After applying preprocessing, we trained our model for 8 folds</li> <li>We used the same model architecture as shared by <a href="/abhishek">@abhishek</a> &gt; Note: Need to clean the training kernel, will share soon.</li> </ul> <h2>Inference</h2> <ul> <li>After getting the <code>selected_text</code> from the model we apply our <code>post processing</code>; it's nothing but the reverse engineering of our preprocessing kernel shared above.</li> <li>It's almost similar to what's been discussed <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159245">here</a></li> <li>And finally the inference kernel can be found <a href="https://www.kaggle.com/rohitsingh9990/roberta-prepost-0-726">here</a></li> </ul> <p>At
last, I would like to thank <a href="/abhishek">@abhishek</a> and <a href="/cdeotte">@cdeotte</a> for their amazing starter kernels and their spirit of public sharing, which helped other kagglers a lot.</p>
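<p>A rough sketch of the index-shift idea from the "Preprocessing that worked" section: count how many characters a whitespace-collapsing step would have removed before the noisy span, and shift the span forward by that amount. This is a simplified illustration under the assumption that collapsing runs of whitespace was the only normalization; it is not the code from the linked kernel.</p>
<pre><code>import re

def recover_selected_text(text, noisy_selected):
    """Shift the noisy span forward by the number of extra spaces in front of it."""
    start = text.find(noisy_selected)
    if start == -1:
        return noisy_selected
    # Characters that a whitespace-collapsing step would have removed before the span.
    prefix = text[:start]
    extra = len(prefix) - len(re.sub(r"\s+", " ", prefix))
    shifted = text[start + extra : start + extra + len(noisy_selected)]
    return shifted.strip() if extra else noisy_selected
</code></pre>
<p>Post-processing is the reverse mapping: the model's clean prediction is shifted back so that the submitted span matches the noisy labelling convention.</p>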
Herbarium 2020 - FGVC7
10th place: instant collaboration
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Herbarium 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi, community! </p> <p>First of all I want to say thanks to organizers and participants for such interesting competition.</p> <p>Here I'll present the details of our model. This model is weighted ensemble of predictions of three. First two introduces my teammate, the third one me.</p> <p>&gt; I am Len(Leheng) Li, i introduce the part of network and my team mate Emil introduce model ensemble. Our main work is based on <a href="https://arxiv.org/pdf/1912.02413.pdf">BBN</a>(CVPR2020 oral, <a href="https://github.com/Megvii-Nanjing/BBN">code</a>), it's main contribution is dynamically paying attention to the tailed data. we than replace resnet50 with resnest101 and Efficient net b3. <img src="https://i.ibb.co/nfYTxVn/1.png" width="800"> For resnest101, we first center-crop 650*650 image and resize to 256*256. It got 0.61094. For Efficient net b3, we first center-crop 650*650 image and resize to 300*300. It got 0.60282. I am using 4*2080ti and 1*v100 16g, which is a drop in the bucket for this challenge. I only train my model in 20 epochs. I think performance will higher if I train more times.</p> <p>&gt; At last, many thanks to my team mate Emil, it’s the first time i collaborate with foreign people(China-Russia). I really enjoy the experience with Emil. And thanks for organizer host this challenge.</p> <p>And my part:</p> <p>&gt; ## 1. Model description I've tried to squeeze from se_resnext50_32x4d feature generator as much as possible. The main source of ideas was this A Bag of Freebies <a href="https://youtu.be/iRrDwzcxg08">video</a>. There were 32k of classes that hierarchically arranged in superclasses with smaller number of distinct labels. In my model I've pursued them all with different weights before loss. The loss was a weighted FocalLoss from this paper. I've analyzed in detail how much that improved performance, but it seems obvious that one should try to weight such unbalanced data. As info says:</p> <p>&gt; Each category has at least 1 instance in both the training and test datasets. Note that the test set distribution is slightly different from the training set distribution. The training set contains species with hundreds of examples, but the test set has the number of examples per species capped at a maximum of 10.</p> <p>&gt; I've collected statistics of mean and variance of whole dataset to normalize input images and that has given a little boost. The next improvement I've introduced was decaying optimizer learning rate. It starts from 30e-5 and divided by two each time every 5 or 10 steps. I've used Adam optimizator. There was an attempt to switch oprimizators during training from Adam to SGD and vice versa. I called this paradigm AdamEva optimizer. The attempt crashed against the constraints of pytorch-lightning framework that allow you to use several optimizers but each optimizer has it's own train call.</p> <p>&gt; Finally, about augmentations and TTA. Random crop, random flip of herbarium image improves performance but not a Color Jitter, that's my insight. For the TTA I've sampled 4 inputs. I've done 3 times train augmentation than 1 more time just normalization and resize. 
After that, the logits of these four samples are averaged to produce the output logits.</p> <p>&gt; ## 2. What also didn't work &gt; * Metric Learning I've not figured out how to do metric learning for such a big dataset with such a number of unique labels. I've engineered how to store this amount of data but haven't engineered a way to use it.</p> <p>&gt; * Discriminator Network The main idea is to train a network to predict whether an image comes from the train or the test set. This might help to create a robust validation set as a part of the train set. A good validation set gives you the ability to know how good your machine learning model is. Sounds obvious, but it isn't easy to find one when your dataset is very imbalanced. By the way, I've used a random subset of the train set as the val set, and that isn't right. In my opinion this (choosing the validation set) is a critical problem to solve first in the future!</p> <p>&gt; ## 3. What went into production (best submissions) I've mixed our models with two approaches: averaging of probabilities and averaging of logits. The first didn't work well, while the second showed better results than both of our networks.</p>
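<p>For completeness, here is a minimal PyTorch sketch of a class-weighted focal loss of the kind mentioned above; the per-class weights and the gamma value are placeholders, and the exact weighting over the class hierarchy used in the solution is not reproduced.</p>
<pre><code>import torch
import torch.nn.functional as F

class WeightedFocalLoss(torch.nn.Module):
    def __init__(self, class_weights, gamma=2.0):
        super().__init__()
        # class_weights: float tensor of shape [num_classes], e.g. inverse class frequency.
        self.register_buffer("class_weights", class_weights)
        self.gamma = gamma

    def forward(self, logits, targets):
        # Multi-class focal loss: (1 - p_t)^gamma * weighted cross-entropy.
        log_probs = F.log_softmax(logits, dim=-1)
        ce = F.nll_loss(log_probs, targets, weight=self.class_weights, reduction="none")
        p_t = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
        return ((1.0 - p_t) ** self.gamma * ce).mean()
</code></pre>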
Jigsaw Multilingual Toxic Comment Classification
Second silver in one week
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations all, I am happy to belong to such an amazing community and happy to see new expert like me,new Master like my teammate <a href="/futureboykid">@futureboykid</a> and new GM. see you soon in another leaderboard !</p>
Tweet Sentiment Extraction
13th Place - From LB 710 to LB 726 - How To
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to my wonderful teammates Shai <a href="/sgalib">@sgalib</a>, Yasin <a href="/mykttu">@mykttu</a>, Cooleel <a href="/cooleel">@cooleel</a> . Together we achieved Gold and I am so excited to receive my 5th competition Gold and become <strong>Competition Grandmaster</strong> ! </p> <p>Also congrats to Shai <a href="/sgalib">@sgalib</a> for becoming Competition Grandmaster. And Cooleel <a href="/cooleel">@cooleel</a> for becoming Competition Master!</p> <h1>Single TensorFlow Roberta-base</h1> <p>Our best final submission is a single TensorFlow RoBERTa model. We start with my public notebook <a href="https://www.kaggle.com/cdeotte/tensorflow-roberta-0-705">here</a> which has CV 0.705, Public LB 0.709, Private LB 0.713. Then we make 10 changes to increase to CV 0.718, Public LB 0.724, Private LB 0.726</p> <h1>Validation</h1> <p>We tested dozens, maybe hundreds of ideas. Since training data was small, for each idea, we ran the local CV 10 times with 10 different K Fold random seeds and averaged the scores (that's 5 folds times 10 equals 50). Each change below increased CV average by at least 0.001</p> <h1>1. Do not remove extra white space.</h1> <p>The extra white space contains signal. For example if text is <code>"that's awesome!"</code> then selected text is <code>awesome</code>. However if text is <code>" that's awesome!"</code> then selected text is <code>s awesome</code>. The second example has extra white space in the beginning of text. And resultantly the selected text has an extra proceeding letter.</p> <h1>2. Break apart common single tokens</h1> <p>RoBERTa makes a single token for <code>"..."</code>, so your model cannot chose <code>"fun."</code> if the text is <code>"This is fun..."</code>. So during preprocess, convert all single <code>[...]</code> tokens into three <code>[.][.][.]</code> tokens. Similarily, split <code>"..", "!!", "!!!"</code>. </p> <h1>3. Underestimate train targets</h1> <p>Jaccard score is higher is you underestimate versus overestimate. Therefore if text is <code>" Matt loves ice cream"</code> and the selected text is <code>"t love"</code>. Then train your model with selected text <code>"love"</code> not selected text <code>"Matt love"</code>. All public notebook do the later, we suggest the former.</p> <h1>4. Modified Question Answer head</h1> <p>First predict the end index. Then concatenate the end index logits with RoBERTa last hidden layer to predict the start index.</p> <pre><code># ROBERTA bert_model = TFRobertaModel.from_pretrained('roberta-base') x = bert_model(q_id,attention_mask=q_mask,token_type_ids=q_type) # END INDEX HEAD x2 = tf.keras.layers.Dropout(0.1)(x[0]) x2b = tf.keras.layers.Dense(1)(x2) x2 = tf.keras.layers.Flatten()(x2b) x2 = tf.keras.layers.Activation('softmax')(x2) # START INDEX HEAD x1 = tf.keras.layers.Concatenate()([x2b,x[0]]) x1 = tf.keras.layers.Dropout(0.1)(x1) x1 = tf.keras.layers.Dense(1)(x1) x1 = tf.keras.layers.Flatten()(x1) x1 = tf.keras.layers.Activation('softmax')(x1) # MODEL model = tf.keras.models.Model(inputs=[q_id, q_mask, q_type], outputs=[x1,x2]) </code></pre> <h1>5. 
Use label smoothing</h1> <pre><code>loss = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.2) </code></pre> <h1>6. Mask words</h1> <p>Use data loader to randomly replace 5% of words with <code>[mask]</code> token 50264. Within your dataloader use the following code. We also maintain where the special tokens are so that they don't get replaced</p> <pre><code>r = np.random.uniform(0,1,ids.shape) ids[r&amp;lt;0.05] = 50264 ids[tru] = self.ids[indexes][tru] </code></pre> <h1>7. Decay learning rate</h1> <pre><code>def lrfn(epoch): dd = {0:4e-5,1:2e-5,2:1e-5,3:5e-6,4:2.5e-6} return dd[epoch] lr = tf.keras.callbacks.LearningRateScheduler(lrfn, verbose=True) </code></pre> <h1>8. Train each fold 100% data for submit</h1> <p>After using normal 5 fold and early stopping, note how many epochs are optimal. Then for your LB submission, run your 5 folds with the fixed epoch number you found using 100% data each fold.</p> <h1>9. Sample weight positive and negative</h1> <p>In TensorFlow Keras it is easy to make certain training samples more important. The normal output from <code>class DataGenerator(tf.keras.utils.Sequence)</code> is <code>(X,y)</code>. Instead output <code>(X,y,w)</code> where weight is the same shape as <code>y</code>. Then make <code>w=2</code> for all the positive and negative targets and <code>w=1</code> for all the neutral targets. Then train with the usual TensorFlow Keras calls</p> <pre><code>t_gen = DataGenerator() model.fit(t_gen) </code></pre> <p>And volia! CV and LB increase 0.001</p> <h1>10. Post process</h1> <p>The above 9 changes already predict much of the noise. For example the above has no problem with the following 2 examples. Text is <code>" that's awesome!!!"</code> with selected text <code>"s awesome!"</code>. And <code>" I'm thinking... wonderful."</code> with selected text <code>". wonderful"</code>. In each case, the model sees the leading double white space and extracts the single proceeding character.</p> <p>However the model cannot break a single letter off a word like text <code>"went fishing and loved it"</code> with selected text <code>"d loved"</code>. This would require breaking a <code>"d"</code> off of the word <code>"and"</code>. For these difficult cases, we use post process which increase CV 0.0025 and LB 0.0025</p> <pre><code># INPUT s=predicted, t=text, ex=sentiment # OUTPUT predicted with PP def applyPP(s,t,ex): t1 = t.lower() t2 = s.lower() # CLEAN PREDICTED b = 0 if len(t2)&amp;gt;=1: if t2[0]==' ': b = 1 t2 = t2[1:] x = t1.find(t2) # PREDICTED MUST BE SUBSET OF TEXT if x==-1: print('CANT FIND',k,x) print(t1) print(t2) return s # ADJUST FOR EXTRA WHITE SPACE p = np.sum( np.array(t1[:x].split(' '))=='' ) if (p&amp;gt;2): d = 0; f = 0 if p&amp;gt;3: d=p-3 return t1[x-1-b-d:x+len(t2)] # CLEAN BAD PREDICTIONS if (len(t2)&amp;lt;=2)|(ex=='neutral'): return t1 return s </code></pre> <h1>Other ideas</h1> <p>Our team tried tons of more ideas which may have worked if we spent more time to refine them. Below are some interesting things we tried: * replacing <code>****</code> with the original curse word. * using part of speech information as an additional feature * using NER model predictions as additional features * compare test text with train text using Jaccard and use train selected text when <code>jac &amp;gt;= 0.85</code> and <code>text length &amp;gt;= 4</code> . (This gained 0.001 on public LB but didn't change private LB). 
* pretrain with Sentiment140 dataset as MLM (masked language model) * pseudo label Sentiment140 dataset and pretrain as QA (question answer model) * Train a BERT to choose the best prediction from multiple BERT predictions. * Stack BERTs. Append output from one BERT to the QA training data of another BERT. * Tons of ensembling ideas like Jaccard expectation, softmax manipulations, voting ensembles, etc</p> <h1>Thank you</h1> <p>Once again, thank you to my wonderful teammates <a href="/sgalib">@sgalib</a> <a href="/mykttu">@mykttu</a> <a href="/cooleel">@cooleel</a> . And thank you to Kaggle for another fun competition.</p>
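<p>Relating to item 2 above ("Break apart common single tokens"), a rough sketch of the idea at the token-id level is shown below. Whether <code>"..."</code> or <code>"!!"</code> actually map to a single fused token depends on the tokenizer files, so the ids are looked up rather than hard-coded; this helper is an assumption for illustration, not the code used in the solution (which also has to keep the character offsets consistent).</p>
<pre><code>from transformers import RobertaTokenizer

tok = RobertaTokenizer.from_pretrained('roberta-base')

# Build a mapping from a fused token id to the repeated single-character ids.
fused = {}
for s in ['...', '..', '!!', '!!!']:
    ids = tok.encode(s, add_special_tokens=False)
    if len(ids) == 1:                       # only handle tokens that are truly fused
        single = tok.encode(s[0], add_special_tokens=False)
        fused[ids[0]] = single * len(s)     # e.g. '...' becomes three '.' tokens

def split_fused(token_ids):
    """Replace each fused special-character token by its single-character pieces."""
    out = []
    for t in token_ids:
        out.extend(fused[t] if t in fused else [t])
    return out
</code></pre>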
iWildCam 2020 - FGVC7
LB #1 Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iWildCam 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sorry for the delay in posting our solution. Thanks to other teams for posting their solutions and the host for hosting this competition.</p> <p>We would like to share our solution with the competition participants and also the fine-grained recognition community.</p> <p>In general, we utilize Efficientnet-b6 and SeResNeXt101 as our backbone networks, several training methods and tricks are used to help the model to learn generalizable feature representations across camera locations. Note that, our method achieves 0.953 public and 0.906 private score, while the highest submission which we did not pick was 0.953 and 0.917 on public and private leaderboards, respectively. In details,</p> <ol> <li>A mixup strategy for the long-tailed data distribution which utilizes two data samplers, one is the uniformed sampler which samples each image with a uniformed probability, another is a reversed sampler which samples each image with a probability proportional to the reciprocal of corresponding class sample size. The images from those two data samplers are then mixed-up for better performance.</li> <li>An auxiliary classifier for different locations is added to our network, in the front of classification for location, the feature will go through a gradient reversal layer to ensure the model can learn a generalizable feature across locations.</li> <li>Adversarial training (AdvProp) was utilized to learn a noise robust representation.</li> <li>We ran MegaDetector V4 on all images.</li> <li>The bbox with the max confidence in each image is cropped to train a bbox model.</li> <li>We train two versions of each model, one with original full image and one with the cropped max confidence bbox.</li> <li>During test, for the image which has at least one bbox, the prediction is a weighted average of bbox model and full image model(0.3 full image + 0.7 bbox), for the images which have no bbox, we will use the full image model.</li> <li>We first obtain the predictions of Efficientnet-b6 and SeResNeXt101 using the aforementioned procedure, then the final prediction is a simple average of the two network architectures.</li> <li>We average the predictions which are clustered by location and datetime, as the sequence annotation appears to be noisy.</li> </ol>
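<p>A small sketch of the "reversed" sampler from point 1, where each image is drawn with probability proportional to the reciprocal of its class size; the batch size and the toy labels are placeholders, and the mixup with the uniformly sampled batch is omitted.</p>
<pre><code>import numpy as np

def reversed_sampling_weights(labels):
    """Per-sample probabilities proportional to the reciprocal of the class size."""
    labels = np.asarray(labels)
    _, inverse, counts = np.unique(labels, return_inverse=True, return_counts=True)
    w = (1.0 / counts)[inverse]          # rare classes get large per-sample weight
    return w / w.sum()

# Example: one "reversed" batch that would be mixed up with a uniformly sampled batch.
labels = np.random.randint(0, 5, size=1000)   # toy labels just for illustration
p = reversed_sampling_weights(labels)
reversed_batch = np.random.choice(len(labels), size=32, replace=False, p=p)
</code></pre>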
Jigsaw Multilingual Toxic Comment Classification
100th place solution (+ GitHub)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I did not place so well this time, but still got a medal, so I think it still worth it to share my solution. Congratulation to all winners and new GMs. This was a very interesting competition, being able to work with huge datasets and models was very challenging. Luckily my best submission was the last one that I made 😄. I had many more Ideas to try including using external data but had no time. I also have a <a href="https://github.com/dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification">Git repository</a> with my experiments.</p> <h2>Quick summary</h2> <ul> <li>Model: 5-Fold <code>XLM_RoBERTa large</code> using the last layer <code>CLS</code> token. <ul><li>5-Fold <code>XLM_RoBERTa large</code> concatenated <code>AVG</code> and <code>MAX</code> last layer pooled with 8 multi-sample dropout.</li></ul></li> <li>Inference: I have used <code>TTA</code> for each sample, I predicted on <code>head</code>, <code>tail</code>, and a mix of both tokens.</li> <li>Labels: <code>toxic</code> cast to <code>int</code> then pseudo-labels on the test set.</li> <li>Dataset: For the 1st model I used a 1:2 ratio between toxic and not toxic samples, for the 2nd model use 1:1 ratio, also did some basic cleaning. (details below)</li> <li>Framework: <code>Tensorflow</code> with TPU.</li> </ul> <h2>Detailed summary</h2> <h3>Model &amp; training</h3> <ul> <li>1st model <a href="https://github.com/dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification/blob/master/Model%20backlog/Train/99-jigsaw-fold1-xlm-roberta-large-best.ipynb">link for the 1st fold training</a> <ul><li>5-Fold XLM_RoBERTa large with the last layer <code>CLS</code> token</li> <li>Sequence length: 192</li> <li>Batch size: 128</li> <li>Epochs: 4</li> <li>Learning rate: 1e-5</li> <li>Training schedule: Exponential decay with 1 epoch warm-up</li> <li>Losses: BinaryCrossentropy on the labels cast to <code>int</code></li></ul></li> </ul> <p>Trained the model 4 epochs on the train set then 1 epoch on the validation set, after that 2 epochs on the test set with pseudo-labels.</p> <ul> <li>2nd model <a href="https://github.com/dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification/blob/master/Model%20backlog/Train/136-jigsaw-fold1-xlm-roberta-ratio-1-8-sample-drop.ipynb">link for the 1st fold training</a> <ul><li>5-Fold XLM_RoBERTa large with the last layer <code>AVG</code> and<code>MAX</code> pooled concatenated then fed to 8 multi-sample dropout</li> <li>Sequence length: 192</li> <li>Batch size: 128</li> <li>Epochs: 3</li> <li>Learning rate: 1e-5</li> <li>Training schedule: Exponential decay with 10% of steps warm-up</li> <li>Losses: BinaryCrossentropy on the labels cast to <code>int</code></li></ul></li> </ul> <p>Trained the model 3 epochs on the train set then 2 epochs on the validation set, after that 2 epochs on the test set with pseudo-labels.</p> <h3>Datasets</h3> <ul> <li>1st model <a href="https://github.com/dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification/blob/master/Datasets/jigsaw-data-split-roberta-192-ratio-2-upper.ipynb">dataset creation</a> 1:2 toxic to non-toxic samples, 400830 total 
samples.</li> <li>2nd model <a href="https://github.com/dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification/blob/master/Datasets/jigsaw-data-split-roberta-192-ratio-1-clean-polish.ipynb">dataset creation</a> 1:1 toxic to non-toxic samples, 267220 total samples.</li> </ul> <p>Both models used a similar dataset: upper-cased text and only a sample of the negative data. Data cleaning was just the removal of <code>numbers</code>, <code>#hash-tags</code>, <code>@mentions</code>, <code>links</code>, and <code>multiple white spaces</code>. The tokenizer was <code>AutoTokenizer.from_pretrained('jplu/tf-xlm-roberta-large', lowercase=False)</code>, like many others used.</p> <h3>Inference</h3> <p>For inference, I predicted on the <code>head</code>, <code>tail</code>, and a mix of both <code>(50% head &amp; 50% tail)</code> sentence tokens.</p>
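<p>For reference, the head/tail handling used for the TTA described above can be sketched as follows; the 192-token budget and the 50/50 head/tail split mirror the description, but the helper itself is an assumption rather than the repository code, and special tokens are omitted for brevity.</p>
<pre><code>def head_tail_ids(token_ids, max_len=192, head_frac=0.5):
    """Return head-only, tail-only and mixed (head + tail) crops of a token-id list."""
    n_head = int(max_len * head_frac)
    n_tail = max_len - n_head
    head = token_ids[:max_len]                      # first max_len tokens
    tail = token_ids[-max_len:]                     # last max_len tokens
    # First n_head tokens plus last n_tail tokens (whole sequence if it is short).
    mixed = token_ids[:n_head] + token_ids[max(n_head, len(token_ids) - n_tail):]
    return head, tail, mixed
</code></pre>
<p>At inference time the predictions for the three crops are simply averaged.</p>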
Tweet Sentiment Extraction
18th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners and thanks to Kaggle for the competition. </p> <h3>Summary of my solution</h3> <ul> <li><strong>Model:</strong> RoBERTa base as the only model, my score is single 5 fold ensamble (Abhishek folds)</li> <li><strong>Data cleaning:</strong> Recovered the original labels with "magic" being inclusive with words</li> <li><strong>Preprocessing:</strong> Replaced incorrect words, repeating characters, urls, and "..n" -&gt; ". . . n" <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/150840#847641">explained</a></li> <li><strong>Label smoothing:</strong> 90% on the text tokens and 10 % on padding and the first four tokens, also did 0.90%/n for each occurrence of the label</li> <li><strong>Dropout:</strong> Changed all dropout to Multi-Sample Dropout from <a href="https://arxiv.org/pdf/1905.09788.pdf">paper</a> and <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129840">Google QUEST Q&amp;A 1st place</a></li> <li><strong>Input Shift:</strong> Shifting the input tokens around the padding randomly while training</li> <li><strong>Postprocessing:</strong> Rather simple "magic'" script to recover the original labels (will post kernel)</li> <li><strong>Improved Find All:</strong> There are many bugs introduced by just naively getting the label index</li> </ul> <h3>What didn't work (excluding most things)</h3> <ul> <li><strong>Pseudo Labeling:</strong> The labels were to easy for the model and it just resulted in overfitting </li> <li><strong>Different learning rates:</strong> Tried different rate for the head, for each layer etc. Nothing worked </li> <li><strong>Language Model pretraining:</strong> <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/146262">Discussed in detail here</a></li> <li><strong>Different models:</strong> Tried Roberta Large, Bert, Electra and BERTweet, they all did much worse </li> <li><strong>Adversarial Training:</strong> <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/143764">Discussed here</a></li> <li><strong>Under/Oversample:</strong> Tried various configurations such as constant amount of each sentiment, removing short tweets, under-sampling based on sentiment etc. No good results</li> </ul>
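<p>A minimal PyTorch sketch of the Multi-Sample Dropout head mentioned above; the hidden size, the number of dropout samples and the dropout rate are placeholders rather than the values used in the solution.</p>
<pre><code>import torch
import torch.nn as nn

class MultiSampleDropoutHead(nn.Module):
    """Average the classifier output over several independent dropout masks."""
    def __init__(self, hidden_size=768, num_labels=2, num_samples=5, p=0.5):
        super().__init__()
        self.dropouts = nn.ModuleList([nn.Dropout(p) for _ in range(num_samples)])
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, hidden):                      # hidden: [batch, hidden_size]
        logits = [self.classifier(drop(hidden)) for drop in self.dropouts]
        return torch.stack(logits, dim=0).mean(dim=0)
</code></pre>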
Plant Pathology 2020 - FGVC7
[placeholder] (8th->5th) place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to everyone. Many thanks to Kaggle and the organizers for holding this amazing competition. And special thanks to my teammates. <a href="/aziz69">@aziz69</a> <a href="/datadote">@datadote</a></p> <p>We were concerned about being overfitting. I don't know if it worked out as intended, but I would like to share a solution anyway. :)</p> <h2>Agumentation</h2> <ul> <li>VFlip, HFlip, Rotate, RandomResizedCrop</li> </ul> <h2>Backbone</h2> <ul> <li>resnet50, densenet121, inceptionresnetV2, efficientnet-b7 with noisy-studient EfficientNet worked well for multiple diseases compared to other models relatively.</li> </ul> <h2>loss</h2> <ul> <li>Categorical Cross Entropy + Focal Loss</li> </ul> <h2>Image sizes</h2> <ul> <li>812, 1024 (The score improved when the image size was 800 ~.)</li> </ul> <h2>Other</h2> <ul> <li>Pseudo Labeling</li> <li>SWA</li> <li>Label Smoothing (Improved resnet)</li> <li>TTA (Improved effcientnet / incepresnet / densenet )</li> <li>Oversampling (We tried to fit the data balance.)</li> </ul> <h2>Things that didn't help us.</h2> <ul> <li>Cutmix</li> <li>Mix Up</li> <li>GridMask</li> <li>Blockout</li> <li>etc.</li> </ul> <p>Actually, I did a lot of experiments, but I couldn't write them all down. (Too many resources were spent creating images with CycleGAN.)</p> <p>And I share a failed experiment. I wanted to solve the imbalance problem. I used additional data to generate multiple diseases data.</p> <p>[PlantPathology Apple Dataset] <a href="https://www.kaggle.com/piantic/plantpathology-apple-dataset">https://www.kaggle.com/piantic/plantpathology-apple-dataset</a></p> <p>Finally, I made 224, 512, 1024 sized generated images. The 512 and 1024 images didn't seem to be usable even with my bad eyes. I picked out some useful ones, and there were only a few hundred images. And When I used it for training, but the score was rather low, so I didn't use it.</p> <p>I share some generated images.</p> <p>Anyway, It was the first Kaggle competition working hard on. I learned a lot and it was fun. Thanks! :)</p> <p>p.s. 1. "Focus on your CV." Don't trust your LB score too much.</p> <p>p.s. 2. Perhaps our team members will post clean code and will update it soon. 
[efficientnetb7-tpu-tta] <a href="https://www.kaggle.com/aziz69/efficientnetb7-tpu-tta">https://www.kaggle.com/aziz69/efficientnetb7-tpu-tta</a></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3492127%2F8363a5bb94628521301011aa18f389e3%2Fmd_gen_1024_21.jpg?generation=1590540098980549&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3492127%2Fb0ccc2d7410b36c1726cbc23f3c69795%2Fmd_gen_512_164.jpg?generation=1590540070826273&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3492127%2F386104d87325450519a506bbb5379b59%2Fmd_gen_512_166.jpg?generation=1590540057196169&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3492127%2F37a64707be15ebc7a8077786b307f6e5%2Fmd_gen_512_5.jpg?generation=1590540038769512&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3492127%2F317c4ff5d05d784ce5a1b31acbfaaeea%2Fmd_gen_1024_32.jpg?generation=1590540311065011&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3492127%2F330c70a8fdbedb1835dc412c19a76469%2Fmd_gen_512_137.jpg?generation=1590595682856445&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3492127%2F6de8e5b3934c954435e5f78ae4ff0bfa%2Fupload.png?generation=1590595777706473&amp;alt=media" alt=""></p>
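<p>As a footnote to the oversampling mentioned in the list above, here is a small sketch of balancing the classes by resampling with replacement; it assumes the one-hot targets have been collapsed into a single label column, and the per-class target count is a placeholder.</p>
<pre><code>import pandas as pd

def oversample(df, label_col="label", target_per_class=None, seed=42):
    """Resample each class (with replacement) up to the size of the largest class."""
    counts = df[label_col].value_counts()
    n = target_per_class or counts.max()
    parts = [df[df[label_col] == c].sample(n, replace=True, random_state=seed)
             for c in counts.index]
    # Shuffle the concatenated result so batches are not grouped by class.
    return pd.concat(parts).sample(frac=1.0, random_state=seed).reset_index(drop=True)
</code></pre>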
University of Liverpool - Ion Switching
5th Place Solution with Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Everyone, Thanks to Kaggle team, organizers, Kaggle community for their contributions and my brilliant teammates <a href="https://www.kaggle.com/titericz">Giba</a>, <a href="https://www.kaggle.com/aerdem4">Ahmet</a>, <a href="https://www.kaggle.com/skgone123">Skgone </a> and <a href="https://www.kaggle.com/lihuajing">Alex </a>for their efforts. It was real fun to work with them.</p> <p>I shared a post about our journey in this competition with some technical details previously; <a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/153688">https://www.kaggle.com/c/liverpool-ion-switching/discussion/153688</a></p> <p>This time, I will briefly summarize our final pipeline and share the code for each step.</p> <h3><strong>1- Clean Dataset Creation</strong></h3> <p>This is the first and most important step in our progress. Before <a href="https://www.kaggle.com/titericz">Giba </a> removed drift and other kind of noises by himself, we couldn't have reliable CV with Chris' data. other teams may set up a reliable CV on Chris' dataset but it didn't work for us after some point. (thanks to <a href="https://www.kaggle.com/cdeotte">Chris</a> for his generosity in this competition). It's a bit hard to prevent overfitting on a synthetic and noisy dataset, especially when you are using multiple OOFs for stacking. <a href="https://www.kaggle.com/titericz">Giba</a> shared his perfect work with all explanations, please see his notebook for more details; <a href="https://www.kaggle.com/titericz/remove-trends-giba-explained">https://www.kaggle.com/titericz/remove-trends-giba-explained</a></p> <h3><strong>2- Data Augmentation</strong></h3> <p>We found it useful to augment a new 10-channel batch by summing 5th and 8th batches because 10-channel batches are simply the sum of two 5-channel batches. To be more specific, it's sum of 2 different (5 HMM Bs + pink noise + white noise).</p> <p>``` train['group'] = np.arange(train.shape[0])//500_000 aug_df = train[train["group"] == 5].copy() aug_df["group"] = 10</p> <p>for col in ["signal", "open_channels"]: aug_df[col] += train[train["group"] == 8][col].values</p> <p>train = train.append(aug_df, sort=False).reset_index(drop=True)</p> <p>``` <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F612360%2F7381c147dcb571e8290ba8b63d456a8e%2Fpublic_aug.png?generation=1590590911570595&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F612360%2F92a489aa6d2c2e6fdb7bdf7b57256896%2Fprivate_aug.png?generation=1590591833151679&amp;alt=media" alt=""></p> <h3><strong>3- Validation Scheme and OOF Creation</strong></h3> <p>We used the same validation for all models including the ones for OOF creation, which is Group-5Fold over 4k sequences. This validation works fine with Giba's clean dataset. 
Before we came to trust this validation scheme, we were using 2-fold CV to test different things based on these groups, divided equally in terms of batch types; ``` groups = [[0, 2, 3, 4, 5], [1, 6, 7, 8, 9]]</p> <p>``` We tried 3 LGBs, 1 RFC, 1 MLP, 1 BiGRU, 1 KNN, 1 SVM for stacking. We used the same validation scheme and applied the same data augmentation above to create OOFs. Only 3 models' OOFs were added to the wavenet (click on model names to reach their notebooks); - <a href="https://www.kaggle.com/meminozturk/into-the-wild-rfc-classification">Random Forest Classifier,</a> similar to <a href="https://www.kaggle.com/sggpls">Sergey's</a> public kernel (thanks to Sergey for his contributions). We applied it on our own dataset and calculated lead/lag features grouping by 10-second batches to avoid leakage from other batches for both train and test. </p> <ul> <li><p><a href="https://www.kaggle.com/meminozturk/into-the-wild-lgb-regression">Lightgbm Regression,</a> simply the Lightgbm Regression version of the RFC model with the same lead/lag features. <a href="https://www.kaggle.com/lihuajing">Alex</a> had created this model before we teamed up. I tried hyperparameter optimization and increased its CV by 0.01; however, the wavenet worked better with the initial parameters;</p></li> <li><p><a href="https://www.kaggle.com/meminozturk/into-the-wild-mlp-regression">MLP Regression,</a> it's the MLP part of <a href="https://www.kaggle.com/martxelo">Marcelo's </a> kernel with some adjustments;</p></li> </ul> <h3><strong>4- Wavenet Model</strong></h3> <strong>(Private Score: 0.94559 - Public Score: 0.94673)</strong> <p>Our main model is a wavenet, like most teams', and a better version of <a href="https://www.kaggle.com/nxrprime">Trigram's </a> great kernel with some more tricks; -Own dataset and OOFs -No other signal-related features -Simpler but effective Wavenet (tuned by <a href="https://www.kaggle.com/aerdem4">Ahmet</a>) <code>
inp = Input(shape = (shape_))
x = wavenet_block(inp, 16, 3, 8)
x = wavenet_block(x, 32, 3, 12)
x = wavenet_block(x, 64, 3, 4)
x = wavenet_block(x, 128, 3, 1)
out = Dense(11, activation = 'softmax', name = 'out')(x)
model = models.Model(inputs = inp, outputs = out)
opt = Adam(lr = LR)
model.compile(loss = losses.CategoricalCrossentropy(), optimizer = opt, metrics = ['accuracy'])
</code> -Simpler LR scheduler -Used best weights based on F1 Macro for prediction -Test time flip augmentation added -Train time flip augmentation used as in the original kernel -Sliding Window prediction (implemented by <a href="https://www.kaggle.com/aerdem4">Ahmet</a>) <code>
def sliding_predict(model, x):
    pred = np.zeros((x.shape[0], 100_000, 11))
    for begin in range(0, 100_000, SLIDE):
        end = begin + GROUP_BATCH_SIZE
        pred[:, begin:end, :] += model.predict(x[:, begin:end, :])
    return pred
</code></p> <p>It scored 0.94559, which is better than our current private score, without any blending, <a href="https://www.kaggle.com/meminozturk/into-the-wild-wavenet/">which is available here</a></p> <h3><strong>5-XGB Classifier for Submission</strong></h3> <strong>(Private Score: 0.94555 - Public Score: 0.94686)</strong> <p>Ahmet implemented an XGB Classifier to tune predictions from Wavenet. 
It's simply using the signal, the wavenet predictions and the noise as features; ```
train['noise'] = train['signal'].values - train['wavenet_prediction'].values
test['noise'] = test['signal'].values - test['wavenet_prediction'].values
``` This step gave us a nice boost on the public LB, but the results are almost the same on the private leaderboard without XGB. On the other hand, our best private submission was the result of this XGB model with a score of 0.94568 on the private leaderboard, <a href="https://www.kaggle.com/meminozturk/into-the-wild-xgb-submission">which is available here</a></p> <p>I hope you find what we've done useful; it is all open for you to play with.</p> <p>Hope to see you in the upcoming competitions, Emin</p>
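<p>Appendix: a minimal sketch of the XGB refinement stage described in section 5. It reuses the <code>signal</code>, <code>wavenet_prediction</code> and <code>noise</code> columns from the snippet above; the hyperparameters here are illustrative placeholders, not our exact settings.</p>
```
# train / test are the dataframes holding the signal, the wavenet class prediction
# ('wavenet_prediction') and, for train, the true 'open_channels' target.
import xgboost as xgb

features = ["signal", "wavenet_prediction", "noise"]

train["noise"] = train["signal"].values - train["wavenet_prediction"].values
test["noise"] = test["signal"].values - test["wavenet_prediction"].values

clf = xgb.XGBClassifier(
    n_estimators=300,        # placeholder values
    max_depth=3,
    learning_rate=0.1,
    tree_method="hist",
)
clf.fit(train[features], train["open_channels"])
test["open_channels"] = clf.predict(test[features])
```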
Tweet Sentiment Extraction
12th place- predict joint probabilities for start & end indices
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First I want to thank <a href="/abhishek">@abhishek</a> for his great <a href="https://www.kaggle.com/abhishek/roberta-inference-5-folds">kernel </a>. I learnt a lot from this work.</p> <p>List of things that gave significant boost: 1. to use roberta fine-tuned on squad2 dataset (+ <strong>~0.003</strong> in comparison with vanilla roberta) 2. to select not just the most likely starting and final index, but smarter approach (+ <strong>~0.003</strong>) 3. of course ensemble works, in my case it was an ensemble of 2 roberta-based models (+ <strong>~0.004</strong>) 4. I also did post processing, something close to approach described <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159499">here</a> (+ <strong>~0.008</strong>) </p> <p>I am not going to describe in details all things that I tried because most of them are already covered well in other discussions. I just want to tell about the 2nd point. </p> <h2>Motivation</h2> <p>Let's imagine that our model returns vector of probabilities for starting index and vector of probabilities for final index. In the picture below vertical green lines represent probability for starting index and red ones for final index. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F440336%2Fe3e839fa749e2915e501fd4c703768fe%2Fsituation1-2.png?generation=1592506019218143&amp;alt=media" alt=""></p> <p>In this case we have only 3 possible answers. It is easy to see that likelihoods for 1, 2 and 3 substrings are respectively 0.49, 0.49 and 0.02. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F440336%2F9ee5c2c9c3ced00ec533d860a3bb04a7%2Fsituation1-3.png?generation=1592506579363108&amp;alt=media" alt=""></p> <p>And if we select the most likely start and end it will mean that we prefer substring 3. We can calculate expectation of Jaccard score for all 3 cases:</p> <p>substring 1: 0.49 * 1 + 0.49 * 0 + 0.02 * 0.1 = 0.492 substring 2: 0.49 * 0 + 0.49 * 1 + 0.02 * 0.1 = 0.492 substring 3: 0.49 * 0.1 + 0.49 * 0.1 + 0.02 * 1 = 0.118</p> <p>Obviously the 3rd substring is the worst answer and it is better to select 1 or 2 substring. So, if we can find likelihood for each combination of start index and end index then we can maximize expectation of Jaccard score. The problem is that we cannot always find this likelihood for each pair of starting and final indices. 
For example, consider the following case:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F440336%2F4ca0cfae6756861f7ac4f9bd7fe37472%2Fsituation2.png?generation=1592507712450034&amp;alt=media" alt=""></p> <p>There are 4 possible answers here and we cannot estimate likelihoods correctly for them.</p> <h2>Proposed solution</h2> <p>I modified the top level of my model so that it returns pairwise logits instead of separate logits for the starting and final indices.</p> <p>```
class TweetModel(transformers.BertPreTrainedModel):
    def __init__(self, conf):
        super(TweetModel, self).__init__(conf)
        self.base_bert = BASE_MODEL.from_pretrained(MODEL_PATH, config=conf)
        self.drop_out = nn.Dropout(0.1)
        self.linear_start = nn.Linear(HIDDEN_DIM * 2, INTERNAL_EMBEDDING)
        self.linear_end = nn.Linear(HIDDEN_DIM * 2, INTERNAL_EMBEDDING)
        torch.nn.init.normal_(self.linear_start.weight, std=0.02)
        torch.nn.init.normal_(self.linear_end.weight, std=0.02)

    def forward(self, ids, mask, token_type_ids):
        _, _, out = self.base_bert(
            ids,
            attention_mask=mask,
            token_type_ids=token_type_ids,
        )
        out = torch.cat((out[-1], out[-2]), dim=-1)
        embeddings_start = self.linear_start(self.drop_out(out))
        embeddings_end = self.linear_end(self.drop_out(out))
        logits = torch.einsum("ijk,idk-&gt;ijd", (embeddings_start, embeddings_end))
        return logits
```</p> <p>Now the output of the model can be visualized as a matrix. Each pixel is responsible for one possible substring.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F440336%2F25517635b115af2cb1327bbe583d4a9b%2Foutput.png?generation=1592508993879675&amp;alt=media" alt=""></p> <p>Profit! Now we do not need to estimate the likelihoods; the model does it for us, and we can simply optimize the mathematical expectation of the Jaccard score.</p>
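<p>To make the last step concrete, here is a small sketch of how the pairwise probability matrix can be turned into a final span by maximizing the expected Jaccard score. This is an illustration of the idea rather than my actual inference code: <code>probs</code> is the softmax over the pairwise logits for one tweet, and a token-level Jaccard is used as a stand-in for the word-level metric. The brute-force double loop is fine for a toy example; in practice you would restrict it to the top-scoring cells.</p>
```
import numpy as np

def span_jaccard(a_start, a_end, b_start, b_end):
    # Jaccard between two inclusive token index ranges
    inter = max(0, min(a_end, b_end) - max(a_start, b_start) + 1)
    union = (a_end - a_start + 1) + (b_end - b_start + 1) - inter
    return inter / union

def best_span(probs):
    # probs[i, j] = probability that the answer starts at token i and ends at token j
    n = probs.shape[0]
    candidates = [(i, j) for i in range(n) for j in range(i, n)]
    best, best_exp = None, -1.0
    for s, e in candidates:
        expected = sum(probs[i, j] * span_jaccard(s, e, i, j) for i, j in candidates)
        if expected > best_exp:
            best, best_exp = (s, e), expected
    return best

# toy example mirroring the earlier 0.49 / 0.49 / 0.02 situation:
# two short spans with probability 0.49 each, the long span covering both with 0.02
probs = np.zeros((5, 5))
probs[0, 1], probs[3, 4], probs[0, 4] = 0.49, 0.49, 0.02
print(best_span(probs))   # (0, 1): one of the short spans, not the unlikely long one
```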
M5 Forecasting - Accuracy
A few thoughts - 24th Place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Like a few others I was one of those that benefited from the leaderboard shakeup. For reference I didn’t use any multipliers. My solution I think was relatively simple, being a combination of LightGBM and a simple LSTM. The biggest benefit I found was in getting a local WRMSSE evaluation working, which allowed me to do a better job of tuning and was also where feature selection paid off most, especially given the large number of features I’d engineered throughout the competition. </p> <p>I think one of the challenges with this competition was the evaluation metric. While I did work with a custom objective function during training, a lot of the different approaches I tried often didn’t have the outcomes I expected when evaluating. </p> <p>In the end I found my cross-validation to be fairly consistent, and it was close to the score I got on the private LB. Incidentally, my model would still perform poorly on the public leaderboard against some of the scores on there.</p> <p>It would also be interesting to understand whether the hosts picked this particular period for scoring on purpose or whether it was just arbitrary. Were there some nuances in this period that other forecasting methods have struggled with, and is that why it was selected?</p>
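<p>For anyone setting up a similar local evaluation, the per-series building block of WRMSSE looks roughly like the sketch below. It is a simplification: the full metric also weights every series and aggregation level by its recent dollar sales, and the scaling term is only computed from the first non-zero sale onward, both of which are omitted here.</p>
```
import numpy as np

def rmsse(y_true, y_pred, y_train):
    # Root Mean Squared Scaled Error for a single series;
    # y_train is the in-sample history used for the naive-forecast scaling term
    y_true, y_pred, y_train = map(np.asarray, (y_true, y_pred, y_train))
    scale = np.mean(np.diff(y_train.astype(float)) ** 2)
    return np.sqrt(np.mean((y_true - y_pred) ** 2) / scale)

# toy usage
history = np.array([0, 1, 0, 2, 1, 1, 3, 0])
print(rmsse(y_true=[1, 2, 0], y_pred=[1.2, 1.5, 0.3], y_train=history))
```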
Jigsaw Multilingual Toxic Comment Classification
34th Place Solution - Layerwise and Sampling
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I want to thank the organizers for this amazing competition and Kaggle for providing v3 TPU where people without hardwares like myself were able to compete without much disadvantage. I also want to congratulate the winners. This was my first Kaggle competition and I learned a lot about SOTA NLP models/techniques through the process. It was tough at first, but it was exciting to see my ideas work and move up the leaderboard. </p> <h2><strong>Summary</strong></h2> <ol> <li>ensemble of XLM-R models</li> <li>layer-wise learning rate/weight decay </li> <li>pseudo-labelling</li> <li>sampling strategy </li> </ol> <h2><strong>Ensemble of XLM-R models</strong></h2> <p>For all of my models, I trained on the training data for 2 epochs using [CLS] token without any extra layers. I tried different heads on top but I saw a drop in the score for all of them. I believe this is because the last-layer of a pretrained deep NN is too specific for the previous task (MLM objective in our case). Adding more layers on top just made the training more unstable. </p> <p>I further finetuned on the validation set for 2 epochs using 5-fold then averaged the oof predictions. This 5-fold CV AUC score was a good indicator of the public LB score. I ensembled only XLM-R models because other multilingual models (like XLM-17, m-BERT) were not very useful. I also trained each model 3 times and averaged the predictions for stability. </p> <p>Mainly, I trained 1. XLM-R-large english only model (~0.9356 on public LB) 2. XLM-R-large translated only model - pretrained more with MLM using test data for 4 epochs (~0.9471 on public LB) 3. XLM-R-large translated only model with last 50 (~0.9449 on public LB)</p> <h2><strong>Layer-Wise Learning Rate/Weight Decay</strong></h2> <p>I tried two different layer-wise strategies: layer-wise learning rate and layer-wise weight decay. I saw that layer-wise learning rate was effective in finetuning BERT in this article <a href="https://arxiv.org/abs/1905.05583">https://arxiv.org/abs/1905.05583</a> and found it to be very useful for this competition as well. From this result, I also tried out a variant of this layerwise training strategy where I regularized the model more in the lower layers. I found that this worked much better than the layer-wise learning rate (about ~0.008 boost in public LB). It works just like the layer-wise learning rate strategy but instead of multiplying the learning rate by the layerwise coefficient alpha^(total_num_layers-layer_number) (where layer_number = 0 for the first layer), you multiply the weight decay rate by the coefficient alpha^layer_number. Thus, you are regularizing the lower layers more than the upper layers. This made sense because a wide exploration of the parameter space was needed for the upper layers in order to adapt to the new objective while the lower layers didnt need to change much. I used 0.01 for the weight decay and 0.99 for the layer-wise coefficient (alpha). Layer-wise learning rate strategy could actually be better but it was very sensitive to hyperparameter choices, so with limited TPU time I worked with layer-wise weight decay. 
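<p>A minimal sketch of this layer-wise weight decay scheme is below. It is an illustration only (written with PyTorch's AdamW and a HuggingFace XLM-R model rather than my actual TPU training code); the point is simply that each parameter's weight decay is <code>base_wd * alpha ** layer_number</code>, with layer 0 at the bottom of the network.</p>
```
import re
from torch.optim import AdamW
from transformers import XLMRobertaForSequenceClassification

model = XLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-large")
BASE_WD, ALPHA, LR = 0.01, 0.99, 1e-5

def layer_number(param_name, num_layers=24):
    # 0 for the embeddings, 1..num_layers for encoder blocks, num_layers + 1 for the head
    if "embeddings" in param_name:
        return 0
    match = re.search(r"encoder\.layer\.(\d+)\.", param_name)
    return int(match.group(1)) + 1 if match else num_layers + 1

# lower layers get the largest decay, upper layers and the head the smallest
param_groups = [
    {"params": [p], "weight_decay": BASE_WD * ALPHA ** layer_number(n)}
    for n, p in model.named_parameters() if p.requires_grad
]
optimizer = AdamW(param_groups, lr=LR)
```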
</p> <h2><strong>Pseudo-labelling</strong></h2> <p>I used my best model's predictions on the test set as the pseudo-labels and included in the training set. Even using pseudo-labels of a relatively worse model (scoring ~0.935) was effective. It was interesting to observe the instability of the XLM-R-large model where switching only the pseudo-labels had resulted in diverging of the model. I had to lower the learning rate from 1e-5 to 6e-6 in order to converge.</p> <h2><strong>Sampling-strategy</strong></h2> <p>I downsampled training data to 1:1 for all models. For the XLM-R translated only models, I mixed yandex and google translation and saw a little boost in local CV and public LB. I also made sure that when sampling for non-toxic there were no duplicates across different languages for non-toxic.</p> <h2><strong>Other Ideas that Didn't Work</strong></h2> <ol> <li>text pre-processing</li> <li>label-smoothing</li> <li>extending the test dataset by translating back from english</li> <li>making multiple predictions - predicting at val finetune epoch 1 + val finetune epoch 2 and combining</li> <li>meta-learning </li> <li>embeddings - I tried using bilingual Attract-Repel embeddings</li> </ol> <p>Lastly, I learned that strictly planning your ideas by ranking them by their importance was really crucial especially when working with only 30 hours of TPU time. I found myself spending majority of my TPU time trying to tweak the working model to make it better rather than trying more of other novel ideas: striking a good balance between Exploitation and Exploration is the key! It was quite sad to see one of my ideas (making expert models using monolingual transformer models) in the middle of my untried idea list in the solution for the 1st place solution, but I have learned my lesson :)</p> <p>I want to thank the authors of these wonderful kernels for their work. I have learned a lot from them. <a href="https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta">https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta</a> <a href="https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large">https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large</a> <a href="https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta">https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta</a></p>
Tweet Sentiment Extraction
2nd place solution overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we'd like to thank the Kaggle team for holding this interesting competition. And, I want to thank my teammates( <a href="/yuooka">@yuooka</a>, <a href="/futureboykid">@futureboykid</a>) for their hard work.</p> <p>This post is an overview of our solution. Our solution consists of four major parts(preprocessing, base-model training, reranking-model training, postprocessing).</p> <h1>Preprocessing</h1> <p>We simply apply the postprocessing method to preprocessing refer to <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/154415#864905">this post</a>. Using this method, we can train models with less noise.</p> <p>ex) selected_text "e fun" -&gt; "fun"</p> <h1>Base-model training</h1> <p>Our training method is fundamentally the same as <a href="https://www.kaggle.com/abhishek/roberta-inference-5-folds">this great kernel</a>. </p> <h2>Model Architecture</h2> <p>We have tried so many model architectures... However, we finally got to use these two RoBERTa pretrained on SQuAD2.</p> <ol> <li><p>Using the 11th(base) or 23th(large) hidden layer as output. (<a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/148417#833085">This post</a>)</p></li> <li><p>Using a trainable vector and apply softmax over it &amp; multi dropout. (<a href="https://github.com/oleg-yaroshevskiy/quest_qa_labeling/blob/master/step5_model3_roberta_code/model.py">Google Quest 1st</a>) (We will refer to it hereafter as MDO.)</p></li> </ol> <h2>Loss</h2> <p>We tried a lot, but finally, choose simple CrossEntropyLoss. ``` class CROSS_ENTROPY: def <strong>init</strong>(self): self.CELoss = nn.CrossEntropyLoss(reduction='none')</p> <pre><code>def __call__(self, pred, target): s_pre, e_pre = pred.split(1, dim=-1) s_tar, e_tar = target.split(1, dim=-1) s_loss = self.CELoss(s_pre.squeeze(-1), s_tar.squeeze(-1)) e_loss = self.CELoss(e_pre.squeeze(-1), e_tar.squeeze(-1)) loss = (s_loss + e_loss) / 2 return loss </code></pre> <p>```</p> <h2>Training Strategy</h2> <p>We suffered from learning instability. There are two main ideas to get a stable result.</p> <h3>SentimentSampler</h3> <p>Most of the public kernel using stratified-KFold by sentiment. But, we decided that wasn't enough. Seeing <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/138520">this post</a>, there is a huge difference among the sentiment.</p> <p>So, we adopt SentimentSampler to equalize the imbalance within the batch. 
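<p>A rough sketch of such a sampler is shown below (an illustration with made-up data, not our exact implementation: it uses PyTorch's <code>WeightedRandomSampler</code> so that every batch contains the three sentiments in roughly equal proportion).</p>
```
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# toy sentiment labels: 0 = neutral, 1 = positive, 2 = negative (imbalanced on purpose)
sentiments = torch.tensor([0] * 600 + [1] * 300 + [2] * 100)
features = torch.randn(len(sentiments), 8)

# draw each sample with probability inversely proportional to its class frequency
class_counts = torch.bincount(sentiments).float()
sample_weights = 1.0 / class_counts[sentiments]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(sentiments), replacement=True)

loader = DataLoader(TensorDataset(features, sentiments), batch_size=32, sampler=sampler)
x, y = next(iter(loader))
print(torch.bincount(y, minlength=3))   # the three sentiments now appear in similar numbers
```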
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1497263%2F4a7afe89246c305a5d8e1f648ead65ef%2F2020-06-17%2010.32.33.png?generation=1592357573339723&amp;alt=media" alt=""></p> <h3>SWA</h3> <p>We found the instability of the validation score while training (~±0.001 varies with just 1 iteration!)</p> <p>So, we adopt <a href="https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/">SWA</a> to stable the result.</p> <p>This makes it possible to get stable results with 10~50 iteration monitoring, so that we could get results not overfitted to validation set (additionally validation time efficient.)</p> <h2>Ensemble</h2> <p>Using two different seeds (seed averaging)</p> <p>RoBERTa Base 11th layer + RoBERTa Large 23th layer + RoBERTa Base MDO + RoBERTa Large MDO</p> <p>4 models * 2 seeds = Total 8 models </p> <h1>Reranking-model training (public +0.002, private +0.003~5)</h1> <h2>Approach</h2> <p>We believe this part is the unique and the most different from other teams. This approach is based on the following idea. "Creating multi candidates and choose best one."</p> <p>Step1 Calculating top n score based on the start &amp; end value(apply softmax) from base-model.</p> <p>Step2 Creating candidates based on the Step1 score. Candidates include selected_text &amp; Jaccard_score &amp; Step1_score.</p> <p>Step3 Training RoBERTa Base model. (Target is candidate's jaccard_score)</p> <p>Step4 Sorting candidates by Step3 predicted value + Step1 score * 0.5. And choose the best one as the final answer.</p> <h2>SequenceBucketing</h2> <p>We have built over 50 models until above (base-models &amp; reranking models). In order to finish inference within the limited time, we have decided to chose SequenceBucketing even some models had not been trained with it.</p> <p>In this case, each batch contains the same text and little different candidate. </p> <p>Therefore, inference time speed up x2 and surprisingly got a better result than not using. We need to find out why...</p> <h1>Postprocessing (public +0.01, private +0.012)</h1> <p>Just like any other team found magic, we focused on the extra space. 
``` def pp(filtered_output, real_tweet): filtered_output = ' '.join(filtered_output.split()) if len(real_tweet.split()) &lt; 2: filtered_output = real_tweet else: if len(filtered_output.split()) == 1: if filtered_output.endswith(".."): if real_tweet.startswith(" "): st = real_tweet.find(filtered_output) fl = real_tweet.find(" ") if fl != -1 and fl &lt; st: filtered_output = re.sub(r'(.)\1{2,}', '', filtered_output) else: filtered_output = re.sub(r'(.)\1{2,}', '.', filtered_output) else: st = real_tweet.find(filtered_output) fl = real_tweet.find(" ") if fl != -1 and fl &lt; st: filtered_output = re.sub(r'(.)\1{2,}', '.', filtered_output) else: filtered_output = re.sub(r'(.)\1{2,}', '..', filtered_output) return filtered_output if filtered_output.endswith('!!'): if real_tweet.startswith(" "): st = real_tweet.find(filtered_output) fl = real_tweet.find(" ") if fl != -1 and fl &lt; st: filtered_output = re.sub(r'(!)\1{2,}', '', filtered_output) else: filtered_output = re.sub(r'(!)\1{2,}', '!', filtered_output) else: st = real_tweet.find(filtered_output) fl = real_tweet.find(" ") if fl != -1 and fl &lt; st: filtered_output = re.sub(r'(!)\1{2,}', '!', filtered_output) else: filtered_output = re.sub(r'(!)\1{2,}', '!!', filtered_output) return filtered_output</p> <pre><code> if real_tweet.startswith(" "): filtered_output = filtered_output.strip() text_annotetor = ' '.join(real_tweet.split()) start = text_annotetor.find(filtered_output) end = start + len(filtered_output) start -= 0 end += 2 flag = real_tweet.find(" ") if flag &lt; start: filtered_output = real_tweet[start:end] if " " in real_tweet and not real_tweet.startswith(" "): filtered_output = filtered_output.strip() text_annotetor = re.sub(" {2,}", " ", real_tweet) start = text_annotetor.find(filtered_output) end = start + len(filtered_output) start -= 0 end += 2 flag = real_tweet.find(" ") if flag &lt; start: filtered_output = real_tweet[start:end] return filtered_output </code></pre> <p>```</p> <h3>This post is a brief version of our solution.</h3> <h3>Please looking forward to the detail explanation from my teammates :)</h3> <p>Reranking-model <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159315">https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159315</a> Postprocessing <a href="https://www.kaggle.com/futureboykid/2nd-place-post-processing">https://www.kaggle.com/futureboykid/2nd-place-post-processing</a></p>
Jigsaw Multilingual Toxic Comment Classification
3rd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all we are grateful to Jigsaw for organizing such an interesting competition, and flawless execution in terms of data quality, train / validation/ test split etc. Also very thankful to Kaggle for providing the free TPUs which made quick experimentation possible. This is the first time I used just Kaggle Notebook (kernels) for training / inference and was impressed with how well they work most of the time, with a slight exception of lacking full blown IDE functionality, love everything about the Kernels. </p> <p>Last, but not the least, I want to thank my amazing teammates - @brightertiger (Ujjwal), @soloway (Igor) and@drpatrickchan (Dr Patrick) - who made our LB position possible and were super fun to work with. Our best submission was made in the last 15 min of the competition, so this was a relentless team, which never stopped improving. We were constantly looking and re-evaluating new angles to improve our solution. Although, I am writing the solution description, but I speak on everyone’s behalf here. </p> <p>Following were the major parts of our best submission:</p> <p><strong>RoBERTa XLM pre-trained</strong>: Like we saw with all the publicly shared kernels, RoBERTa XLM was the workhorse, which just delivered great performance with minimum effort / training. Used a lot of things discussed in the forum like translated data, open subtitles data along with averaging across multiple folds. Typically about ~200K or so observations were enough to train a single model, so there was a lot of scope to do multi-fold averaging given the huge data available at hand. A big thanks to @shonenkov and @xhlulu for their excellent kernels. </p> <p><strong>Language Specific Pre-trained Bert Models</strong>: Used language specific pre-trained Bert models for all the 6 test languages. A given model with a given capacity is any day more powerful for a single language vs multiple languages. The only challenge was to merge the probabilities coming out of different models (which could have different distributions) into a single ranking which could be especially problematic for a ROC evaluation metric (Even if you got the rank ordering perfectly right within each language you can mess things up when combining across languages). Our teammate Igor could make it work magically. Big thanks to @shonenkov for his great <a href="https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta">kernel</a> that is used for training these Bert Models.</p> <p><strong>RoBERTa XLM MLM</strong>: Borrowed the concept of training the model on domain specific data using this excellent <a href="https://www.kaggle.com/riblidezso/finetune-xlm-roberta-on-jigsaw-test-data-with-mlm">kernel</a> by @riblidezso. While the performance from these models were similar to our existing XLM-Roberta models, they worked well in the blend. We added more data to the MLM step by using translations of training data, validation data and test dataset. This gave us a couple versions of the pre-trained model that we averaged in the blend. 
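<p>On the score-merging challenge mentioned above for the language-specific models: one simple way to make probabilities from different per-language models comparable under a ROC metric is to rank-normalize within each language before concatenating. The toy sketch below illustrates the idea only; it is not Igor's actual method.</p>
```
import pandas as pd

# toy test-set scores coming from three separate monolingual models with very different scales
preds = pd.DataFrame({
    "lang":  ["es", "es", "tr", "tr", "ru", "ru"],
    "toxic": [0.90, 0.10, 0.55, 0.52, 0.0003, 0.0001],
})

# percentile ranks within each language keep the ordering inside a language
# while putting all languages on a common [0, 1] scale
preds["toxic"] = preds.groupby("lang")["toxic"].rank(pct=True)
print(preds)
```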
</p> <p><strong>Post Processing</strong>: We observed that around 5-10% of the comments looked like automated / template based messages. E.g - Look at Test ID 39482. What’s really happening here is - the username / page title / attachment name is quoted by the bot here, which may contain profanities. But most likely due to the way data labels would have been generated, these are likely marked as non-toxic - our models unfortunately get confused by this. We applied regular expressions, clustering and other heuristics to adjust scores coming directly out of the models for these comments. This gave us close to additional 0.001 on LB</p> <p>Other things which impacted results a tiny bit: <strong>TTA</strong>: Some of our models did test time augmentation - where we did a weighted average of the prediction over foreign and english language. <br> <strong>Label smoothing</strong>: Was used in most of our models including the mono lang models.</p> <p>A weighted blend of the components # 1-3 put together and post processing described in # 4 gave us our 0.9523 on Public and 0.9509 on Private LB. </p>
M5 Forecasting - Accuracy
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi guys, I would like to share my solution with you, </p> <p>but first I would like to thank Kaggle, the hosts, and all participants, especially @kneroma and @kailex for their kernel contributions. In addition, thanks to the whole discussion about the magic multiplier led by the @kyakovlev kernels.</p> <p><strong>1) The features:</strong></p> <p>A) From the calendar: I created features from the remaining days for the event. Each event would be a column, and the values in this column refer to the "strength" of the proximity of the event limited to 30. Still in this matter, a new variable that would be the sum of all these "forces" was created, if this column had a high value, it would indicate that they would have one or more events nearby. Also used some variables to indicate seasonality. For example, indicating how near the day is to the end of the month. Week of the year, day of the week...</p> <p>B) Prices: I decided to remove the current price of the items in the model. Instead, I calculated the percentage difference between the price of the week and the price of previous weeks, thus creating some columns related to that price difference. I figured that a fall or rise in price that week was going to make more sense for a more generalized prediction.</p> <p>C) Historical series Here, thanks to the @kkiller and @kxx kernels with the variables ['lag_7', 'lag_28', 'rmean_7_7', 'rmean_28_7', 'rmean_7_28', 'rmean_28_28']. </p> <p>Also inserted some other variables like the item id and the store id as cat variables.</p> <p><strong>2) The model</strong></p> <p>I used an LGBM model with early stopping, and with Poisson objective, for each department, resulting in a total of 7 models. It made it easier for me not to burst the memory and be able to use more days from the past. I also used the predicted value to predict the next (eg lag_7)</p> <p><strong>3) Post-processing magic multipliers.</strong></p> <p>A magic multiplier of ~1.03 really improved the result, but I was not confident on that.</p> <p>So I went to analyze how the true values differed from the predicted values, for each store and department. And we have graphs like these:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2657722%2Fb7ab4696d212a626e80e415466d7f736%2F1.PNG?generation=1593826682424401&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2657722%2Ff6b446ac9535247f848a98874cf033ee%2F2.PNG?generation=1593826699867532&amp;alt=media" alt=""></p> <p>Some stores/departments showed the prediction "constantly below" (of course with a lot of noise) of the true value. And other stores/departments showed the prediction "constantly above" the true value. I then decided to create an average correction factor for each store/department, based on the last week (validation). 
And we have a matrix like this:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2657722%2F1f27c2fbb90bba19b7aa3b734a5b46c8%2F3.PNG?generation=1593826632224426&amp;alt=media" alt=""></p> <p>...</p> <p>That is, ~ 0.92 represents a specific factor for FOODS_1/CA_1, because the true value was, on average, 92% of the predicted value to the validation set.</p> <p>In this way, I imagined that the historical correction factors for the validation would behave similarly from the historical to the evaluation.</p> <p>In summary for this item, instead of using a magic multiplier, I used a magic multiplier for each store/department. Here, I was not afraid of overfitting the result. It could be much worse, I could overfit a lot more by creating a factor for each item, but it would be too much. So, it was a compromise between creating just one magic multiplier for the entire result and one magic multiplier for each item.</p> <p><strong>4) A simple diagram:</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2657722%2F232d7d80e729f5e374499e74470aa883%2FUntitled%20Diagram.png?generation=1593830988541757&amp;alt=media" alt=""></p> <p>The solution in the public was far from winning, but the private score remained close to the public one.</p> <p>Finally, I hope to be able to contribute more to the discussions of the next challenges. </p> <p>I am very grateful to everyone for the great learning I was able to absorb in this competition, regardless of the result.</p> <p>Thank you again</p>
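<p>As a rough illustration of the store/department correction factors described above, the computation boils down to something like this (a sketch with placeholder column names, assuming the validation-window predictions and actuals are available in long format):</p>
```
import pandas as pd

def dept_store_factors(val_df):
    # one multiplicative factor per (store, department), estimated on the validation window
    grp = val_df.groupby(["store_id", "dept_id"])
    return (grp["sales_true"].sum() / grp["sales_pred"].sum()).rename("factor").reset_index()

def apply_factors(test_df, factors):
    out = test_df.merge(factors, on=["store_id", "dept_id"], how="left")
    out["factor"] = out["factor"].fillna(1.0)
    out["sales_pred"] = out["sales_pred"] * out["factor"]
    return out.drop(columns="factor")
```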
Tweet Sentiment Extraction
bronze + magic = gold ~ 8th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to Kaggle for holding this competition and congratulations to all the winners!</p> <p>Well, I notice this competition right after it started but didn't really started working on it until the last month. But one month actually proves to be too long for me because I quickly ran out ideas in 10 days. And what's making it worse and frustrating is that the LB is seriously unstable. I am really on the verge of giving up until someone posted on the forum and said that there's a post-processing magic. And lucky for me, after grinding really hard for one day, I found out the magic and perfected it in 2 days. But then, for the last 2 weeks, I still kind of gave up and started playing pc&amp;video games instead. Because I projected that by the end of this competition, gold zone will be flooded with 0.73. And also, there will be severe shake-down and I just don't have any great and robust ideas. To really secure a gold spot you'll basically need to get to public 0.74 which I don't think is possible for me. Well, in the end, the result turned out to be not too bad for me. Get my first ever solo gold. If I try too hard and over-fit LB too much, who knows what will happen. :)</p> <p>Magic I'll use the following sample to illustrate the magic: 's awesome' = original answer = answer_ ' He<code>s awesome... Have you worked with him before? He</code>s a good friend.' = original text = text_ 'He<code>s awesome... Have you worked with him before? He</code>s a good friend.' = no extra blank = text <code>'awesome'</code> = true answer = answer The theory is very easy actually, the shift(noise) happens when in the original text, there are extra blanks before the answer's end. Suppose answer=text[s:e], then answer_ =text_ [s:e+2] Code is also very simple actually: <code> def post_processing(answer,text,text_): if len(text_)&amp;gt;0: answer=answer.strip().lower() text=text.lower() p=find_(text_," ") q=find_(text,answer) if text_[0]==" " or 0&amp;lt;=p&amp;lt;=q+len(answer)-2: answer=text_[q:q+len(answer)+2] return answer </code> This is at least 99% true, I think. There are some exceptions of course, because they're originally wrong.</p> <p>Models I basically just ensemble several models with minor difference in architecture and cleaning. Sorry to disappoint you. :) model 1: Similar to those high scoring public kernels. Only difference is that I remove all the noises and errors in the answer. Public LB: 0.716 model 2: Same as model 1 except that I remove the CNN head. Public LB: 0.714 model 3: Same as model 2 except that I complete the noise. For example, in the above sample, I'll change the answer to 'He`s awesome'. Public LB: 0.714 model 4: Do pre-processing by reversing the post-processing magic, then follow the same procedure as model 1: Public LB: 0.719 ensemble: model1+model2+model3+4.5*model4: Public LB: 0.723 + post-processing: Public LB: 0.731 Private LB: 0.729 But, if we take out everything that's related to magic, that means, remove post-preprocessing and model4, how much can I get? Well, I can tell you, model1+model2+model3: Public LB: 0.718. 
And don't forget about the 0.002 shake-down.(I consider this lucky!) So, seriously, bronze is really one optimistic estimation.</p> <p>I have to put the pre-processing code in this pic, because there are some special symbols in it that makes the whole comment bugged. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F516737%2F1f699a8a36a419617eee2f0dbd9cafdb%2F.png?generation=1592387324189595&amp;alt=media" alt=""></p>
Plant Pathology 2020 - FGVC7
3rd Place Score Model Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was my first competition outside of Titanic and was a fun intro to Kaggle competitions. </p> <p>My submission ended up being fairly simple; it was an average of two ensembles, the first with three averaged models the other with two.</p> <p>For all of the models I trained EfficientNet B7 models with Tensorflow on TPU. I used 768x768 resolution for all runs and slightly varying data augmentations. The augmentations I used were: rotation, shear, zoom, shift, flip left/right, flip up/down, random brightness. I also upsampled “multiple diseases” data in my training set by 5x. Thanks to <a href="https://www.kaggle.com/cdeotte">@ChrisDeotte</a> for some helpful data augmentation guides.</p> <p>Ensemble A: Scores: 3 models, all 0.977 public score, averaged public score: 0.979 Model: EfficientNet B7 Data Balancing: Multiple diseases by 5x Data Augmentation: rotation, shear, zoom, shift, flip left/right, flip up/down, random brightness</p> <p>Ensemble B: Scores: 2 models, both 0.979 public score, averaged public score 0.980 Model: EfficientNet B7 Data Balancing: Multiple diseases by 5x Data Augmentation: &gt;35 degree random rotations, &gt;64 pixel shift, same other augmentations as above</p> <p>Ensemble A + B: Average public score 0.981</p>
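<p>In code, the 5x upsampling of the rare class and the two-level averaging amount to roughly the following (a sketch: file names are placeholders, and the layout assumes the competition's train.csv and submission format).</p>
```
import pandas as pd

train = pd.read_csv("train.csv")   # image_id, healthy, multiple_diseases, rust, scab

# repeat the rare "multiple diseases" rows so they appear 5x in the training set
extra = train[train["multiple_diseases"] == 1]
train_balanced = pd.concat([train] + [extra] * 4, ignore_index=True)

# final prediction: the mean of two ensembles, each itself a mean of its member submissions
sub_a = pd.concat([pd.read_csv(f) for f in ["a1.csv", "a2.csv", "a3.csv"]]).groupby("image_id").mean()
sub_b = pd.concat([pd.read_csv(f) for f in ["b1.csv", "b2.csv"]]).groupby("image_id").mean()
((sub_a + sub_b) / 2).reset_index().to_csv("submission.csv", index=False)
```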
iWildCam 2020 - FGVC7
LB #3 Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iWildCam 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Kagglers, Sorry for the late post about our solution. My teammate and I summarize our solution in the following perspectives: <code> - Data - Preprocessing/Augmentation methods - Models - Prediction technique - Class imbalance - Fine-grained features </code></p> <ul> <li><p>Data</p> <ol><li>We used crops of animals for training provided by the official detector results. Both 64x64 crops and 224x224 crops were tried and 224x224 crops provided better score;</li> <li>Some of the original bounding boxes were highly skewed, we tried to extend the skewed bounding boxes to make it quasi-square, and resized them to 224x224;</li></ol></li> <li><p>Preprocessing methods</p> <ol><li>We tried multiple augmentation methods, ranging from traditional physical augmentation to optical techniques. The final augmentation methods employed by us were "horizontal-flip" &amp; " CLAHE/contrast limited adaptive histogram equalization".</li> <li>Augmentations were applied during training on the fly;</li></ol></li> <li><p>Models <code> | Model | Public LB | Private LB | |-------------------------------| | EfficientNet | 0.835 | 0.823 | | ResNet152 | 0.794 | 0.815 | | NTS with ResNet50 | 0.837 | 0.841 | | NTS with ResNext50 | 0.840 | 0.850 | |Final Ensemble Model = 6 NST models + EfficientNet | 0.843 | 0.848 | </code> Another trick we'd like to point out is using <code>GridSearch</code> to search for the optimal weights of each weak learner, using the local validation score as the benchmark.</p></li> <li><p>Prediction technique We find the test images are sorted in the following order: <code> Sequence ID --&amp;gt; Locations --&amp;gt; Time Stamp --&amp;gt; image dimension </code> Therefore, test crops were sorted into clips following the above logic. While prediction, a prediction by clip was adopted naturally. Specifically, for each clip, we average predicted probs of all images to calculate the final label; then the final label was shared by all the images under this clip.</p></li> <li><p>Class imbalance</p> <ol><li>We tried multiple ways to counter class imbalance issue. For example, we used class weights calculated via the effective number pointed out by the paper, "Class-Balanced Loss Based on Effective Number of Samples", which didn't work. We also tried BBN Network but failed to make it run. At last, we still used routine class weights calculated by <code>1 / # of images</code> of each class.</li></ol></li> <li><p>Fine-grained features</p> <ol><li>We used a SOTA network, NTS, which consists of a Navigator, a Teacher and a Scrutinizer. </li></ol></li> <li><p>Wrap up So, that's basically our team's solution to Rank 3 in this competition. We'd like to learn from the rest teams, especially #1 &amp; #2 team. If any team member from these two teams happens to see my post, please leave me a message so that I can drop you an email/Wechat message to connect. </p></li> </ul> <p>Thanks, Steven Yin &amp; Ryan Zheng</p>
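<p>As a small appendix, the prediction-by-clip step can be sketched like this (an illustration only: <code>probs</code> stands for the ensemble's softmax outputs and <code>clip_ids</code> for the sequence grouping described above).</p>
```
import numpy as np
import pandas as pd

def predict_by_clip(clip_ids, probs):
    # probs: (n_images, n_classes) softmax outputs; clip_ids: sequence id of each image.
    # Average the probabilities inside each clip and let every image inherit the clip's argmax.
    df = pd.DataFrame(probs)
    df["clip"] = clip_ids
    clip_mean = df.groupby("clip").transform("mean")   # row-aligned per-clip averages
    return clip_mean.values.argmax(axis=1)

# toy check: the two images of clip "a" end up with the same label
probs = np.array([[0.8, 0.2], [0.6, 0.4], [0.1, 0.9]])
print(predict_by_clip(["a", "a", "b"], probs))   # [0 0 1]
```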
Jigsaw Multilingual Toxic Comment Classification
59th Place Solution (And My First Silver Medal)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Overview</h1> <p>My two best submissions were from a stacking pipeline. I've trained several models on Kaggle and Colab, which I later stacked their predictions using various stacking techniques (not just the usual mean, median, minmax) to achieve higher LB scores. </p> <h2>The Two Best Submissions</h2> <ul> <li>Stacking &gt; 12 models (used mean of the predictions of 12 submission files, from 9 single models and 3 blends) <strong>(Private LB: 0.9469, Public LB: 0.9485)</strong></li> <li>Blend of stacked predictions from above that gave 0.9485 on the public LB with <code>ExtraTreesClassifier</code> predictions trained on the validation data with <a href="https://www.kaggle.com/jazivxt/howling-with-wolf-on-l-genpresse/notebook">https://www.kaggle.com/jazivxt/howling-with-wolf-on-l-genpresse/notebook</a> <strong>(Private LB: 0.9471, Public LB: 0.9485)</strong></li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1738095%2F78d1c6c510639908e3d10b481e1ec8ea%2Fbest_subs.png?generation=1592885422854659&amp;alt=media" alt=""></p> <h1>Models Used For Stacking</h1> <ul> <li>XLM-RoBERTa</li> <li>XLM-RoBERTa Large</li> <li>XLM-RoBERTa Large + MLM training</li> <li>ExtraTreesClassifier</li> </ul> <p>P.S. Might have missed out one or two, will update here should I find any missing.</p> <h1>How Models Were Trained</h1> <ul> <li>For the MLM model (third above), I trained it on Kaggle </li> <li>For the XLM-RoBERTa models, I trained them on Google Colab TPU</li> </ul> <h1>Notebooks With Models Referenced Above</h1> <p>A big thanks to all whom contributed to the following notebooks, you have taught me quite a lot! I'm still relatively new to NLP, and your sharing has made me more comfortable and interested in the domain! 
:)</p> <ul> <li><a href="https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large">https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large</a> </li> <li><a href="https://www.kaggle.com/jazivxt/howling-with-wolf-on-l-genpresse">https://www.kaggle.com/jazivxt/howling-with-wolf-on-l-genpresse</a></li> <li><a href="https://www.kaggle.com/hamditarek/ensemble">https://www.kaggle.com/hamditarek/ensemble</a></li> <li><a href="https://www.kaggle.com/sai11fkaneko/data-leak">https://www.kaggle.com/sai11fkaneko/data-leak</a></li> <li><a href="https://www.kaggle.com/shonenkov/tpu-inference-super-fast-xlmroberta">https://www.kaggle.com/shonenkov/tpu-inference-super-fast-xlmroberta</a></li> <li><a href="https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta">https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta</a></li> <li><a href="https://www.kaggle.com/aiaiooas/parcor-regularised-classification">https://www.kaggle.com/aiaiooas/parcor-regularised-classification</a></li> <li><a href="https://www.kaggle.com/shahules/fine-tune-xlm-kfold-cv-0-93-lb">https://www.kaggle.com/shahules/fine-tune-xlm-kfold-cv-0-93-lb</a></li> </ul> <h1>Notebooks Used For Submission</h1> <ul> <li><a href="https://www.kaggle.com/khoongweihao/ensemble-ii-the-dark-side-of-stacking">https://www.kaggle.com/khoongweihao/ensemble-ii-the-dark-side-of-stacking</a> <strong>(Mine)</strong></li> <li><a href="https://www.kaggle.com/jazivxt/howling-with-wolf-on-l-genpresse">https://www.kaggle.com/jazivxt/howling-with-wolf-on-l-genpresse</a> <strong>(Public)</strong></li> </ul> <h1>Ensemble Methods That Did Not Work</h1> <ul> <li>Bi-weighted correlations stacking (something that stemmed from my personal research, which worked on other regression and classification problems and datasets, just not this one sadly. It actually worked at the start, but in the last month of the competition I could not find a suitable configuration due to the lack of submissions. See my notebook above for details)</li> <li>Power-weighted stacking/blend (intention was to penalize the classification probabilities, idea came from the winning solution of the previous Jigsaw competition. Sadly this did not work out too)</li> <li>Distance-weighted stacking (main idea is to give higher weights to predictions with lower distances such as correlation/euclidean distance. This did not work too)</li> </ul>
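<p>For reference, the plain mean stacking behind the best submission is essentially just the following (a sketch with placeholder file names, assuming the usual <code>id</code>/<code>toxic</code> submission format).</p>
```
import pandas as pd

files = [f"submission_{i}.csv" for i in range(12)]          # placeholder names
subs = [pd.read_csv(f) for f in files]

blend = subs[0][["id"]].copy()
blend["toxic"] = sum(s["toxic"] for s in subs) / len(subs)  # simple mean of the 12 predictions
blend.to_csv("submission.csv", index=False)
```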
Jigsaw Multilingual Toxic Comment Classification
1st place solution overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We’d like to start off by thanking Kaggle/Jigsaw for a drama-free competition and also by congratulating the other medallists. </p> <h2>TL;DR</h2> <ol> <li>Ensemble, ensemble, ensemble</li> <li>Pseudo-labelling</li> <li>Bootstrap with multilingual models, refine with monolingual models</li> </ol> <p>I haven't competed on Kaggle since I co-won the 2018 Toxic Comments competition. The state-of-the-art for NLP classification at the time was non-contextual word embeddings (e.g., FastText) and I was curious if I’d make a good showing in a world of Transformers. I'm pleasantly surprised to have done so well with my team-mate @rafiko1. </p> <h2>Our public LB milestones</h2> <ul> <li>Baseline XLM-Roberta model (Public LB: 0.93XX - 31 March)</li> <li>Average ensemble of XLM-R models (0.942X - 3 April)</li> <li>Blending with monolingual Transformer models (0.9510 - 23 April)</li> <li>Team merger - weighted average of our individual best subs (0.9529 - 28 April)</li> <li>Blending with monolingual FastText classifiers (0.9544 - 5 May)</li> <li>Post-processing and misc. optimizations (0.9556 - 13 June)</li> </ul> <p>You’ll note we hit 1st place on the <strong>FINAL</strong> public LB by end April. Put another way, we were waiting for 2 months for the competition to end :)</p> <h2>CV strategy</h2> <p>We initially used a mix of k-fold CV and validation set as hold-out but as we refined our test predictions and used pseudo-labels + validation set for training, the validation metric became noisy to the point where we relied primarily on the public LB score. </p> <h2>Insights</h2> <h3>Ensembling to mitigate Transformer training variability</h3> <p>It’s been noted that the performance of Transformer models is impacted heavily by initialization and data order (<a href="https://arxiv.org/pdf/2002.06305.pdf">https://arxiv.org/pdf/2002.06305.pdf</a>, <a href="https://www.aclweb.org/anthology/2020.trac-1.9.pdf">https://www.aclweb.org/anthology/2020.trac-1.9.pdf</a>). To mitigate that, we emphasized the ensembling and bagging our models. This included temporal self-ensembling. Given the public test set, we went with an iterative blending approach, refining the test set predictions across submissions with a weighted average of the previous best submission and the current model’s predictions. We began with a simple average, and gradually increased the weight of the previous best submission. For the training data, we largely used sub-samples of the translations of the 2018 toxic comments for each model run. </p> <h3>Pseudo-labels (PL)</h3> <p>We observed a performance improvement when we used test-set predictions as training data - the intuition being that it helps models learn the test set distribution. Using all test-set predictions as soft-labels worked better than any other version of pseudo-labelling (e.g., hard labels, confidence thresholded PLs etc.). Towards the end of the competition, we discovered a minor but material boost in LB when we upsampled the PLs. 
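<p>In code, building the pseudo-labelled training set is essentially the following (a simplified sketch: file names, column names and the upsampling factor are placeholders rather than our exact pipeline).</p>
```
import pandas as pd

train = pd.read_csv("jigsaw_2018_translated.csv")   # comment_text, toxic (hard 0/1 labels)
test = pd.read_csv("test.csv")                      # id, content, lang
best_blend = pd.read_csv("best_submission.csv")     # id, toxic: the current best test predictions

# soft pseudo-labels: keep the blended probabilities as-is instead of thresholding them
pseudo = (test[["id", "content"]]
          .merge(best_blend, on="id")
          .rename(columns={"content": "comment_text"})[["comment_text", "toxic"]])

UPSAMPLE = 2                                        # placeholder; the exact factor was tuned
train_pl = pd.concat([train[["comment_text", "toxic"]]] + [pseudo] * UPSAMPLE,
                     ignore_index=True).sample(frac=1.0, random_state=0)
```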
</p> <h3>Multilingual XLM-Roberta models</h3> <p>As with most teams, we began with a vanilla XLM-R model, incorporating translations of the 2018 dataset in the 6 test-set languages as training data. We used a vanilla classification head on the CLS token of the last layer with the Adam optimizer and binary cross entropy loss function, and finetuned the entire model with a low learning rate. Given Transformer models have several hundred million trainable weights put to the relatively simple task of making a binary prediction, we didn’t believe the default architecture required much tweaking to get a good signal through the model. Consequently, we didn't spend too much time on hyper-parameter optimization, architectural tweaks, or preprocessing. </p> <h3>Foreign language monolingual Transformer models</h3> <p>@rafiko1 and I stumbled on this technique independently and prior to forming a team. We were both inspired by the MultiFiT paper (<a href="https://arxiv.org/pdf/1909.04761.pdf">https://arxiv.org/pdf/1909.04761.pdf</a>) - specifically: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1029053%2Fb40975c9453fba6f640836e4fe297746%2FCapture.PNG?generation=1592872625708960&amp;alt=media" alt=""></p> <p>We observed a dramatic performance boost when we used pretrained foreign language monolingual Transformer models from HuggingFace for the test-set languages(e.g., Camembert for french samples, Rubert for russian, BerTurk for turkish, BETO for spanish etc.). </p> <p>We finetuned models for each of the 6 languages - by combining translations of the 2018 Toxic Comments together with pseudo-labels for samples in that specific language (initially from the XLM-R multilingual models), training the corresponding monolingual model, predicting the same samples then blending it back with the “main branch” of all predictions. It was synergistic in that training model A -&gt; training model B -&gt; training model A etc. lead to continual performance improvements. </p> <p>For each model run, we’d reload weight initalizations from the pretrained models to prevent overfitting. In other words, the continuing improvements we saw were being driven by refinements in the pseudo-labels we were providing to the models as training data. </p> <p>For a given monolingual model, predicting only test-set samples in that language worked best. Translating test-set samples in other languages to the model's language and predicting them worsened performance. </p> <h3>Finetuning pre-trained foreign language monolingual FastText models</h3> <p>After we exhausted the HuggingFace monolingual model library, we trained monolingual FastText Bidirectional GRU models a la 2018 Toxic Comments, using pretrained embeddings for the test-set languages, to continue refining the test set predictions (albeit with a lower weight when combined with the main branch of predictions) and saw a small but meaningful performance boost (0.9536 to 0.9544). </p> <h3>Post-processing</h3> <p>Given the large number of submissions we were making, @rafiko1 came up with the novel idea to make use of the history of submissions to tweak our test set predictions. We tracked the delta of predictions for each sample for successful submissions, averaged them and nudged the predictions in the same direction. 
We saw a minor but material boost in performance (~0.0005)</p> <p>We were concerned about the risk of overfitting with this post-processing technique so for our final 2 submissions, selected one that incorporated this post-processing and one that didn't. It ended up working very well on the private LB. </p> <h2>Misc -</h2> <h3>Training setup</h3> <p>@rafiko1 and I kept separate code-bases. He trained on Kaggle TPU instances with Tensorflow code derived from the public kernels by @xhlulu and @shonenkov while I trained on my own hardware (dual RTX Titans) using from-scratch PyTorch code. I’d attribute part of our outsized merger ensemble boost (0.9510 -&gt; 0.9529) to this diversity in training. </p> <h3>What didn’t work</h3> <p>We went through a checklist of contemporary NLP classification techniques and most of them didn’t work probably due to the simple problem objective. Here is a selection of what didn’t work :- - Document-level embedders (e.g., LASER) - Further MLM pretraining of Transformer models using task data - Alternative ensembling mechanisms (rank-averaging, stochastic weight averaging) - Alternative loss functions (e.g., focal loss, histogram loss) - Alternative pooling mechanisms for the classification head (e.g., max-pool CNN across tokens, using multiple hidden layers etc.) - Non-FastText pretrained embeddings (e.g., Flair, glove, bpem) - Freeze-finetuning for the Transformer models - Regularization (e.g., multisample dropout, input mixup, manifold mixup, sentencepiece-dropout) - Backtranslation as data augmentation - English translations as train/test-time augmentation - Self-distilling to relabel the 2018 data - Adversarial training by perturbing the embeddings layer using FGM - Multi-task learning - Temperature scaling on pseudo-labels - Semi-supervised learning using the test data <br> - Composing two models into an encoder-decoder model - Use of translation focused pretrained models (e.g., mBart)</p> <p>We'll release our code later this week.</p>
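<p>Until then, the iterative blending described above is, at its core, just a running weighted average of the previous best submission with each new model's predictions (the weight below is a made-up example; in practice it was increased gradually over the competition).</p>
```
import pandas as pd

prev_best = pd.read_csv("previous_best_submission.csv")   # id, toxic (placeholder file names)
current = pd.read_csv("new_model_predictions.csv")        # id, toxic

W_PREV = 0.7
blend = prev_best[["id"]].copy()
blend["toxic"] = W_PREV * prev_best["toxic"] + (1 - W_PREV) * current["toxic"]
blend.to_csv("submission.csv", index=False)
```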
University of Liverpool - Ion Switching
18th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'm a lucky person, and I won 18th place and a silver medal. My solution is just a slight modification of the public wavenet notebook.</p> <p><strong>feature engineering</strong> I used Data Without Drift from <a href="/cdeotte">@cdeotte</a> for the data. 42 features were used, including 3 lag features as in <a href="/siavrez">@siavrez</a>'s wavenet notebook, signal**2 and signal**4 features based on <a href="/ragnar123">@ragnar123</a>'s notebook, gradient, low-pass and high-pass filters based on <a href="/martxelo">@martxelo</a>'s notebook, and the addition of <a href="/sggpls">@sggpls</a>' ION-SHIFTED-RFC-PROBA.</p> <p><strong>network architecture</strong> The 16 convolutional base filters were changed to 24 (1.5x), and 3 LSTMs were added.</p> <p><strong>augmentation strategy and others</strong> I normalized the features, added np.random.normal(loc=0, scale=0.01) to all the features, and flipped the data three times to make the training data 8x. GROUP BATCH SIZE (signals per sample) was determined to be best at 6250.</p> <p><strong>training</strong> The training was conducted twice: the first time optimizing the loss as normal; the second time, I changed the L2 weight regularization factor from 0 to 0.0001 and the LR to 1e-5, and optimized for accuracy.</p> <p><strong>prediction</strong> Predictions were also augmented. I added np.random.normal(loc=0, scale=0.001) to all the features and flipped the data 4 times, i.e., predicting the test data 10 times. In addition, I changed the GROUP BATCH SIZE from 6250 to 500000 over 13 runs, averaged the predictions, and then used argmax. The final oofCV was 0.942421.</p> <p>Again, I'm just a lucky person. Excellent discussion and notebook sharing helped me win a medal. Without these, my score would have been 0. Thank you to those who shared insights.</p> <p><strong>Reference.</strong> They are my heroes. datasets ION-SHIFTED-RFC-PROBA(<a href="https://www.kaggle.com/sggpls/ion-shifted-rfc-proba">https://www.kaggle.com/sggpls/ion-shifted-rfc-proba</a>) Data Without Drift(<a href="https://www.kaggle.com/cdeotte/data-without-drift">https://www.kaggle.com/cdeotte/data-without-drift</a>) notebooks WaveNet-Keras(<a href="https://www.kaggle.com/siavrez/wavenet-keras">https://www.kaggle.com/siavrez/wavenet-keras</a>) Wavenet with 1 more feature(<a href="https://www.kaggle.com/ragnar123/wavenet-with-1-more-feature">https://www.kaggle.com/ragnar123/wavenet-with-1-more-feature</a>) FE and ensemble MLP and LGBM(<a href="https://www.kaggle.com/martxelo/fe-and-ensemble-mlp-and-lgbm">https://www.kaggle.com/martxelo/fe-and-ensemble-mlp-and-lgbm</a>) Wavenet with SHIFTED-RFC Proba and CBR(<a href="https://www.kaggle.com/nxrprime/wavenet-with-shifted-rfc-proba-and-cbr?scriptVersionId=33967419">https://www.kaggle.com/nxrprime/wavenet-with-shifted-rfc-proba-and-cbr?scriptVersionId=33967419</a>) There are many others... Thanks for reading this far.
I'm sorry that my English is bad.</p> <p>appendix:</p> <pre><code>df = lag_with_pct_change(df, [1, 2, 3])

df['signal_2'] = df['signal'] ** 2
df['signal_4'] = df['signal'] ** 4

# calc gradient
g = df['signal']
n_grads = 2
for i in range(n_grads):
    g = np.gradient(g)
    df['grad_' + str(i+1)] = g

# lpf
x = df['signal']
n_filts = 5
wns = np.logspace(-2, -0.3, n_filts)
for wn in wns:
    b, a = signal.butter(1, Wn=wn, btype='low')
    zi = signal.lfilter_zi(b, a)
    df['lowpass_lf_' + str('%.4f' %wn)] = signal.lfilter(b, a, x, zi=zi*x[0])[0]
    df['lowpass_ff_' + str('%.4f' %wn)] = signal.filtfilt(b, a, x)

# hpf
x = df['signal']
n_filts = 5
wns = np.logspace(-2, -0.1, n_filts)
for wn in wns:
    b, a = signal.butter(1, Wn=wn, btype='high')
    zi = signal.lfilter_zi(b, a)
    df['highpass_lf_' + str('%.4f' %wn)] = signal.lfilter(b, a, x, zi=zi*x[0])[0]
    df['highpass_ff_' + str('%.4f' %wn)] = signal.filtfilt(b, a, x)
</code></pre>
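<p>For illustration, a minimal sketch of the noise-plus-flip augmentation described above; the array shapes and the way the flips are combined are assumptions, not the notebook's exact code:</p> <pre><code>import numpy as np

def augment(X, y, scale=0.01):
    # X: (n_groups, group_len, n_features), y: (n_groups, group_len)
    # Add small Gaussian noise to every feature and append a time-flipped copy.
    X_noisy = X + np.random.normal(loc=0, scale=scale, size=X.shape)
    X_aug = np.concatenate([X_noisy, X_noisy[:, ::-1, :]], axis=0)
    y_aug = np.concatenate([y, y[:, ::-1]], axis=0)
    return X_aug, y_aug
</code></pre>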
Jigsaw Multilingual Toxic Comment Classification
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all winners!</p> <p>I would like to thank Kaggle and Jigsaw team for hosting this interesting competition. I am also grateful to this great community. Without those great public kernels I do not know how and where to start this project. I feel honored to end up this competition with a solo gold medal. It is a fantastic learning experience. </p> <p>As the private leaderboard is finalized, I decide to summarize what I have learned during this process. My solution has a lot in common with other top solutions. The two helpful techniques/tricks are <strong>ensemble of diverse models</strong> and <strong>post-processing</strong>. At certain point I also thought of using monolingual models as discussed in the <a href="https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification/discussion/160862">1st place solution</a>. But I failed because I wrongly used XLM-R.</p> <h1>Ensemble of diverse models</h1> <h2>Diverse models</h2> <p>For my personal understanding, langugae models are diverse if the train sets, tokenization methods or model architectures are different. In fact, I did not develop any methods from scratch. Instead, I started from public kernels, and improve them by trial and error. Below is a list of kernels I took advantage of for ensembling in my final submission. </p> <ul> <li><p><a href="https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta">XLM-R TPU on PyTorch using Pseudolabeled (PL) opensubtitles</a> by <a href="https://www.kaggle.com/shonenkov">Alex Shonenkov</a></p></li> <li><p><a href="https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta">The first functioning XLM-R model</a> by <a href="https://www.kaggle.com/xhlulu">xhlulu</a></p></li> <li><p><a href="https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large">XLM-R with fine-tuned MLM</a> by <a href="https://www.kaggle.com/riblidezso">Dezso Ribli</a></p></li> <li><p><a href="https://www.kaggle.com/miklgr500/jigsaw-tpu-bert-two-stage-training">Two-stage BERT using translated dataset</a> by <a href="https://www.kaggle.com/miklgr500">Michael Kazachok</a></p></li> <li><p><a href="https://www.kaggle.com/jhoward/nb-svm-strong-linear-baseline">NB-SVM</a> by <a href="https://www.kaggle.com/jhoward">Jeremy Howard</a></p></li> </ul> <p>The final submission is an ensemble of 11 models, including two NB-SVM models trained with <a href="https://www.kaggle.com/miklgr500/jigsaw-train-multilingual-coments-google-api">Google API translated multilingual train set</a>, two BERT models trained with English train, validation and test sets, one XLM-R model trained with downsampled unintended bias train set and original validation set, one XLM-R model trained with PL test set, one XLM-R model trained using multilingual train set, and one XLM-R model with multilingual train set and PL test set, two XLM-R models with fined-tuned MLM, and three XLM-R models trained using PL opensubtitles. The ensemble weights are found by intuition and probing the public leaderboard score. 
The weights are</p> <pre><code>weights = {}
weights['bert_1'] = weights['bert_2'] = 0.4/2
weights['nbsvm_1'] = weights['nbsvm_2'] = 0.035/2
weights['xlm_r_en'] = 0.7; weights['xlm_r_pl_en'] = 0.3
weights['xlm_r_multilingual'] = 0.35; weights['xlm_r_pl_multilingual'] = 0.2
weights['xlm_r_mlm_en'] = weights['xlm_r_mlm_en_2'] = 0.25/2
weights['xlm_r_mlm_multilingual'] = weights['xlm_r_mlm_multilingual_2'] = 0.25/2
weights['xlm_r_opensubtitle'] = weights['xlm_r_opensubtitle_2'] = weights['xlm_r_opensubtitle_3'] = 0.27/3
</code></pre> <h1>Post-processing</h1> <p>As highlighted in many other top solutions, post-processing contributes a lot to the AUC score, since what matters is not the accuracy of a single prediction, but the rank/distribution of all predictions. As we need to predict six-language comments in the test set, it is likely that the model may over-predict toxicity for one language while under-predicting toxicity for another language. Therefore, scaling the predictions for each individual language might be helpful. I first tried to put a 1.3 scaling coefficient on <strong>fr, es and ru</strong>. It gave me a boost of around 0.002 Public LB score for my best ensemble model. I also tried this with two single model predictions and also saw a similar trend. It gave me confidence that this should work generally, at least for the models I used. So for as long as two weeks, I experimented on the scaling coefficients; in the end I found the following coefficients that work best for me.</p> <pre><code>scaling = {
    'fr': 1.175,
    'ru': 0.975,
    'es': 1.475,
    'it': 0.88,
    'pt': 0.8,
    'tr': 1.0
}
</code></pre> <p>Another language-wise post-processing trick is to spread the predictions apart a bit; it leads to a tiny LB score increase.</p> <pre><code>def post_process(d):
    # d: list of numbers representing a distribution
    d = np.array(d)
    q_low, q_high = np.quantile(d, q=0.55), np.quantile(d, q=0.95)
    mask_nontoxic, mask_toxic = d &lt;= q_low, d &gt;= q_high
    d[mask_nontoxic] = 0.9*d[mask_nontoxic]
    d[mask_toxic] = 1.1*d[mask_toxic]
    return d

langs = test.lang.unique()
for language in langs:
    mask = test.lang == language
    sub.loc[mask, 'toxic'] = post_process(sub.loc[mask, 'toxic'])
</code></pre> <h1>Things that worked</h1> <ul> <li>Ensemble</li> <li>Post-processing: language-wise scaling</li> <li>Add another dense layer before the final dense layer</li> <li>Train on more balanced data</li> <li>Train on PL test set</li> </ul> <h1>Things that did not work</h1> <ul> <li><p>Ensemble including logistic regression models trained and predicted on either English or multilingual data</p></li> <li><p>Monolingual models using XLM-R</p></li> <li><p>Add more dense layers</p></li> <li><p>Learning rate scheduler</p></li> <li><p>Preprocess the data</p></li> <li><p>other post-processing techniques</p></li> </ul>
M5 Forecasting - Uncertainty
11th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all the winners! Below is a summary of our solution. Hope it will be useful to you in some way.</p> <h2>Overview of Solution</h2> <p>Our solution combines three types of models:</p> <ol> <li>Autoregressive RNNs that model the probabilistic distribution of the sales directly.</li> <li>Autoregressive RNNs to obtain the “inherent” uncertainty of the forecasted sales.</li> <li>Tree-based models to obtain bottom-up point estimates of the sales.</li> </ol> <p>The reason for having three types of models stems from the hierarchical nature of the time series -- certain models are more capable of modeling the uncertainty for a given level in the hierarchy.</p> <h2>Our Solution (with a little more detail)</h2> <p>As mentioned, we are dealing with hierarchical time series. On page 3 of the M5 competitors' guide, we are given a breakdown of the aggregation levels. Based on the behavior of the data, it seemed natural to us to divide the hierarchies into two types:</p> <ol> <li>The aggregated sales of individual products (series levels 10, 11, and 12) exhibit more “count-like” behavior. They are also generally more intermittent. For these series, modeling the uncertainty as a poisson or negative binomial seemed to make sense.</li> <li>The higher-level sales aggregations (series levels 1 through 9) exhibit a more continuous behavior. Modeling the sales distribution using some continuous probability distribution (e.g. Gaussian) seemed natural in this case.</li> </ol> <h3>- 1st Hierarchy Type</h3> <p>For the first hierarchy type, we directly modeled the distribution of the sales using an autoregressive RNN, specifically <a href="https://arxiv.org/pdf/1704.04110.pdf" target="_blank">DeepAR</a>. In a nutshell, DeepAR attempts to model the conditional distribution of the future of a time series given its past. During training, the model learns to parametrize a likelihood function of a chosen probability distribution (in our case, we chose a negative binomial distribution given the count-like nature of the data). Then, during inference, we can draw samples from the selected probability distribution parametrized by the model and compute quantiles from the drawn samples. </p> <p>We developed our DeepAR model largely from this great <a href="https://www.kaggle.com/steverab/m5-forecast-compet-uncert-gluonts-template" target="_blank">kernel</a>. </p> <h3>- 2nd Hierarchy Type</h3> <p>For the second hierarchy type, we initially tried to model the distribution directly using DeepAR with a Gaussian distribution. While the results were decent, we found that there was a high sensitivity towards the random seed used and performance occasionally varied substantially across folds. It turned out that we were unable to get the DeepAR model to capture the center (median) of the distribution consistently.</p> <p>We opted instead for a bottom-up approach, in which we predict the individual product sales and sum the respective predictions to get the predicted median of the aggregated sales. We found that LightGBM models from the M5 accuracy competition worked well for this. 
In particular, we adapted the LightGBM model from <a href="https://www.kaggle.com/kneroma/m5-first-public-notebook-under-0-50" target="_blank">this kernel</a>. Key changes made were:</p> <ul> <li>Removed “magic” multipliers</li> <li>Tuned hyperparameters to mitigate overfitting</li> <li>Feature engineering pertaining to price volatility</li> </ul> <p>Clearly, the bottom-up approach only gives us part of what we want (the median). To obtain the rest of the required quantiles, we turned back to our DeepAR (with Gaussian distribution) approach, where we used the ratio of the quantile prediction to the median prediction as a multiplier. For example, for a given series and prediction day, if the ratio of (say) the 25th percentile to the median in the DeepAR model is 0.95, then to get the 25th percentile of the corresponding LightGBM prediction, we simply multiply that prediction with 0.95. And we do this individually for all of the aggregated series across each time step in the horizon.</p> <p>We felt that this approach made sense in that the DeepAR model captures the “inherent uncertainty” in the sales, which should “carry over” to the predictions from the LightGBM model. Admittedly, this justification is hand-wavy and lacks any statistical rigor. Empirically, however, this approach resulted in stable improvements across folds and was certainly preferable to manually tuning “quantile multipliers”.</p> <p>Our approach can be summarized as follows:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1197762%2F3f97598e043e1828b9ba27994767d5ee%2Fm5_sol.png?generation=1593598292136374&amp;alt=media" alt=""></p> <h2>Validation Strategy</h2> <p>We realized early on that it was unwise to rely on a single 28-day validation horizon given the volatility of the competition metric, coupled with the ease of overfitting. Intuitively, we also felt that the recency of the folds matters. Bearing these findings/thoughts in mind, we ended up using the three most recent 28-day periods as our validation “folds” for our model development.</p> <h2>Some Stats and Findings</h2> <p>We report the weighted scaled pinball loss (WSPL) on our validation folds for some of our experiments:</p> <table> <thead> <tr> <th>Experiment</th> <th>D1858 - D1885</th> <th>D1886 - D1913</th> <th>D1914 - D1941</th> </tr> </thead> <tbody> <tr> <td>LightGBM + “point to uncertainty kernel”</td> <td>0.1967</td> <td>0.1739</td> <td>0.1672</td> </tr> <tr> <td>+ DeepAR (Gaussian) ratio multipliers for 2nd hierarchy type series</td> <td>0.1935</td> <td>0.1656</td> <td>0.1567</td> </tr> <tr> <td>+ DeepAR (Negative Binomial) for 1st hierarchy type series</td> <td>0.1744</td> <td>0.1512</td> <td>0.1414</td> </tr> <tr> <td>+ Ensemble LightGBM with modified “darker magic” kernel from accuracy competition</td> <td>0.1683</td> <td>0.1499</td> <td>0.1408</td> </tr> </tbody> </table> <p>The model in the last row was our final submission, yielding a private LB score of 0.16721.</p> <p>Some noteworthy findings:</p> <ul> <li>The magnitude of the WSPL can vary quite noticeably across different folds. However, the directionality of the loss is generally quite consistent across all folds (i.e., improvement in one fold is typically accompanied by the same relative improvements in all other folds).</li> <li>We found that training separate DeepAR networks for each individual product aggregation (series levels 10, 11, and 12) resulted in consistent small improvements in WSPL across all validation folds (~0.002). 
This could be due to DeepAR being a global model, and so by separating the series by their hierarchies, the separate models can better capture the nuances within each hierarchy. </li> <li>Training the same models using different seeds and ensembling them did not give any noticeable improvement. Neither the naive way of simply averaging the quantile predictions of each model, nor drawing samples simultaneously across each model and then computing the quantiles from the pool of samples worked.</li> <li>In the final days of the competition, we ensembled our LightGBM bottom-up model with the one from <a href="https://www.kaggle.com/kyakovlev/m5-three-shades-of-dark-darker-magic" target="_blank">this kernel</a>, with slight modifications to reduce overfitting. As seen in the table above, this led to slight improvements in score.</li> </ul> <h2>Stuff that did not quite work/didn’t get to try</h2> <p>We attempted to model the distribution non-parametrically using a network called <a href="https://arxiv.org/pdf/1906.04397.pdf" target="_blank">DeepTCN</a>. While performance was decent, we couldn’t find a way to ensemble the predictions with our existing solution.</p> <p>Modeling the different hierarchies separately discards potential information gain from combining higher and lower aggregation levels. Unfortunately, we did not have the time to explore forecast reconciliation methods.</p> <h2>Closing Remarks</h2> <p>Special thanks to <a href="https://www.kaggle.com/steverab" target="_blank">@steverab</a>, <a href="https://www.kaggle.com/kneroma" target="_blank">@kneroma</a>, and <a href="https://www.kaggle.com/kyakovlev" target="_blank">@kyakovlev</a> for the aforementioned kernels that we leveraged. And of course, we would like to express our gratitude to Kaggle and the competition host for the opportunity to participate in this competition.</p>
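<p>To make the ratio-multiplier step concrete, here is a minimal sketch for a single aggregated series; the quantile grid is the competition's, but the array layout and function name are illustrative:</p> <pre><code>import numpy as np

QUANTILES = [0.005, 0.025, 0.165, 0.25, 0.5, 0.75, 0.835, 0.975, 0.995]

def ratio_quantiles(deepar_quantiles, lgbm_median):
    # deepar_quantiles: (n_days, 9) quantile forecasts from the DeepAR (Gaussian) model
    # lgbm_median: (n_days,) bottom-up LightGBM point forecast for the same series
    deepar_median = deepar_quantiles[:, QUANTILES.index(0.5)]
    ratios = deepar_quantiles / deepar_median[:, None]   # quantile / median, per day
    return lgbm_median[:, None] * ratios                  # (n_days, 9) final quantiles
</code></pre>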
M5 Forecasting - Uncertainty
48th place solution - seq2seq
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, congratulations to all the winners and thanks to everyone for sharing their insights during and after the competition. This was my first forecasting competition/project so I learned a lot of new things.</p> <p>My best solution, which got me the 48th rank, was a simple ensemble of two architectures - a Dilated LSTM Sequence2Sequence model and a Sequence2Sequence LSTM model with attention (applied from all encoder outputs to the hidden state of each step in the decoder).</p> <p>I have shared my code for both the accuracy and uncertainty competitions on github (<a href="https://github.com/arshjot/Kaggle-M5-Forecasting">https://github.com/arshjot/Kaggle-M5-Forecasting</a>) and will share some more details below.</p> <h3><strong>Features</strong></h3> <ul> <li><p>Calendar features (42):</p> <ul><li>Cyclic encoding for month, day and weekday (as explained here - <a href="https://www.kaggle.com/avanwyk/encoding-cyclical-features-for-deep-learning">https://www.kaggle.com/avanwyk/encoding-cyclical-features-for-deep-learning</a>) - 6</li> <li>Relative year (2016 - year) - 1 </li> <li>Embedding of size 16 for event_name. Both the event fields were passed through the same embedding layer and later concatenated - 32</li> <li>Snap fields with values as 0 or 1 - 3</li></ul></li> <li><p>Series IDs (63): Embeddings for all id fields - item_id (50), dept_id (4), cat_id (2), store_id (5), and state_id (2). In the case of aggregated series (Levels 1-11), the grouped fields are kept as separate categories ('All'). For example, the Level 2 series of state CA is categorized as:</p></li> </ul> <table> <thead> <tr> <th>item_id</th> <th>dept_id</th> <th>cat_id</th> <th>store_id</th> <th>state_id</th> </tr> </thead> <tbody> <tr> <td>All</td> <td>All</td> <td>All</td> <td>All</td> <td>CA</td> </tr> </tbody> </table> <ul> <li>Previous day sales</li> <li>Sell Price (for levels 1-11, the mean sell price of their constituent series was used)</li> </ul> <h3><strong>Normalization</strong></h3> <p>Only price and sales features were normalized. This was done by dividing the sales and price of each series by their means in the input window.</p> <h3><strong>Training</strong></h3> <p>A sliding window of length 28*13 days on the last 2 years of data (with a step size of 28 days) was used for training.
This meant for each window, 28*13 timesteps were given as input to the encoder and the succeeding 28 days served as the prediction horizon for the decoder.</p> <p>SPL loss was used for training with RMSProp optimizer, reducing learning rate, and early stopping.</p> <p>I also ended up using some lag/rolling features and addition of noise during training though these modifications did not result in a significant improvement in the validation score </p> <h3><strong>Validation</strong></h3> <p>A simple 3-fold conventional time-series validation split (last 3 periods of 28 days) was used, with the predictions from each fold averaged for the final submission.</p> <hr> <p>Interestingly, my models performed better in the uncertainty stream than the accuracy one both in terms of validation and leaderboard scores.</p> <p>I also wanted to try representation learning and graph learning algos (to account for hierarchy) but could not devote enough time to make them work.</p> <p>I hope my post/code turns out to be useful for someone. Please let me know if there are any questions :) </p>
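<p>As a small illustration of the cyclic encoding used for the calendar features (see the linked notebook for the original explanation), a minimal sketch:</p> <pre><code>import numpy as np
import pandas as pd

def cyclic_encode(values, period):
    # Map a periodic integer feature onto the unit circle so that, e.g.,
    # December (12) and January (1) end up close to each other.
    angle = 2 * np.pi * values / period
    return np.sin(angle), np.cos(angle)

calendar = pd.DataFrame({"month": [1, 6, 12], "wday": [1, 4, 7]})
calendar["month_sin"], calendar["month_cos"] = cyclic_encode(calendar["month"], 12)
calendar["wday_sin"], calendar["wday_cos"] = cyclic_encode(calendar["wday"], 7)
</code></pre>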
M5 Forecasting - Uncertainty
Should have been 18th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Hello!</h1> <p>This was a wonderful competition, so thanks to the Kaggle team and the University of Nicosia for hosting it.</p> <p>To explain the topic of this discussion, I had one submission scoring 0.16959, which was not chosen because it had a lower public score than one of my other submissions (wish they had the 2-submission system). Here is the screenshot: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2014978%2F816716719e72a00d1e204827a797419e%2Flb.png?generation=1593576683579204&amp;alt=media" alt=""></p> <p>(More about version 8 and version 9 below)</p> <h3>With that said, let's dive into the solution:</h3> <h2><a href="https://www.kaggle.com/akashsuper2000/concatenate-model-with-grus">Notebook</a></h2> <p>Based on @ulrich07 's notebook <a href="https://www.kaggle.com/ulrich07/do-not-write-off-rnn-cnn">here</a>.</p> <h2>Architecture</h2> <p>Single model with GRU layers and one final Dense layer. Highly simplified representation:</p> <pre><code>Concatenate()
GRU(300)
Dropout(0.3)
Concatenate()
GRU(300)
Dropout(0.3)
Concatenate()
GRU(300)
Dropout(0.3)
Dense(9, activation='linear')
</code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2014978%2F6e894bbcdf51c6e07f65dd886818c4bf%2Fm5model.png?generation=1593578997970288&amp;alt=media" alt=""></p> <p>Epochs: 200 (version 8). It took me 6 hours to train on a GPU. Training for more epochs would further diminish the loss, but the risk of overfitting is high; hence, the model trained for 400 epochs performed poorly on the private leaderboard (see version 9 of my notebook).</p> <p>Link to my notebook: <a href="https://www.kaggle.com/akashsuper2000/concatenate-model-with-grus">https://www.kaggle.com/akashsuper2000/concatenate-model-with-grus</a></p>
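<p>For readers who want a starting point, here is a minimal Keras sketch of the stacked-GRU idea above; the input shapes, the two feature streams and the loss are placeholders (the actual notebook concatenates several embedded calendar/id inputs and trains with a pinball loss over the 9 quantiles):</p> <pre><code>import tensorflow as tf
from tensorflow.keras import layers

seq_len, n_num, n_cal = 28, 1, 10                  # illustrative shapes only
num_in = layers.Input(shape=(seq_len, n_num), name="numeric")
cal_in = layers.Input(shape=(seq_len, n_cal), name="calendar")

x = num_in
for _ in range(3):
    x = layers.Concatenate()([x, cal_in])           # re-inject the exogenous features
    x = layers.GRU(300, return_sequences=True)(x)
    x = layers.Dropout(0.3)(x)

out = layers.Dense(9, activation="linear")(x)       # one output per quantile, per timestep
model = tf.keras.Model([num_in, cal_in], out)
model.compile(optimizer="adam", loss="mse")          # replace with a pinball/SPL loss
</code></pre>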
University of Liverpool - Ion Switching
6th place solution, Factorial HMM with a shared factor
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our model is one of factorial hidden Markov models (FHMM). See this paper for details of FHMM. <a href="http://www.ee.columbia.edu/~sfchang/course/svia-F03/papers/factorial-HMM-97.pdf">http://www.ee.columbia.edu/~sfchang/course/svia-F03/papers/factorial-HMM-97.pdf</a></p> <p>The FHMM for this completion is illustrated in the figure bellow. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4168019%2F29feccf2b47f6aff20f69b5ef180a212%2FFHMM.png?generation=1590495412135845&amp;alt=media" alt=""></p> <p>\( M\) and \( N\) represent the number of channels and time steps respectively. \( Z_{mn} \in \{0,1,2,3\} \) represents the hidden state of channel \( m\) at time \( n\). The hidden state of each channel follows the 1st order Markov process independently with the shared transition probabilities; \( P(Z_{m,n+1}=k | Z_{mn}=j) =A_{jk} \). \( C_n\) is the number of open channels at time \( n \) and it is calculated as \( C_n = \sum_{m=1}^M F(Z_{mn}) \), where \( F(0)=F(1)=0\) (close states) and \(F(2)=F(3)=1\) (open states). \( S_n\) represents signal at time \( n\) and it is given by Gaussian emission; \( P(S_n| C_n) = \frac{1}{\sqrt{2\pi}\sigma} \exp \left[ -\frac{1}{2\sigma^2} (S_n-aC_n-b)^2\right] \). </p> <p>In our model above the adjustable parameters are \(A,a,b~{\rm and}~\sigma \). In the training phase \( A \) is optimized, and the other parameters are optimized in the prediction phase. </p> <p>(1) Training phase; obtain A <br> Using the time-series of open_channels in the train data, the transition probability A is optimized to maximize the likelihood function \( P(C)=\sum_{Z} P(C,Z) \), where \( C=(C_1,C_2,...,C_N) \) and \( Z =(Z_{11},Z_{21},...,Z_{M1},Z_{12},Z_{22},...,Z_{M2},...,Z_{1N},Z_{2N},...,Z_{MN}) \). The usual EM algorithm is used for this maximization. The computational cost is \( O({}_{M+3} C_{3} N) \) for one step of the EM algorithm. This cost is largely reduced from the cost \( O(4^{M+1} N) \) of the original FHMM in the above paper. <br> Below notebook shows the implementation for this phase. <br> <a href="https://www.kaggle.com/shimizumasaki/6th-place-solution-training-phase">https://www.kaggle.com/shimizumasaki/6th-place-solution-training-phase</a></p> <p>(2) Prediction phase; obtain \( a,b,\) and \( \sigma \) <br> In the test data predictions of the number of "open_channels" at each time point, \( C_{n} \), are calculated as bellow. Using the time-series of "signal" \( S \) in the test data and the transition probability obtained in the phase (1), the remaining parameters, \( a,b,\sigma\), are optimized to maximize the likelihood function \( P(S)=\sum_{C,Z} P(S,C,Z) \). This optimization is basically same as in (1). The posterior probability of \(C_n\) under given signal, \( P(C_n|S) \), can be obtained at the end of the EM algorithm for this optimization. Then, predictions of "open_channels" are decided from MAP estimation, \(Pr_n={\rm argmax}_{C_n} P(C_n|S) \). <br> NOTE: We fixed \( a \simeq 1.234 \) since the EM algorithm fails to obtain it in the sections with the lowest open activities. 
</p> <p><a href="https://www.kaggle.com/shimizumasaki/6th-place-solution-fhmm-with-a-shared-factor">https://www.kaggle.com/shimizumasaki/6th-place-solution-fhmm-with-a-shared-factor</a></p> <p>(3) noise removal <br> Using the predicted time-series of "open_channels", \(Pr \), the time-series \(S - aPr -b \) is decomposed into Fourier series by a short-time Fourier transform. Then, several peaks of the power spectrum are removed, and the inverse short-time Fourier transform is used to obtain a clean signal. Phases (2) and (3) are repeated several times to clean the signal gradually. See the above notebook for details of our noise cleaning. However, we don't think our cleaning process is a good way to do it. </p>
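<p>A rough Python sketch of phase (3); the peak-selection rule and the parameters here are simplifications for illustration, not the notebook's exact procedure:</p> <pre><code>import numpy as np
from scipy import signal

def clean_signal(S, Pr, a=1.234, b=0.0, fs=10_000, n_peaks=5):
    # Subtract the predicted open-channel component, suppress the strongest
    # spectral peaks of the residual via STFT, then add the component back.
    residual = S - a * Pr - b
    f, t, Z = signal.stft(residual, fs=fs, nperseg=4096)
    power = np.abs(Z).mean(axis=1)
    peaks = np.argsort(power)[-n_peaks:]      # strongest frequency bins
    Z[peaks, :] = 0.0
    _, cleaned_residual = signal.istft(Z, fs=fs, nperseg=4096)
    return a * Pr + b + cleaned_residual[: len(S)]
</code></pre>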
University of Liverpool - Ion Switching
Gilles Vandewiele's A "better" but useless solution: Private 0.97266
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Gilles Vandewiele posted "<a href="https://www.kaggle.com/group16/private-0-9688-a-better-but-useless-solution">Private 0.9688: A "better" but useless solution</a>" (please upvote his kernel), revealing how to use the leak to score better than the 1st place private LB score.</p> <p>Since I had a better score during the contest by using <a href="/ragnar123">@ragnar123</a>'s <a href="https://www.kaggle.com/ragnar123/clean-kalman">clean_kalman</a> dataset than <a href="/cdeotte">@cdeotte</a>'s cleaned data, I decided to use that in Gilles's kernel. Surprisingly, "<a href="https://www.kaggle.com/sheriytm/private-0-97266-a-better-but-useless-solution">Private 0.97266: A "better" but useless solution</a>" scored 0.97266 on the private LB with a very low CV and LB score, just like the original. Both scores beat the 2nd place private LB score, and it is totally useless.</p> <p>This to me reveals the problem with the data better than anything. Share your comments below.</p>
iMaterialist (Fashion) 2020 at FGVC7
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist (Fashion) 2020 at FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Model</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F853978%2F378129d2e9afb90bbd1f320858b9b73b%2Fimat2020_2.jpg?generation=1591441185889910&amp;alt=media" alt=""></p> <p>I used a single Mask R-CNN model with the <a href="https://arxiv.org/abs/1912.05027" target="_blank">SpineNet-143</a> + FPN backbone and added an extra head to classify attributes. The attributes head was trained with the focal loss. For the augmentations, I used one of the <a href="https://arxiv.org/abs/1906.11172" target="_blank">AutoAugment</a> policies (see details below). No TTAs during inference.</p> <p>All the changes were made on top of the <a href="https://github.com/tensorflow/tpu/tree/master/models/official/detection" target="_blank">TPU Object Detection and Segmentation Framework</a>. You can find the <strong>code and weights</strong> for the best model in <strong><a href="https://github.com/apls777/kaggle-imaterialist2020-model" target="_blank">this repo</a></strong>.</p> <p>Switching from the ResNet-50 to the SpineNet-96 backbone improved my score on the LB by ~+0.07 (private), ~+0.05 (public). I'm not sure how much better the SpineNet-143 backbone was, as I trained it with a slightly different configuration.</p> <h1>Data</h1> <p>I split the training data into training and validation datasets in a way that the validation dataset contains at least 10% of images for each class and each attribute. I ended up with 39932 images in the training dataset and 5691 images in the validation dataset.</p> <h1>Training</h1> <ul> <li>The model was trained on top of weights pre-trained on the COCO dataset.</li> <li>It was trained at resolution 1280x1280.</li> <li>The attributes head was trained with the focal loss. Switching to the focal loss improved the score by ~+0.012 (private), ~+0.018 (public)</li> <li>For the augmentations I used random scaling (0.5 - 2.0) and the <a href="https://github.com/tensorflow/tpu/blob/2d9507360e3712715c584e2c21c639b39efd6ad1/models/official/detection/utils/autoaugment_utils.py#L126" target="_blank">v3 policy</a> from Google's <a href="https://arxiv.org/abs/1906.11172" target="_blank">AutoAugment</a> implementation. I modified the code to make it work with masks, as it supports only the object detection case at the moment.</li> </ul> <p>The model was trained with a batch size of 64 for 91.6k steps on a v3-8 TPU for ~69 hours. A big thank you to <a href="https://www.tensorflow.org/tfrc" target="_blank">TensorFlow Research Cloud</a> for giving me free access to TPUs, it helped a lot!</p> <h1>Predictions</h1> <h2>Attributes</h2> <p>For the attribute predictions, I used thresholds that maximize the F1-score for each individual attribute within a category (so it's 294*46=13524 thresholds, but most of them are actually &gt;1 as there are no training examples).</p> <h2>Best Predictions</h2> <p>I don't think the metric used in this competition was good. Unfortunately, it does not take false-negative predictions into account at all.
That basically means that with just 1 prediction per image the score of 1.0 is still achievable (I first asked about FNs in <a href="https://www.kaggle.com/c/imaterialist-fashion-2020-fgvc7/discussion/141891" target="_blank">this</a> discussion and later I also contacted the organizer by email to make sure it's not a mistake, but they assured me that the metric is okay.).</p> <p>So at first I just used one most confident class prediction per image. But high class confidence does not necessarily mean that the segmentation or attributes are good. So next I tried to compute a score for each prediction as an average of class confidence, category mask AP and category attributes F1-score, then I used it to select 1 best prediction per image. It improved my score on the LB by ~+0.041 (private), ~+0.036 (public).</p> <p>In the end, I implemented <a href="https://www.kaggle.com/c/imaterialist-fashion-2020-fgvc7/overview/evaluation" target="_blank">the metric</a> used in the competition, computed AP for each model prediction (based on the validation dataset) and trained a regression model to predict APs for the Mask R-CNN predictions. Then, as usual, I just used 1 best prediction per image based on the predicted APs. It improved the score on the LB by ~+0.065 (private), ~+0.058 (public). For the regression model, I ended up using a random forest regressor from the scikit-learn package (100 estimators, max depth 8). It was trained on features like category ID, class confidence, mask area, number of predicted attributes, category mask AP, category attributes F1-score, and so on - 13 features in total.</p>
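<p>A small sketch of the AP-regressor idea, assuming a dataframe of per-prediction features on the validation set; the feature names are illustrative (the write-up uses 13 features in total):</p> <pre><code>import pandas as pd
from sklearn.ensemble import RandomForestRegressor

feature_cols = ["class_confidence", "mask_area", "n_attributes",
                "category_mask_ap", "category_attr_f1"]   # illustrative subset

def fit_ap_regressor(val_df):
    # val_df: one row per validation prediction, with the true per-prediction AP in 'ap'
    reg = RandomForestRegressor(n_estimators=100, max_depth=8, random_state=0)
    reg.fit(val_df[feature_cols], val_df["ap"])
    return reg

def pick_best_predictions(test_df, reg):
    # Keep the single prediction with the highest predicted AP for each image
    test_df = test_df.copy()
    test_df["pred_ap"] = reg.predict(test_df[feature_cols])
    return test_df.loc[test_df.groupby("image_id")["pred_ap"].idxmax()]
</code></pre>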
M5 Forecasting - Accuracy
My solution, and yet another magic multiplier story..
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I started this competition pretty late, after the validation set labels were released, so I relied entirely on my own CV, which consisted of backtesting on a selection of 7 x 28-day windows, including recent windows, as well as windows in the same date range as the private leaderboard period from previous years (May/June 2015 and 2014).</p> <p>The model was a day-by-day GBT model, 1 model per forecast horizon day. As discussed in several threads (e.g. <a href="https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/151339">here</a>), as well as [1], this has the advantage over recursive models of not amplifying model bias in subsequent horizon days (at the cost of higher variance between different days). A single model was trained for all data slices (store, state, etc), to maximize cross-learning, especially since many individual item-store series were quite intermittent, as many people have noticed.</p> <p>Feature-wise, I used some of the usual suspects such as:</p> <ul> <li>moving window statistics (avg, median, std, skew, kurt, quantiles) on different lags (7, 14, 30, 60, 180 days)</li> <li>seasonal lags and statistics, for different seasonalities and lengths (same day-of-year, day-of-month, day-of-week)</li> <li>for holidays, besides categorical variables, I added a continuous "days to" variable for each holiday (which became positive after the holiday passes, and was bounded to 28 days around the holiday), assuming that sales may be influenced pre-/post- holidays (e.g. Christmas shopping etc). Perhaps as expected, looking at feature importance shows that the importance of the "days to" feature varies widely, from moderately high (e.g. for LaborDay, Easter, and Chanukah) to very low (President's Day, Cinco De Mayo etc). This may simply mean that consumer behavior is different for each holiday, or that effects on sales for some holidays are already encoded efficiently by the day-of-year feature</li> <li>categorical variables, as well as combinations thereof (e.g. state_id + dept_id) were target encoded, with no smoothing applied (since enough examples were available in each category in this dataset). Only high cardinality categorical variables were target encoded (e.g. state_id was not). Among these, the item_id|store_id encoding was the most successful</li> <li>price and derived statistics on it, as proposed in Konstantin's great <a href="https://www.kaggle.com/kyakovlev/m5-simple-fe">kernel</a>, among which price_momentum proved to be very successful (on average, my top 5 features were store_id, dept_id, day-of-week, price momentum and the holiday category).</li> <li>for dealing with long "out of stock" gaps (around 20% of the data, as discussed <a href="https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/138085#790628">here</a>) I tried several approaches, including removing them (and all examples affected indirectly by gaps via lags or seasonality, e.g. days right after, or in weeks after an out-of-stock gap), as well as imputing values into these gaps (using seasonal weighted averages of past days-of-week/month/year).
Perhaps surprisingly, imputation actually hurt CV scores, and gap removal only helped moderately. This is possibly due to not using a gap predictor model to correct for non-gap bias at evaluation time. Another approach, which I did not have time to try, could have been imputing past gaps using my own model (basically pseudo-labeling past gaps).</li> </ul> <p>Finally, I implemented some post-processing (aka "magic"), which in retrospect was a veery bad idea, and the source of a painful lesson :) On the assumption that predictions from GBT trees will basically be based on past sales of similar items on past windows, as clustered by the model (I trained on 830 past days from the target window), I assumed that the model must be biased towards lower values, which I attempted to correct based on a per-item yearly trend estimation (using only past data, aggregated per-item and per-department), basically a per-item "magic multiplier" vector. While this seemed to work well in my CV for the windows I looked at, it was sort of a last moment addition and I didn't have time to properly tune and test the trend generator. After some deliberation, I decided to submit the postprocessed version rather than the raw one (as you know, we were allowed a single submission in this comp), which gave me a score of 0.72667 on the private LB. This morning, I did a late submission with the same exact output, but with the "magic" removed on the private dataset part, which would have given me a private LB score of 0.58797 (or a jump of 614 places on the LB from top 13% to top 1.2%, or silver zone). Ouch!:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3220715%2Fbcad4d92dec187860bb1c8d35887b6d6%2Fsubs.png?generation=1593611584483631&amp;alt=media" alt=""></p> <p>Anyway, I guess the main lesson here, which many people seem to also have run into in this competition, is to steer clear of postprocessing for non-stationary data, especially if not properly tuned / tested ;) I suppose a better per-item trend estimation, or item clustering better than per department, might have worked if properly validated.</p> <p>I used LGBM (1 per horizon day), with a 'tweedie' objective, RMSE metric, max depth of 7, 1400 estimators, and no lambda regularization or feature subsampling, which was tuned by CV.</p> <p>Kudos to everyone who contributed to this comp, it was definitely a fun one!</p> <p>[1] "Recursive and direct multi-step forecasting: the best of both worlds", Souhaib Ben Taieb and Rob J Hyndman</p>
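<p>For reference, a minimal sketch of the day-by-day (direct) setup with the LGBM settings quoted above; <code>make_dataset</code> is a hypothetical helper that builds features using only information available h days before the target:</p> <pre><code>import lightgbm as lgb

params = dict(objective="tweedie", max_depth=7, n_estimators=1400)

def train_direct_models(make_dataset, horizons=range(1, 29)):
    models = {}
    for h in horizons:
        X, y = make_dataset(h)                     # lags shifted by at least h days
        models[h] = lgb.LGBMRegressor(**params).fit(X, y)
    return models
</code></pre>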
Jigsaw Multilingual Toxic Comment Classification
🏅 Best single models 🏅
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I wonder what the best single models around were.</p> <p>From the write-ups it seems like most people got XLM-R to around 0.942* and relied heavily on ensembling, post-processing and other tricks (not diminishing their amazing achievement).</p> <p>Anyway, I'd like to share a <a href="https://www.kaggle.com/hmendonca/jigsaw20-xlm-r-lb0-9487-singel-model">single model</a> (weights and code) with pseudo labels and knowledge distillation that got to <strong>LB 0.9475</strong> (and <strong>0.9487</strong> after applying the hardcoded multipliers from @christofhenkel thx!) here <a href="https://www.kaggle.com/hmendonca/jigsaw20-xlm-r-lb0-9487-singel-model">https://www.kaggle.com/hmendonca/jigsaw20-xlm-r-lb0-9487-singel-model</a></p> <p>Unfortunately we failed to ensemble as well as others and did not explore the language modifiers beforehand. This comp was an amazing learning experience for me, doing NLP for the first time in my life :D</p> <p>The main takeaway for me was that relying too heavily on pseudo labels seems to have made blending a much harder task (knowledge distillation seems to have worsened that issue). It took me too long to realize that, but lesson learned for the next comps. I'll extend my report when I find some more time, but please let me know your thoughts on that! (and please share your models if you can)</p> <p>Cheers!</p>
Plant Pathology 2020 - FGVC7
0.98454(>1st) solution (RegNetY)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is NOT the 11th solution. I did not choose it.</p> <p>Model RegNetY-32GF  ・newest model from FAIR, this is powerful.  ・<a href="https://github.com/facebookresearch/pycls">https://github.com/facebookresearch/pycls</a></p> <p>Data augmentation (This idea is mostly from <a href="https://www.kaggle.com/akasharidas/plant-pathology-2020-in-pytorch">https://www.kaggle.com/akasharidas/plant-pathology-2020-in-pytorch</a>)  Horizontal and Vertical Flip,  ShiftScaleRotate,  RandomBrightness,  IAAEmboss, IAASharpen, Blur,  ElasticTransform,  IAAPiecewiseAffine</p> <p>Imagesize  512x512</p> <p>scheduler  cosine annealing</p> <p>optimizer  SGD+momentum</p> <p>loss  Focal loss</p> <p>Inference  one fold, no TTA</p> <p>other  Pseudo labeling (0.98&gt;confidence)  This idea is from my teammate xiaopeng! </p>
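<p>A minimal sketch of the pseudo-labeling step mentioned above, reading "(0.98&gt;confidence)" as a 0.98 confidence threshold for keeping predictions; the function name and array layout are illustrative:</p> <pre><code>import numpy as np

def make_pseudo_labels(test_probs, threshold=0.98):
    # test_probs: (n_images, n_classes) softmax outputs on the unlabeled test set.
    # Keep only images whose top class probability exceeds the threshold and
    # use the argmax class as a hard pseudo-label to add to the training set.
    confidence = test_probs.max(axis=1)
    keep = confidence &gt; threshold
    return np.where(keep)[0], test_probs[keep].argmax(axis=1)
</code></pre>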
Jigsaw Multilingual Toxic Comment Classification
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Jigsaw for holding such an amazing competition! And also thanks to my teammate <a href="/tonyxu">@tonyxu</a> for diligence and brilliant ideas. This is our second jigsaw competition and also the first time we have joined such a multilingual NLP competition. </p> <h2>Brief summary</h2> <p>Before the team merger, I mainly focused on how to train a single model with decent performance while my teammate worked on training diverse models, especially training on single-language data and combining them together. Before the merger, we had both achieved 0.9487 on the public leaderboard. As you can see, we worked in relatively different directions and achieved 0.9499 right after the team merger. After the team merger, we focused on training more diverse models for diversity, which gave us a great boost but also caused us to ignore the post-processing trick most top winners found.</p> <h2>Detailed summary</h2> <h3>Data sampling</h3> <p>We used both the English training data and its publicly shared translations. Since there was a tremendous amount of training data available, the main issue was not too little but too much. I mainly adopted a hard sampling technique to sample the training data. </p> <p>First, I included all positives and randomly sampled negatives to train an XLM-Roberta Large model and then used it to make predictions on all training data, including all English data and the translated jigsaw toxic comment data. </p> <p>Then, for the English data, we set a threshold for negatives while we took all positives due to the data imbalance. The data with a gap between the predicted probability and the label greater than the threshold were all taken, and the data with a smaller gap were sampled. The same logic applied for the translated data, except for the threshold setting. A single threshold may cause issues because translation led to information loss. So I set a value range, say [0.3, 0.7], and any data in this range were all taken. I didn't convert soft labels to hard 0/1 labels because that worked better in the last jigsaw competition. The number of training examples is ~934k. XLM-Roberta Large trained on this data could reach 0.9469 on the public lb and 0.9458 on the private lb. </p> <h3>Multilingual Models</h3> <p>Our multilingual models mainly consist of <code>XLM-Roberta-Large</code>, <code>XLM-Roberta-base</code>, <code>Bert-multilingual-base-uncased</code>. My teammate also did language model finetuning on <code>jigsaw-unintended-bias</code> data. We don't know the exact number of each of them used in the final ensemble, but the best public scores are 0.9469, 0.9364 and 0.9325 respectively. </p> <p>In terms of model structure, I used a vanilla classification head on the CLS token of the second-to-last layer or the concatenation of the last four layers, while my teammate took the <a href="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/discussion/103280">architecture of the winning model</a> in the last jigsaw competition.</p> <h3>Monolingual Models</h3> <p>Just one week before the deadline, we found the monolingual language models in the <a href="https://huggingface.co/models">huggingface communities</a>.
We trained the models listed below on corresponding translated language, which gave us boost from 0.9507 to 0.9512 on public lb. </p> <p>&gt; * es: - <a href="https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased#">https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased#</a> - <a href="https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased#">https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased#</a> - <a href="https://github.com/dccuchile/beto">https://github.com/dccuchile/beto</a> <br> * fr: - <a href="https://huggingface.co/camembert/camembert-large">https://huggingface.co/camembert/camembert-large</a> - <a href="https://huggingface.co/camembert-base">https://huggingface.co/camembert-base</a> - <a href="https://huggingface.co/flaubert/flaubert-large-cased">https://huggingface.co/flaubert/flaubert-large-cased</a> - <a href="https://huggingface.co/flaubert/flaubert-base-uncased">https://huggingface.co/flaubert/flaubert-base-uncased</a> - <a href="https://huggingface.co/flaubert/flaubert-base-cased">https://huggingface.co/flaubert/flaubert-base-cased</a> * tr: - (128k vocabulary) <a href="https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased">https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased</a> - (128k vocabulary) <a href="https://huggingface.co/dbmdz/bert-base-turkish-128k-cased#">https://huggingface.co/dbmdz/bert-base-turkish-128k-cased#</a> - (32k vocabulary) <a href="https://huggingface.co/dbmdz/bert-base-turkish-uncased#">https://huggingface.co/dbmdz/bert-base-turkish-uncased#</a> - (32k vocabulary) <a href="https://huggingface.co/dbmdz/bert-base-turkish-cased#">https://huggingface.co/dbmdz/bert-base-turkish-cased#</a> <br> * it - <a href="https://huggingface.co/dbmdz/bert-base-italian-xxl-uncased#">https://huggingface.co/dbmdz/bert-base-italian-xxl-uncased#</a> - <a href="https://huggingface.co/dbmdz/bert-base-italian-xxl-cased#">https://huggingface.co/dbmdz/bert-base-italian-xxl-cased#</a> - <a href="https://huggingface.co/dbmdz/bert-base-italian-uncased#">https://huggingface.co/dbmdz/bert-base-italian-uncased#</a> - <a href="https://huggingface.co/dbmdz/bert-base-italian-cased#">https://huggingface.co/dbmdz/bert-base-italian-cased#</a> <br> * pt - <a href="https://huggingface.co/neuralmind/bert-large-portuguese-cased#">https://huggingface.co/neuralmind/bert-large-portuguese-cased#</a> - <a href="https://huggingface.co/neuralmind/bert-base-portuguese-cased#">https://huggingface.co/neuralmind/bert-base-portuguese-cased#</a> <br> * ru - <a href="https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased">https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased</a> - <a href="https://huggingface.co/DeepPavlov/rubert-base-cased#">https://huggingface.co/DeepPavlov/rubert-base-cased#</a></p> <h3>English models</h3> <p>To add more diversities, We also trained <code>roberta-large, bert-large, albert</code> on english training data and made prediction on data translated into English and to our surprise, this did improve the performance! 
And we also had all cross-lingual models predict on English-translated test data and added them into our final ensemble.</p> <h3>Training</h3> <ul> <li><a href="https://arxiv.org/abs/1905.05583">dynamic learning rate decay</a></li> <li>Adam optimizer</li> <li>binary cross entropy loss function</li> <li>learning rate of 1e-6 to 3e-6 for 2 epochs for large models and 3 epochs for base models</li> <li>after training on English and translated data, we both finetuned further on the validation dataset for 1-2 epochs.</li> </ul> <p>My teammate had two different training schemes:</p> <ul> <li>take <code>jigsaw-toxic-comment-train.csv</code> and its translated datasets to train 7 models, one for each language, and combine their predictions in the way below (see the snippet after this list)</li> <li>take <code>jigsaw-unintended-bias-train.csv</code> and its translation to train in the first epoch, and use the same data as in the above point</li> </ul> <pre><code>test = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/test.csv")
sub = pd.read_csv('/kaggle/input/toxu-submissions/xlmroberta-large-lm-all/submission.csv')
test['toxic'] = 0.0
for lang in ['en', 'fr', 'es', 'it', 'pt', 'ru', 'tr']:
    lang_df = pd.read_csv(f'/kaggle/input/toxu-submissions/xlmroberta-large-lm-all/submission_{lang}.csv')
    idx = test[test['lang'] == lang].index
    test.loc[idx, 'toxic'] += lang_df.loc[idx, 'toxic'] * weight
    idx = test[test['lang'] != lang].index
    test.loc[idx, 'toxic'] += lang_df.loc[idx, 'toxic']
test['toxic'] /= 7
</code></pre> <h3>Postprocessing</h3> <p>After seeing the trick used in nearly all the top solutions, we felt it was really a pity that we missed it: once, my teammate forgot to average the predictions of the 7 language-specific models, causing some predictions to be larger and giving a 0.0001 improvement. We didn't delve deeper into this due to the tiny improvement!</p> <h3>Ensemble</h3> <p>We tried power averaging with power 3, as in the <a href="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/discussion/100661">2nd place solution</a> of the last jigsaw competition, but it didn't bring improvement compared with a normal weighted average.</p> <h3>What didn't work</h3> <ul> <li>auxiliary task with 5 other labels in the training data</li> <li>sample weights for different languages</li> <li>extra labelled hatespeech data</li> <li>pseudo labelling</li> <li>multi-sample dropout</li> <li>cyclic learning rate</li> <li>checkpoint ensemble</li> <li>training multilingual bert from scratch on a multilingual wikipedia data dump</li> </ul>
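<p>For reference, a minimal sketch of the power-3 averaging mentioned above (taking or dropping the final cube root does not change the ranking, so this is just one reasonable reading):</p> <pre><code>import numpy as np

def power_average(predictions, p=3):
    # Average the p-th powers of the member predictions; compared with a plain
    # mean this pulls the blend towards the more confident (higher) scores.
    preds = np.stack(predictions, axis=0)     # (n_models, n_samples)
    return np.mean(preds ** p, axis=0) ** (1.0 / p)
</code></pre>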
University of Liverpool - Ion Switching
Synthetic data for 0.945 public LB solution (no training data used at all)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This competition was unusual in that we have data with some level of unremovable aleatoric noise (gaussian with a given std) added to a sequence which we can model (the superposition of multiple HMMs). There's a cap to what we can achieve (given that it is not possible to fully predict the output of Y), so our job is to: a) capture the underlying distribution of the GT (Y), and b) minimize our model's epistemic noise.</p> <p><strong>Synthetic data</strong></p> <p>It turns out you can build as much GT (Y) as you want by running the Matlab scripts provided by <a href="/richardbj">@richardbj</a> (modifying the max number of channels to 1) and getting multiple runs of what they call M0 and M1. Then to generate either 3, 5, 10, etc. max open channels you just add non-overlapping outputs. That's how I generated ground truth.</p> <p>The next challenge comes from modeling X after Y. In their paper, X is the result of a transformation of Y (amplifier) plus added noise (50 Hz + thermal) + non-linearities and/or delays. I built a tiny network to model this, whose job was to minimize the energy of the residual of F(Y) vs. X; this results in a convolution kernel shifted ~0.03 samples to the right (delayed):</p> <p>Convolution kernel <img src="https://i.imgur.com/FswlWPk.png" alt=""></p> <p>Then I modeled the gaussian noise of multiple channels:</p> <pre><code>train_std_by_type = {
    ('l', 1): [0.24515989422798157, 0.24703997373580933],
    ('h', 1): [0.24486009776592255, 0.2447292059659958],
    ('h', 3): [0.265836238861084],
    ('h', 10): [0.4045635759830475, 0.40377894043922424],
    ('h', 5): [0.28642651438713074, 0.28378984332084656]}
</code></pre> <p>...where <code>l</code> is the low-probability model (M0) and <code>h</code> is the high-probability model. Why there is more noise as more channels get added is, I think, the weak point of my modeling, since the noise should be independent of the number of open channels; anyway, with all the above we can now generate unlimited training data:</p> <p>Synthetic private: <img src="https://i.imgur.com/51Z3VUq.png" alt=""></p> <p>Synthetic public: <img src="https://i.imgur.com/Pe02luW.png" alt=""></p> <p>Synthetic train (small segment, orange) vs. Real train (blue) <img src="https://i.imgur.com/aXnVeFm.png" alt=""> (there's a small shift/offset which is intentional, see below)</p> <p>FWIW, if you think about it, I could (should) have built a GAN to generate indistinguishable synthetic data and not model manually.</p> <p>I also removed (subtracted) perfectly phase-aligned 50 Hz waves to remove 50 Hz components.</p> <p><strong>Architectures</strong></p> <p>With the above I built balanced training data and some architectures, which got me 0.945 on the public LB with a single model (no ensembling, no averaging, etc.). </p> <p>I used synthetic data modeled after the public LB, private LB, train set, etc. to track my model. I tried numerous architectures (wavenet, "simple" but overparametrized convolutions, RNNs, QRNNs, Transformers and even the Reformer); no matter the architecture, the ceiling was always 0.945 with a single model.
I also built an L2 model which didn't help.</p> <p>The final architecture has two aspects: 1) a 6-depth conv layer (nothing fancy). This branch has a limited view of the signal, which is addressed by: 2) a histogram network that is given the whole segment (100,000 samples), builds a histogram and then builds a dense representation (256 features) of the histogram; a final thresholding network (just a bunch of FC layers) then combines both. The histogram is built using a coordconv-like 1D network, but other networks work well.</p> <p><strong>Unknowns, and augmenting the synthetic data</strong></p> <p>Some unknowns were still problematic in the test data:</p> <ul> <li>The offset was unknown</li> <li>The noise was unknown</li> <li>The amplification was unknown</li> </ul> <p>I added a random offset (+-0.01) which did help. I also added variable amplification which also helped (but less), and finally, and most promising, I also added variable (gaussian) noise which DID NOT help and I still don't know why. In retrospect I think I should have added a few extra heads to predict the noise std, amplification, and offset so the net would leverage that information to get better predictions. I didn't do it, thinking that with unlimited data, 3 GPUs, and ~8 hours of training the net would learn it by itself. It didn't.</p> <p><strong>Code</strong></p> <p>Code is at <a href="https://github.com/antorsae/liverpool-ion-switching">https://github.com/antorsae/liverpool-ion-switching</a>; it uses Fastai2 and little more. </p>
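<p>To make the synthetic-data idea concrete, here is a minimal Python sketch of how multi-channel training data could be assembled from single-channel ground-truth runs. The array names, the kernel values and the noise level below are illustrative placeholders, not the author's actual code or fitted values:</p> <p><code>
import numpy as np

def make_synthetic_segment(gt_single, n_channels, kernel, noise_std, rng):
    """Sum non-overlapping single-channel GT traces, apply the fitted amplifier
    kernel, and add gaussian noise (all values here are stand-ins)."""
    idx = rng.choice(len(gt_single), size=n_channels, replace=False)
    y = np.sum(gt_single[idx], axis=0)                           # target: open-channel count
    x = np.convolve(y.astype(np.float32), kernel, mode="same")   # X = F(Y)
    x = x + rng.normal(0.0, noise_std, size=x.shape)             # aleatoric gaussian noise
    # small random offset / amplification augmentation, as described in the post
    x = x * rng.uniform(0.98, 1.02) + rng.uniform(-0.01, 0.01)
    return x, y

rng = np.random.default_rng(0)
gt_single = rng.integers(0, 2, size=(20, 100_000))           # stand-in for the Matlab GT runs
kernel = np.array([0.1, 0.85, 0.05], dtype=np.float32)       # stand-in amplifier kernel
x, y = make_synthetic_segment(gt_single, n_channels=5, kernel=kernel, noise_std=0.28, rng=rng)
</code></p>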
Herbarium 2020 - FGVC7
4th method and code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Herbarium 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi, we would like to share our method and the code link is <a href="https://github.com/shijieliu/herbarium2020_4th">herbarium2020_4th</a></p> <h1>Methods</h1> <p>Our method mainly refers to the paper <a href="https://arxiv.org/pdf/1904.04232.pdf">A CLOSER LOOK AT FEW-SHOT CLASSIFICATION</a> and requires two-stage training; we made a few changes to get the best result:</p> <ol> <li>we use 448 * 448 for training and testing</li> <li>we use seresnext50 as our backbone (compared against senet154 and se_resnet152)</li> <li>we add a half-crop in the image augmentation stage</li> <li>we do not use cosine distance because cosine distance leads to slow convergence and low precision</li> <li>we remove class No. 23079 from train and test</li> </ol>
M5 Forecasting - Accuracy
111th Place Solution (7th on Public) - Key Ideas
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><ul> <li>Week-by-week model (one model for the first week with a lag_7 feature, one model for the second week with a lag_14 feature, one model for the third week with a lag_21 feature, one model for the fourth week with a lag_28 feature). A recursive model works on Public, but doesn't work on CV and Private.</li> <li>Target encoding for item_id, month, store_id, mday, event_name (hand-written target encoding is used due to memory limitations).</li> <li>Precise hyperparameter tuning (gives a big gain). </li> <li>Special Features : <ul><li>Average Lag (lag_7 + lag_14 + lag_21 + lag_28 + lag_35 + lag_42) to capture the weekday pattern</li> <li>Expected Sales (lag_7 + (mean_7 - mean_28)) to capture the sales trend</li> <li>Price Diff (sell_price - sell_price_lag_7)</li></ul></li> <li>One model in the ensemble: y = mean_28(x) + LGBM.predict(x), with y_train = y - mean_28 (see the sketch right after this list).</li> </ul>
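<p>A rough Python sketch of the week-by-week residual idea above: one LightGBM per forecast week, restricted to lags of at least 7 * week days, trained on sales minus the 28-day mean. Column names, the feature-filtering rule and the parameters are illustrative assumptions, not the author's exact code:</p> <p><code>
import lightgbm as lgb

def train_week_model(df, week, features):
    """Model for forecast week `week` (1..4): only lag features of at least
    7*week days are allowed, and the target is the residual over mean_28."""
    min_lag = 7 * week
    feats = [f for f in features
             if not f.startswith("lag_") or int(f.split("_")[1]) &gt;= min_lag]
    y = df["sales"] - df["mean_28"]          # y_train = y - mean_28
    model = lgb.LGBMRegressor(objective="regression",
                              n_estimators=1000, learning_rate=0.05)
    model.fit(df[feats], y)
    return model, feats

def predict_week(model, feats, df):
    # prediction adds the 28-day mean back: y = mean_28(x) + LGBM.predict(x)
    return df["mean_28"] + model.predict(df[feats])
</code></p>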
Jigsaw Multilingual Toxic Comment Classification
10th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to my wonderful teammates <a href="/naivelamb">@naivelamb</a> <a href="/wuyhbb">@wuyhbb</a> <a href="/terenceliu4444">@terenceliu4444</a> <a href="/hughshaoqz">@hughshaoqz</a>. Thank you for helping me get my 5th gold medal!</p> <p>Congratulations to all winners!</p> <h1>Summary</h1> <p>Here are the main ideas that worked for us.</p> <ul> <li>Translated Data</li> <li>Pseudo Labelling</li> <li>Multi-Stage Training</li> <li>Freeze Embed Layer</li> <li>UDA (<a href="https://arxiv.org/pdf/1904.12848.pdf">https://arxiv.org/pdf/1904.12848.pdf</a>)</li> <li>Mono Language Modeling</li> <li>Test Time Augmentation</li> </ul> <h1>Details</h1> <h3>Translated Data</h3> <p>We used the translated data listed here for training and testing (for TTA):</p> <p><a href="https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification/discussion/159888">https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification/discussion/159888</a></p> <p>Thank you guys for publishing these excellent datasets!</p> <h3>Pseudo Labelling</h3> <p>We trained some baseline models, then blended them with some public submission files to get to LB <code>0.9481</code>, and used that as the pseudo label till the end.</p> <ul> <li><a href="https://www.kaggle.com/shonenkov/tpu-inference-super-fast-xlmroberta">https://www.kaggle.com/shonenkov/tpu-inference-super-fast-xlmroberta</a></li> <li><a href="https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta">https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta</a></li> </ul> <p>When training on this test data with pseudo labels, we found that using all test data with a KL-Div loss on soft labels gave us the best performance.</p> <h3>Multi-Stage Training</h3> <p>We all know that fine-tuning the model on the validation set after training on the train set can boost the LB score, and we call it <code>2-stage training</code>. (<a href="https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta">https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta</a>) By adding more stages to our training pipeline we were able to further boost our LB score. 
Some of our pipelines looked like:</p> <ul> <li>Pseudo Labelling (5epo) -&gt; Train1 (1epo) -&gt; Valid (3epo)</li> <li>Train2 (1epo) -&gt; Train1 (1epo) -&gt; Valid (3epo)</li> <li>Train1 (1epo) -&gt; Pseudo Labelling (3epo) -&gt; Valid (3epo)</li> <li>...</li> </ul> <p>(Train1: training data of the 1st Jigsaw competition in 2018) (Train2: training data of the 2nd Jigsaw competition in 2019) (Pseudo Labelling: test data of this competition) (Valid: validation data of this competition)</p> <p>Note that we usually don't train on the full dataset when using Train1 and Train2, but on a subset of it.</p> <h3>Freeze Embed Layer</h3> <p>Freezing the embedding layer of transformers saves GPU memory, so we were able to use a bigger batch size when training, while also speeding up the training process.</p> <p>With multi-stage training, pseudo labelling and the frozen embedding layer we were able to get to LB <code>0.9487</code> without blending any public submission file.</p> <h3>Test Time Augmentation</h3> <p>We run inference on all 6 languages, and the blending weight between the original language and the other 5 languages is 8:2. We got to LB <code>0.9492</code> with this TTA.</p> <h3>UDA</h3> <p>reference: <a href="https://arxiv.org/pdf/1904.12848.pdf">https://arxiv.org/pdf/1904.12848.pdf</a></p> <p>Then we blended in some models that used UDA during training to get to LB <code>0.9498</code>; with some further tuning we got to LB <code>0.9500</code>.</p> <h3>Mono Language Modeling</h3> <p>In the last week, we found that blending with models trained only on single-language data can also boost the CV score. Since the validation data only contains <code>es</code>, <code>it</code> and <code>tr</code>, we only did <code>Mono Language Modeling</code> on these 3 languages.</p> <p>By blending in models trained only on <code>es</code>, <code>it</code> or <code>tr</code>, we finally got to our best public LB <code>0.9504</code>, and this is our best submission on the private LB as well. </p> <hr> <h1>Thank you for reading!</h1>
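<p>Two of the ingredients above are easy to show in code: freezing the embedding layer and training on soft pseudo-labels with a KL-divergence loss. A hedged PyTorch sketch; the model choice, attribute path and variable names are my assumptions, not the team's published code:</p> <p><code>
import torch.nn.functional as F
from transformers import XLMRobertaForSequenceClassification

model = XLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-large", num_labels=2)

# freeze the (large) embedding matrix: saves GPU memory and allows bigger batches
for p in model.roberta.embeddings.parameters():
    p.requires_grad = False

def soft_label_kl_loss(logits, soft_targets):
    """KL divergence between the model distribution and soft pseudo-labels
    (soft_targets are probabilities taken from an earlier blended submission)."""
    log_probs = F.log_softmax(logits, dim=-1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean")
</code></p>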
Jigsaw Multilingual Toxic Comment Classification
13th Place Solution Part I. My first gold medal
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, a huge thank you to my teammates <a href="/euclidean">@euclidean</a> <a href="/nullrecurrent">@nullrecurrent</a> <a href="/luohongchen1993">@luohongchen1993</a> <a href="/sherryli94">@sherryli94</a>, you guys are so creative and diligent, and I learned tremendously from all of you. This gold medal is the first gold for all of us, and I feel so honored to share this experience with you. Cheers! </p> <p>And congratulations to all medal winners :)</p> <p>Most of the top winners have already covered many of our "what worked/what didn't work", so I want to use this post to record some points that I think we did differently. </p> <h3>Pipeline</h3> <p>Our final pipeline is: 1) train on a selected part of the training set, 2) train on another part of the training set + PL on the test set, 3) 5cv on validation + albumentation of validation (thanks to <a href="/shonenkov">@shonenkov</a> 's kernel) </p> <p>In 5cv, using the soft label (validation prediction + validation true label)/2 improves 5cv. Some very controversial labels become ~0.5, which is intuitive to me. </p> <p>Also, randomizing the order of samples in 5cv while keeping the same comments (original and its augmentation) in the same batch helped add additional variability. Our ratio of augmented vs original comments was 1:1. More augmentation didn't help on LB for us. </p> <h3>Training Diversity</h3> <p>We added a lot of variability in training using LR decay, cosine annealing, SWA, class weights, sample weights (on augmentation/language), different pre-trained models, and freezing embedding layers/encoding layers. Except for XLMR, we also tried LSTM, etc. These models may not work well alone, but the diversity helped in our stacking phase. </p> <h3>Stacking</h3> <p>We ended up with 1900+ first-level features in stacking, which is quite amazing. We eventually passed 2 StackNet models and 1 forward selection model (all with LB 0.9494) to the post-processing phase. </p> <p>Initially the greedy forward-selection ensemble overfit validation AUC massively, but adding random initialization and a logloss criterion along with the AUC criterion helped to get a well-balanced blending. </p> <h3>Post Processing</h3> <p>We scaled up the predicted probability of comments that contain cursed words of different languages. That improved about 3 bps for each stacking model, and the rank mean of them finally sent us to the edge of the gold zone. </p> <h3>Things that didn't work for me</h3> <ul> <li>Using PCA of embeddings to construct a training set similar to the validation and test sets. </li> <li>A PCA-selected train set to substitute for the validation set. </li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F834077%2Fb41ed87175b5b6ac45a779a7afc40ab1%2Fttagoutou.jpg?generation=1593209576045814&amp;alt=media" alt=""></p>
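<p>The cursed-word post-processing described above can be sketched in a few lines of Python. The word list, the scale factor and the column names below are illustrative assumptions; the team's actual list and factor are not public:</p> <p><code>
import numpy as np
import pandas as pd

CURSED = {"idiota", "merda", "aptal"}   # hypothetical multilingual word list

def scale_cursed(df, factor=1.2):
    """Scale up predicted toxicity of comments containing cursed words, capped at 1."""
    has_curse = df["comment_text"].str.lower().apply(
        lambda text: any(w in text.split() for w in CURSED))
    pred = df["toxic"].copy()
    pred[has_curse] = np.minimum(pred[has_curse] * factor, 1.0)
    return pred
</code></p>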
University of Liverpool - Ion Switching
15th solution (wavenet only)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I finished my best solution (I thought) with an unexpectedly good score on the last day, but had no time to finetune...</p> <p>The following model got the best score for me but it is not "beautiful" :(.</p> <p>OS: windows 10</p> <p>framework : tensorflow 1.4 / keras 2.2</p> <p>dataset : 'clean-kalman/train_clean_kalman.csv' with batch 8 removed</p> <p>model : modified wavenet based on <a href="https://www.kaggle.com/siavrez/wavenet-keras">https://www.kaggle.com/siavrez/wavenet-keras</a></p> <p>optimizer : adam (adam &gt; ranger &gt; sgd)</p> <p>loss : bce + weighted macro_double_soft_f1 (bce &gt; ce = focal loss &gt; macro f1)</p> <p>batch : 16</p> <p>feature : <strong>rfc / lag_with_pct 123 / roll_stats 5 / gradient1-4</strong> added in the beginning, <strong>abs(sig), exp(sig), sig^2</strong> added in a model layer (after augmentation)</p> <p>augmentation : flip / gaussian noise +-0.0075 / random shift 5</p> <p>post processing : flip TTA + group 5-fold averaging</p> <p>*key in the early stage without the rfc feature : use batches 1-4</p> <p>*key2 : train with a low lr and without early stopping: reduce_lr = ReduceLROnPlateau(min_delta = 0.000001,cooldown = 3,factor=0.3, patience=15, min_lr=1e-6, verbose=2)</p> <p>early_stopping = EarlyStopping(patience=1000, verbose=2)</p> <p>*key3 : loss function</p>
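<p>A minimal sketch of the flip TTA + group-fold averaging step mentioned above, assuming Keras fold models that map a (batch, time, features) tensor to per-timestep class probabilities; names and shapes are illustrative:</p> <p><code>
import numpy as np

def predict_with_flip_tta(models, x):
    """Average predictions of the fold models over the original signal and
    its time-reversed copy (flip TTA), then average across folds."""
    preds = []
    for m in models:
        p = m.predict(x)                                # (batch, time, classes)
        p_flip = m.predict(x[:, ::-1, :])[:, ::-1, :]   # predict flipped, flip back
        preds.append((p + p_flip) / 2.0)
    return np.mean(preds, axis=0)
</code></p>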
Herbarium 2020 - FGVC7
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Herbarium 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sorry for the delay, I'll briefly describe my solution. </p> <h1><strong>Methods</strong></h1> <p>I used lip-r50 and seresnext101 as the backbones, and 448 * 448 resolution for training. To save resources, I used 352 * 352 resolution to train seresnext101 and finetuned with 448 * 448 resolution in the last few epochs. For both backbones, I trained for 120 epochs with cosine decay, using label smoothing and mixup (balanced sampling). I chose the standard cross entropy loss as the loss function. During testing, 576 * 576 resolution and tencrop were used. Finally, I ensembled the two models above to get the final softmax score (seresnext101_softmax_score * 0.6 + lip-r50_softmax_score * 0.4). Just like other teams, rescoring was used to refine the prediction: because each category has no more than 10 samples in the test set, I sorted the samples in the test set by confidence score, and for those samples whose top-1 category already had 10 samples, I changed them to the top-2 category instead and re-sorted the scores. That gave the final score.</p> <p>I also spent a lot of time training some offline classifiers (such as kNN and SVM) using CNN features, but got little improvement.</p>
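<p>The rescoring trick (at most 10 test images per class) can be sketched as a greedy reassignment in Python. The function below is my reconstruction of the idea, not the author's code, and the variable names are assumptions:</p> <p><code>
import numpy as np

def rescore(probs, max_per_class=10):
    """probs: (n_samples, n_classes) softmax scores. Assign labels to the most
    confident samples first; once a class holds max_per_class samples, fall back
    to the next-best class that is still available."""
    order = np.argsort(-probs.max(axis=1))      # most confident samples first
    counts = np.zeros(probs.shape[1], dtype=int)
    labels = np.empty(probs.shape[0], dtype=int)
    for i in order:
        for cls in np.argsort(-probs[i]):       # top-1, then top-2, ...
            if counts[cls] &lt; max_per_class:
                labels[i] = cls
                counts[cls] += 1
                break
    return labels
</code></p>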
Abstraction and Reasoning Challenge
8th Place Solution (Part 1) - Object detection
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Abstraction and Reasoning Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to Kaggle and Francois for hosting this awesome competition. Thank you to my teammates for doing an awesome job. Thank you to <a href="/meaninglesslives">@meaninglesslives</a> for giving hope in this challenge by breaking the 1.00 submission and sharing it publicly.</p> <p>Congratulations to every medalist in this competition. It got really close in the 0.862 region and it was fun to see the top of the LB, especially in the last hours in the race for prize money.</p> <p>Congratulations to <a href="/icecuber">@icecuber</a> on winning this competition. </p> <p>As there are many different approaches needed in this competition, our team decided to split our results into 3 parts.</p> <p>Part 1: My solution is about simple object detection, which solves 6 tasks on the public LB. I put it together in a kernel (a bit messy - I did not spend time yet on cleaning the code, but added some visualisation which I hope helps in understanding the code; feel free to ask any question):</p> <p><a href="https://www.kaggle.com/jpbremer/very-simple-dsl-solving-up-to-6-tasks">https://www.kaggle.com/jpbremer/very-simple-dsl-solving-up-to-6-tasks</a></p> <p>It solves 4 tasks in its current form. With some more properties / luck it was able to solve 6 tasks in total on the LB.</p> <p>The pipeline is:</p> <ol> <li>evaluate properties from the training outputs (e.g. how many colors in the output, symmetries, ...)</li> <li>random crop the input</li> <li>check if the crop has the same properties as those retrieved from the training tasks; if yes: append, if no: repeat from step 2</li> </ol> <p>I am looking forward to interesting approaches by different competitors and would like to see the test tasks (especially the one nearly all public kernels solve). </p> <p>I am proud to become a Kaggle Master in this awesome community. Kudos to everyone involved in this competition!</p>
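<p>The crop-and-check loop in the pipeline above can be compressed into a toy Python sketch. The property set here is deliberately tiny and hypothetical (shape plus colours used); the actual kernel checks richer properties such as symmetries:</p> <p><code>
import numpy as np

def output_properties(grid):
    """Toy property set: grid shape and the set of colours used."""
    return (grid.shape, frozenset(np.unique(grid)))

def solve_by_random_crop(train_pairs, test_input, n_tries=10_000, seed=0):
    rng = np.random.default_rng(seed)
    target_props = {output_properties(out) for _, out in train_pairs}   # step 1
    h, w = test_input.shape
    for _ in range(n_tries):                                            # steps 2-3
        ch, cw = rng.integers(1, h + 1), rng.integers(1, w + 1)
        r, c = rng.integers(0, h - ch + 1), rng.integers(0, w - cw + 1)
        crop = test_input[r:r + ch, c:c + cw]
        if output_properties(crop) in target_props:
            return crop
    return None
</code></p>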
M5 Forecasting - Accuracy
7th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I learned a lot from the community in this competition, so I wanted to share my solution and some learnings I had from the competition as well. </p> <p>Before I go deeper, I wanted to first thank the hosts, Kaggle, and the community as a whole. I learned a lot from this experience and am glad to have done it. Credit also to these two notebooks where a lot of my inspiration came from:</p> <ul> <li><a href="/kyakovlev">@kyakovlev</a> : <a href="https://www.kaggle.com/kyakovlev/m5-three-shades-of-dark-darker-magic">https://www.kaggle.com/kyakovlev/m5-three-shades-of-dark-darker-magic</a></li> <li><a href="/kneroma">@kneroma</a> : <a href="https://www.kaggle.com/kneroma/m5-forecast-v2-python">https://www.kaggle.com/kneroma/m5-forecast-v2-python</a></li> </ul> <h1>Solution</h1> <p>My solution itself is not too innovative, but how I got there might be interesting. One of the big things I learned from this competition is the importance of having a reliable and fast cross-validation method. In common with themes in other posts, I ended up finding that simpler was better. With the cross-validation, I pruned a lot of features that I found did not provide a large benefit, to keep the model simpler.</p> <h2>Algorithm</h2> <p>Gradient boosting trees (LightGBM), one model per store</p> <h2>Features</h2> <ul> <li>Item id, department id, category id</li> <li>Price features: current price, price relative to past prices, statistics on the price (max, min, standard deviation, mean… etc.)</li> <li>Sales features: recursive 7-day, 14-day lagged rolling means + 28-day lagged rolling means. No lagged rolling standard deviations</li> <li>Date features: date attributes (e.g. month, year, is weekend, day of week, … etc.), nearest upcoming/past event name, event name 1, event name 2, SNAP, no event type</li> </ul> <h2>Cross-validation / hyperparameter optimization</h2> <ul> <li>It took me a while to come up with a cross-validation approach I was happy with, trying to balance accuracy and speed. I took the data for stores CA1, TX2, and WI3 to increase training speed, and used time periods d1550-d1577, d1858-d1885, d1886-d1913 as my cross-validation periods. For each period, I trained a model with all data before the period, and then used the models to predict the sales within the period.</li> <li>I tested the sensitivity of the method by running it on the same model with different seeds to see how widely the scores varied by random chance.</li> <li>Even with those optimizations, cross validation took a while: ~24 hours for 40 parameter sets</li> <li>I tuned LightGBM parameters with <a href="https://github.com/hyperopt/hyperopt">hyperopt</a>.</li> <li>A shoutout also to <a href="https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/148273">this thread</a> for helping to improve LightGBM training speed by ~2x.</li> </ul> <h2>Things that I tried but did not include</h2> <p>I think the following, if done better, may have helped. However, I found the benefit of these improvements to be marginal, so I opted not to use them for simplicity.</p> <ul> <li>Predicting overall sales for each day and scaling predictions accordingly. 
I did notice that if I could perfectly predict overall sales for each day, the scores would improve tremendously.</li> <li>Weighting data points</li> <li>Predicting the number of dollars spent per item instead.</li> <li>Removing "out-of-stock" periods from the training dataset.</li> </ul> <p>For those that are interested, I also wrote a quick <a href="https://medium.com/@eugene.c.tang/things-i-learned-from-my-first-kaggle-competition-362409a9f5b0?source=friends_link&amp;sk=17e49797b631c13586350c308b4e5603">blog post</a> on some high-level learnings.</p> <p>I hope you find some of these thoughts helpful, and I am happy to answer any questions!</p>
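<p>The lagged rolling-mean sales features described above can be sketched with pandas. The long-format layout and column names are assumptions, not the author's code; the "recursive" 7-day-lag features require feeding predictions back day by day when forecasting beyond the first week:</p> <p><code>
import pandas as pd

def add_rolling_features(df):
    """df: long format with columns [id, d, sales], sorted by day within each id."""
    g = df.groupby("id")["sales"]
    # 28-day-lagged rolling means (safe for the whole 28-day horizon)
    for window in (7, 14, 28):
        df[f"rmean_28_{window}"] = g.transform(
            lambda s, w=window: s.shift(28).rolling(w).mean())
    # 7-day-lagged rolling means (recursive: re-computed from predictions day by day)
    for window in (7, 14):
        df[f"rmean_7_{window}"] = g.transform(
            lambda s, w=window: s.shift(7).rolling(w).mean())
    return df
</code></p>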
M5 Forecasting - Uncertainty
24th position solution - with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Global thoughts</h1> <p>I was aiming for gold and I'm extremely disappointed for not getting it. When seeing topics such <a href="https://www.kaggle.com/c/m5-forecasting-uncertainty/discussion/163368">as this one</a>, and people with basic solutions getting top ranks, I'm jealous. Because the private dataset was different, models with wider quantiles might have been advantaged. </p> <p>At the same time, when reading approaches like the <a href="https://www.kaggle.com/c/m5-forecasting-uncertainty/discussion/163368">1st Place Solution</a> I can only admit they are better, and avoid being a sore loser :) </p> <p>Anyway, since few people published their codebase and benchmark, and since I have a different approach than most people, I think it might be worth sharing it with you. I was expecting to publish a more detailed write-up with plots (publish my first paper maybe?) but since I didn't get gold I'm demotivated :) </p> <p>I also spent 1.5 months optimizing the pinball loss in the wrong direction (&lt; instead of &gt;=). It was not really clever.</p> <p>After writing this I just submitted my predictions with a 0.97 adjustment (didn't try other adjustments). It would have placed me in first place. I'm even more frustrated ^^ <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F562227%2F1e516821f059bd266f989f12ff8a9ec8%2Fcoef_adjustement_lb.png?generation=1593722614422025&amp;alt=media" alt=""></p> <h1>Code</h1> <p><a href="https://gitlab.com/JacquesPeeters/m5-forecasting">My gitlab repository. </a></p> <h1>Bias in the data</h1> <p>By plotting the number of active items over time, we can see that the global growth is not due to growth of Walmart's stores but probably due to bias in how the dataset was constructed. </p> <p>Therefore when forecasting an aggregated series, it was actually not the historical total sales of the series but the sales of the selected products active at that time. </p> <p>Interestingly, total sales are going up but average sales per product are going down. It can probably be explained by survivorship bias. Items sold over multiple years / a long time are only “survivors” because they are best sellers. Items which had lower performance in the early years (2012/13) were killed and replaced with newer products in our dataset. </p> <h1>Data augmentation</h1> <p>A date can be predicted from different horizons (between 1 and 28). Each time series is small data, therefore I expected significantly better results with data augmentation. </p> <p>Note : I performed this only for granularities not related to items.</p> <h1>Data downsampling</h1> <p>I assumed item granularities can be seen as a pointwise approach. For performance reasons I allowed myself to downsample these granularities by half. We prefer to sample randomly rather than removing part of the data (between two dates) because we assume that the unexplained variance is mostly due to seasonal factors rather than an item's behaviour.</p> <h1>Datasets</h1> <p>I often focus a lot on FE; during this competition I wanted to focus more on the benchmark part and software engineering first, then on FE. I still performed a lot of FE. 
<a href="https://medium.com/manomano-tech/a-framework-for-feature-engineering-and-machine-learning-pipelines-ddb53867a420">You can find a medium post where I describe my philosophy about horizontal scaling if you are interested in. </a></p> <p>Note : When hierarchical adjective is used, it means that features are “propagated” to lower granularities. Eg : event_trend of total_id can be added to category_id granularities. </p> <h2>Weather_[state_id, total_id]</h2> <p>I’m tried to unbiased past sales data with temperature from the date from we are predicting / average temperature from the past. Did not seems to help much. I was not using temperature from predicted dates and therefore complaint with rules. </p> <h2>Fe_sales_historical</h2> <p>Historical average sales of the serie on differents time windows [3, 7, 14, 28, 56, 112] Historical quantiles [the 9 quantiles to be predicted, target encoding in a way] on different time windows again [28, 56, 112] Historical average sales of the serie on the same day of week on windows [1, 2, 3, 4] Historical average prices multiples windows</p> <h2>&nbsp;Dayofyear_trend_hierarchical</h2> <p>Out-of-fold (each year is considered a fold) target encoding of the average trend of date_to_predict / day_of_year of date_from</p> <p>It was funny to see the trend of hobbies categories pre/post christmas. </p> <p>Predicting date t depends of : 1) on which days of the month historical sales were computed (including the first saturday of the month have a huge impact) 2) the horizon of the forecast (do we predict the first or last saturday of the month)</p> <p>It is a 2-D variable problem which is extremely difficult to deal for trees. This trick makes it 1-D.</p> <h2>Yearly_trend_day_hierarchical</h2> <p>Same as previous but based on [date_from_month, date_from_day, date_month, date_dayofweek, date_month_week] which is a bit different and expected to be a bit more robust. </p> <h2>Monthly_trend_day_hierarchical</h2> <p>Same but based on day of month</p> <p>It was sad to see the end of the month was way more extreme in Wisconsin &gt; Texas &gt; California and that it is correlated to GDP … So from data we know that more people in Wisconsin struggle at the end of the month. Number of days since last sales Didn’t help much</p> <h2>Event modeling</h2> <p>The fun part, greatly help local validation.</p> <p>Use a two-stage kind of causal inference approach. * Train a model with all features available. * Compute out-of-fold median trend of an event trend = pred/real</p> <p>Median was preferred over mean, due to the lack of data, i felt it was more “robust” but it is subjective choice</p> <p>However, there are multiple events on the same date. We need to rank events and assign one and only one to a given date. To do so the same approach is used : * Train a model with all features available * Find event with strongest absolute trend abs(1 - pred/real) * Assign strongest event to the date </p> <p>This trick allowed me to detect that Fathers’ day have a strongest impact than NBA finals therefore was prioritized</p> <p>Find below the more extreme events for serie total_id. 
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F562227%2Fe2ba11d7c77d70299f3957e99190cec0%2Fevent_trend.png?generation=1593724278664540&amp;alt=media" alt=""></p> <p>Edit : i introduced a bug while moving from mean to median this w-e, and it was not out of fold...</p> <h1>Training</h1> <h2>Hyper-parameter tuning</h2> <p>Due to the amount of models trained, it is difficult to optimize each approach. For a lightgbm approach a single parameter is over-written, namely the number of observations set-up to 100. Given we have a data-augmentation of 28, 5 years of data, we want to avoid overfitting events with single leafs. Eg : creating one leaf for each New-years year. </p> <h2>Normalizing</h2> <p>Once again as the item's granularities can be seen a pointwise approach, normalizing doesn’t make sense. On the opposite side, time series approach benefits from it. </p> <p>I normalized by sales_mean_[14, 28, 56, 112]. Only variables related to sales and the target were normalized. Variables related to price weren’t.</p> <h2>Train/validation split - Retraining on full data</h2> <p>Last XX days of data was used for validation. </p> <p>Training/validation was used to train and predict on out-of-fold_validation set. This set was used for stacking. Validation/out-of-fold_validation sets were randomly sampled and not in a timewise fashion. I might be over-fitting because of this random sampling.</p> <p>Retraining on full data was natural due to the nature of the competition. </p> <p>The train/validation training was used solely for validation purposes, out-of-fold stacking, analysis of features importance, and finding the right number iterations of LightGbm models.</p> <h2>Benchmark of approaches</h2> <p>Due to the fact that multiple solutions were possible, my initial goal was to set up a benchmark framework for testing the maximum of them. What i tried : * LightGbm Quantile regression * Tensorflow quantile regression * Regression_l2-to-quantile (regression l2 then quantile regression on top of predictions) * Tweedie-to-quantile regression (tweedie regression then quantile regression on top of predictions) * Ngboost-distribution (compute quantile based thanks to prediction + standard deviation) but it was way too slow and not good</p> <p>Each approach performs differently on granualirities.</p> <p>Quantiles from Regression_l2-to-quantile were learned on validation set, and not from training set nor out of fold. Therefore I might be overfitting there :( <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F562227%2F5d896ef6ab84bf101b2d38a3835b2c41%2Fbecnhmark_m5.png?generation=1593722518091417&amp;alt=media" alt=""></p> <p>Overall performance after stacking. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F562227%2Fd5333abc5f3c196bf66e639f3becb87f%2Foveral_perf.png?generation=1593722491787991&amp;alt=media" alt=""></p> <p>There are huge improvements, but I think a bit over-estimated due to having a random validation/out_of_fold validation split as already explained.</p> <p>See you next time! </p>
Tweet Sentiment Extraction
Worst place writeup (15th solution) with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Wow, never thought I'd finish one rank short of gold. I was 49th on the morning of the last day of the competition and had stagnated for weeks on the LB. Fortunately, I decided to train my best models on unprocessed data the night before, and first fold scores showed promise when I went to bed. Still, getting to successful submissions on Kaggle took some frantic work during the last day. And scoring for my best submission ended literally during the last minute of the competition. I documented the setbacks and issues I had on the last day elsewhere, so I won't repeat them here. Let's say I consider myself lucky with the end result.</p> <p>I'll describe my solution in this post, then what I tried that did not help much.</p> <p>Before diving into details, I want to thank Kaggle for this challenging competition. It mixes a rather classical NLP competition with a puzzle component. The puzzle (aka magic) comes from some hidden preprocessing of text before labeling. Without this the competition would probably have been a bit boring.</p> <p><strong>Don't overfit</strong></p> <p>Early experiments clearly convinced me that overfitting would be the main issue here. The danger showed via the high variability of cross validation scores when you change how folds are defined. <a href="/cdeotte">@cdeotte</a> had clearly documented this in a post here as well. One way out is to augment data (via additional external data, or via pseudo labeling, or using some data augmentation techniques). None of the ones I tried really helped (see below). Another, complementary option, is to bag many runs. By the law of large numbers the variance of scores decreases. I therefore decided to settle on training models using 7 different sets of 5 folds. Folds are stratified by sentiment and by the proportion of cases where the selected text equals the full text. From that I compute 2 metrics: the average of the 7 CV scores, and the CV score when I average all predictions. In the latter case, each sample prediction is the blend of 7 out-of-fold predictions. An improvement of these two metrics translates to a LB improvement with a noise of 0.002 at most. I only submitted if there was a significant improvement in at least one of the metrics, and only kept the modification if LB was improved too.</p> <p><strong>Efficient code</strong></p> <p>To make this practical I optimized the code. Training roberta-base on 4/5 of the train data takes less than 2 minutes per epoch on a V100 GPU. The key was to adapt the batch data size. Basically, I truncate the input size to the longest unpadded input in the batch. My inferencing script shows how to do it: <a href="https://www.kaggle.com/cpmpml/ensemble?scriptVersionId=36550365">https://www.kaggle.com/cpmpml/ensemble?scriptVersionId=36550365</a></p> <p>Inferencing is also very efficient, it takes about 14 seconds to predict the roberta-base model on public test data using the Kaggle P100 GPU.</p> <p>The issue though is that CV and LB dropped compared to fixed size. I came back to the same level of performance by masking all logits for padding. This made predictions insensitive to padding length.</p> <p>A key component was the use of the huggingface tokenizers library. 
I started from <a href="/abhishek">@abhishek</a>'s code for that.</p> <p><strong>The model</strong></p> <p>I ended up with a simple CNN head on top of roberta-base and roberta-large models. There is one CNN head for start logits and one CNN head for end logits. The head for start logits is a convolution on the last hidden state of a token and the last hidden state of the previous token in the input sequence. The goal is to detect the transition from a previous token not linked to the sentiment to a token linked to the sentiment. Similarly, end logits are convolutions between a token and its following token.</p> <p>I experimented with more complex heads but they led to overfitting.</p> <p>I also tried many huggingface models, and in the end only kept roberta-base and roberta-large-squad2. </p> <p><strong>Training settings</strong></p> <p>I tried various optimizers and schedulers but ended up keeping <a href="/abhishek">@abhishek</a>'s settings except that I ran it for 5 epochs. Adding 2 epochs gave me a 0.002 LB and CV uplift. In short, the settings are: the huggingface AdamW optimizer with a learning rate of 3e-5, and a linear schedule without warmup. I used early stopping to only keep the best checkpoint.</p> <p>Roberta-large overfits easily on this dataset. I had runs with a CV score of 0.1 or below. I tried various regularizations (see below) but in the end used a very simple approach: if the jaccard score of the first epoch is less than 0.695 then restart with a different pytorch seed on the same fold. This led to a CV average of 0.7195, a blend CV of 0.726, and a public LB of 0.726, on the last day's evening...</p> <p>I trained roberta-base with my 7x5 folds, but also trained it on full data with 7 different pytorch seeds. For these I kept the checkpoints of epochs 3, 4, and 5, given the best checkpoints when running cross validation were always in these epochs. Averaging their weights as in SWA was on my todo list; I think it is a bit better than averaging the predictions.</p> <p>I used a smoothed cross entropy loss. It seemed a bit better and more stable than cross entropy.</p> <p><strong>Data prep</strong></p> <p>I should have spent more time experimenting with variants earlier. I stuck to <a href="/abhishek">@abhishek</a>'s join/split preprocessing till the last day. This looked like a great idea but it prevented models from using patterns of repeated spaces. As we all know now, this was the basis of the "magic" post processing.</p> <p>When I switched to raw data, CV and LB improved by 0.007...</p> <p>The only preprocessing I have in the end is to replace '�' with a quote, and to replace tokens for repeated punctuation with repeated tokens for single punctuation. You can see it in <a href="https://www.kaggle.com/cpmpml/ensemble?scriptVersionId=36550365">the inferencing script</a> as well. </p> <p>For the input I followed the SQUAD logic: concatenate a question with its answer. The answer is the input text. The question is of the form "What X sentiment?" where X is the input sentiment: "positive", "negative", or "neutral". This was better than just having X as in public notebooks. I think, but I can't prove it, that using a question made models fine-tuned on SQUAD2 a bit more effective.</p> <p><strong>Ensembling</strong></p> <p>Predictions are ensembled very simply: logits for a given token are averaged, then I look for the best sum of start logits and end logits with the constraint that the start is before the end.</p> <p>I tried using softmax before addition; the result is very close. 
I don't know which is best, therefore I stuck to the one that runs faster.</p> <p><strong>Uploading</strong></p> <p>I trained my models locally, hence had to upload trained weights to Kaggle. I saved model weights in half precision, i.e. saved after <code>model.half()</code>. It halves upload time. Predictions on test data were the same as when using the original weights, except for one row only. I also used public datasets for my weights so that I don't hit the 20GB overall limit. Sharing weights is not risky as long as other Kaggle users don't have your model implementation. Edit: <a href="/christofhenkel">@christofhenkel</a> pointed out that weights can reveal quite a lot about the model architecture. I'll be more careful next time.</p> <p><strong>Results</strong></p> <p>Last day submission results:</p> <p>0.721 public, 0.725 private: 7x5 folds roberta-base</p> <p>0.724 public, 0.724 private: 7x5 folds roberta-base + 7x3 full train roberta-base</p> <p>0.726 public, ?? private: 7x5 folds roberta-large-squad2 (offline prediction)</p> <p>0.723 public, 0.725 private: 4x5 folds roberta-large-squad2 </p> <p>0.727 public, 0.726 private: 4x5 folds roberta-large-squad2 + 7x3 full train roberta-base (*)</p> <p>I also have some late submissions to see what I could get with more uploaded weights. </p> <p>0.727 public, 0.726 private: 4x5 folds roberta-large-squad2 + 7x3 full train roberta-base + 7x5 folds roberta-base + (*)</p> <p>0.725 public, 0.727 private: 6x5 folds roberta-large-squad2 + 7x3 full train roberta-base (*)</p> <p>The ones with (*) are obtained using my <a href="https://www.kaggle.com/cpmpml/ensemble?scriptVersionId=36550365">public scoring kernel</a>.</p> <p><strong>Failures</strong></p> <p>Here is a list of what I tried that did not improve both CV and LB. Note that these were tested only on preprocessed data. Maybe some would work on raw data. Also, some of the items below did help with earlier, weaker, models.</p> <ul> <li>other models than roberta-base and roberta-large-squad2. This includes bert, bert twitter, electra, xlm-roberta-base, roberta-base-squad2, deep pavlov robert. The issue is I had to write code to blend logits at the char level, given these models don't use the same tokenizer.</li> <li>auxiliary targets. I tried sentiment, and full. The latter is 1 when the selected text equals the text. I even tried to use the full logit as an attention to modulate start and end logits.</li> <li>Pseudo labelling: on valid data, on public test data, on all test data. It improved when my LB was 0.717 or below, but didn't improve anymore with better models.</li> <li>hard sample mining.</li> <li>token or sentence mixup</li> <li>token swaps</li> <li>token masking</li> <li>mixout. Actually this worked: it was the only way I could get roberta-large to never overfit. But restarting is 0.002 better.</li> <li>loss clipping</li> <li>more complex heads</li> <li>adversarial embeddings</li> <li>noising embeddings</li> <li>fixing the target shift in input data (the opposite of the magic post processing)</li> <li>my version of span loss</li> <li>using attention layers. The idea was to see if tokens that attended the sentiment token the most would not be the selected ones. This is a bit similar to the unsupervised learning approach described by <a href="/cdeotte">@cdeotte</a> </li> <li>using translated text back and forth. 
I used the marianmt models from huggingface for translation.</li> <li>focal loss, jaccard loss, binary cross entropy</li> </ul> <p><strong>Takeaway</strong></p> <p>I said it many times in the past but here it was literally true: one can improve till the very last minute. I failed a lot, but I learned a lot. And in the end enough things worked to get a result I am happy with.</p> <p>Cherry on the cake: I missed gold here but moved back to the top discussion rank :D</p>
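<p>The simple logit-ensembling with the start-before-end constraint described above can be sketched as follows. This is a simplified reconstruction (the public scoring kernel linked in the post is the authoritative version), with names of my own choosing:</p> <p><code>
import numpy as np

def pick_span(start_logits_list, end_logits_list):
    """Average start/end logits over models, then pick the (start, end) pair
    with the largest summed score under the constraint start &lt;= end."""
    start = np.mean(start_logits_list, axis=0)   # (seq_len,)
    end = np.mean(end_logits_list, axis=0)
    best, best_score = (0, 0), -np.inf
    for s in range(len(start)):
        for e in range(s, len(end)):
            if start[s] + end[e] &gt; best_score:
                best, best_score = (s, e), start[s] + end[e]
    return best
</code></p>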
M5 Forecasting - Accuracy
Top 4% Solution (221st) - Four Insights
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Private Score: .63026</p> <p><a href="https://github.com/npa02012/kaggle_walmart_sales">Github Repo</a></p> <p>Written in R using primarily data.table and LightGBM.</p> <h2>Insight 1</h2> <p>I first want to give credit to the following notebook: <a href="https://www.kaggle.com/mayer79/m5-forecast-attack-of-the-data-table">https://www.kaggle.com/mayer79/m5-forecast-attack-of-the-data-table</a></p> <p>I used a similar FE set, however, my strategy for calculating the features was quite different.</p> <p>Namely, I kept my data wide and used column names to calculate features. For me this approach is much more intuitive and easier to read. Take for example the code to calculate rolling means: <code> # Get lagged rolling means <br> lags = c(7, 28) windows = c(7, 28) for(l in lags) { for(w in windows) { cols = paste0('d_', (pred_day_num - l - w + 1):(pred_day_num - l)) col_name = paste0('rmean_', l, '_', w) X[, (col_name) := d_sales_train[, rowMeans(.SD), .SDcols = cols]] } } </code> Suppose we are predicting Day 1914. For the lag-7/window-7 feature, we take the row-means of the ('d_1907', 'd_1906', ..., 'd_1901') columns. </p> <p>A total of 21 features were used. Price features were not used (lack of time to implement). My feature engineering functions can be found in <a href="https://github.com/npa02012/kaggle_walmart_sales/blob/master/scripts/fe_functions.R">/scripts/fe_functions.R</a></p> <h2>Insight 2</h2> <p>Keeping the day numbers (i.e. 1 --&gt; 1969) as a predictor variable helped my public score a lot. Also, a very low learning_rate (e.g. .01) actually reduced my score quite significantly.</p> <p>Both of these insights puzzled me - I did not have the time to look more closely into them.</p> <h2>Insight 3</h2> <p>I made three models. The only difference between the models is the random selection of validation rows for training. I held out days 1914 --&gt; 1941 for a final validation set in all cases. The public scores for the 3 models were .59, .60, and .61. The final public score was .59.</p> <p>I do believe creating 3 very similar models helped boost my score. In theory, I am centering around the mean of all possible predictions (across the different validation sets) using that model structure.</p> <h2>Insight 4</h2> <p>I set aside a significant amount of time to ensure my final submission was sound. This is mostly why I held out Days 1914 --&gt; 1941 from the training set - the slight decrease in information was worth the increased certainty that my evaluation predictions were as intended.</p> <p>I also did some simple plotting and distribution checks to make sure everything looked normal before making my final submission.</p>
COVID19 Global Forecasting (Week 4)
Week 1 Winners
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The Week 1 competition concluded on Thursday and congratulations are in order for OsciiArt, PDD, Roger Li, and other top finishers. OsciiArt in particular put together a very nice notebook showing how to use lightgbm effectively with both provided and external data, and it has become a great benchmark for all four weeks. In general I recall that first week being quite hectic and it is impressive to look back and see the quality of models put together in such a short time frame.</p>
COVID19 Global Forecasting (Week 5)
Week 1 Winners
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The Week 1 competition concluded on Thursday and congratulations are in order for OsciiArt, PDD, Roger Li, and other top finishers. OsciiArt in particular put together a very nice notebook showing how to use lightgbm effectively with both provided and external data, and it has become a great benchmark for all four weeks. In general I recall that first week being quite hectic and it is impressive to look back and see the quality of models put together in such a short time frame.</p>
COVID19 Global Forecasting (Week 4)
PDD Writeup
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to my teammates @dott1718 and @christofhenkel for collaborating throughout the four competitions. As even the last week competition is already close to half-time in terms of scoring, we want to give a short overview over our solutions. We are really happy with our results across the four weeks, specifically as more optimistic models (predicting fewer cases and fatalities) performed better and our most pessimistic models in week 2 performed the worst.</p> <p><strong>Week 1</strong></p> <p>Everyone went into this competition with a lot of fear and uncertainty about covid-19, and I think this is also reflected in the models that were produced. Very little data was available and most countries were in early periods of severe exponential growth. We quite quickly figured that utilizing any ML model did not seem appropriate here due to the lack of historical data. We thus crafted the basis for our main model that we utilized throughout all four weeks. For predicting confirmed cases, the model uses a constant growth rate that is reduced by a decay that is accelerating over time. For fatalities, we multiply predicted cases of a given time lag with a fatality rate for a given country, and also add a constant growth.</p> <p>This model allowed us some flexibility also in the choice of (hyper-)parameters. We ended up deciding on one optimistic and one pessimistic submission in week 1, where the optimistic one has lower growth rates and higher acceleration of the decay of this growth rate and the pessimistic one has it the other way around. For countries where we already have "sufficient" data above a certain threshold, we learn the growth over a certain time period, and for others we employ default growth rates.</p> <p>Optimistic sub: <a href="https://www.kaggle.com/dott1718/cv19-by-growth-rate-v5-09?scriptVersionId=30833479">https://www.kaggle.com/dott1718/cv19-by-growth-rate-v5-09?scriptVersionId=30833479</a> Pessimistic sub: <a href="https://www.kaggle.com/dott1718/cv19-by-growth-rate-v5-03-per-5?scriptVersionId=30833633">https://www.kaggle.com/dott1718/cv19-by-growth-rate-v5-03-per-5?scriptVersionId=30833633</a></p> <p>The optimistic sub predicted globally 2.6 mio cases after 30 days and 120k fatalities after 30 days as of 22nd of April. The pessimistic one was way more doomsday and predicted 24 mio cases and 430k fatalities. The optimistic one ended 2nd on the private LB and scored 1.07716, while the pessimistic one still scored 1.29940 and would have ranked 5th. This also gives you a good idea of the problems of this metric, because even if you are off that far globally from the actual numbers, if you get some low-case countries right, the metric improves drastically.</p> <p>Now looking back, I have to say that our global estimation here was really, really precise. I just checked worldometer data, and on the 23rd of April the actual confirmed cases were <strong>2,722,000</strong> and what we predicted was <strong>2,650,252</strong>. I would not have thought that we could be so precise there, even if individual countries can be off. 
</p> <p><strong>Week 2</strong></p> <p>Even more exponential short-term growth was observed in the preceding week. We mostly re-utilized our models from week 1, adapted them a bit, and thankfully overfitted them on short-term growth patterns, which made them quite pessimistic in retrospect. The second submission blends a power-law fit with exponential cutoff into the model, and it is actually our better sub here. </p> <p>Sub 1: <a href="https://www.kaggle.com/dott1718/cv19w2-2-sub?scriptVersionId=31270684">https://www.kaggle.com/dott1718/cv19w2-2-sub?scriptVersionId=31270684</a> Sub 2: <a href="https://www.kaggle.com/philippsinger/cv19w2-pl4-sub?scriptVersionId=31271930">https://www.kaggle.com/philippsinger/cv19w2-pl4-sub?scriptVersionId=31271930</a></p> <p><strong>Week 3</strong></p> <p>We saw that machine learning solutions slowly started to shine, specifically for short-term predictions, and decided on the one hand to further tune our model and sub it, and as a second submission to blend some of the public ML solutions in. In detail, we used subs from @cpmpml (+team), @gaborfodor and @osciiart - so thanks to them! We heavily overweighted our model and the one from oscii in the final blend, so the weights were 8-8-1-1.</p> <p>Single model sub: <a href="https://www.kaggle.com/philippsinger/cv19w3-2-v2-play-2-v3fix-sub-last6dayopt?scriptVersionId=31654736">https://www.kaggle.com/philippsinger/cv19w3-2-v2-play-2-v3fix-sub-last6dayopt?scriptVersionId=31654736</a> Blend sub: <a href="https://www.kaggle.com/christofhenkel/cv19w3-v3fix-cpmp-oscii-belug-full-8118?scriptVersionId=31655753">https://www.kaggle.com/christofhenkel/cv19w3-v3fix-cpmp-oscii-belug-full-8118?scriptVersionId=31655753</a></p> <p>Our blend sub predicts 3,668,603 confirmed cases for the 7th of May and 295,911 fatalities, which is ~10 days in the future. I think globally this solution will also come quite close to reality. We are also currently ranked 3rd in the competition and trending upwards, even though the gap to 2nd might be too large.</p> <p><strong>Week 4</strong></p> <p>Again, we decided to improve our model and blend it with other solutions. The big change we made to our model was that we no longer rely on predicted confirmed cases for fatality predictions, but rather also take the individual fatality curves for future predictions. We also blend our model now with solutions from @osciiart, @rohanrao and @david1013. Our two subs this time have different weights for the blends, usually overweighting our model. We also fully rely on our model predictions for Chinese countries.</p> <p>Sub 1: <a href="https://www.kaggle.com/christofhenkel/cv19w4-full-pdd-oscii-david-vopa-sub1?scriptVersionId=32072839">https://www.kaggle.com/christofhenkel/cv19w4-full-pdd-oscii-david-vopa-sub1?scriptVersionId=32072839</a> Sub 2: <a href="https://www.kaggle.com/christofhenkel/cv19w4-full-pdd-oscii-david-vopa-sub2?scriptVersionId=32072928">https://www.kaggle.com/christofhenkel/cv19w4-full-pdd-oscii-david-vopa-sub2?scriptVersionId=32072928</a></p> <p>Sub 1 predicts globally on the 14th of May 3,898,447 cases and 278,848 fatalities, while sub 2 predicts 3,802,472 and 224,623 respectively. In general it seems that public subs tend to agree way more on cases, and fatalities seem to drive most of the score differences. We are currently positioned 5th, but trending down a bit. 
It will be interesting to see how the scores evolve.</p> <p><strong>Learnings</strong></p> <p>I am writing this write-up while hearing the news that Austria (the country I live in) is removing many of the restrictions in place from May 1st onward. We have had one of the best curve flattenings globally, even though we had huge exponential growth in the beginning, and thankfully we quite overpredicted Austria from the start. The country implemented strong restrictions very early on and I believe they have helped a ton. Active cases (from worldometers) look like this for Austria:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F37166%2F198aec01a918b144c789672f5e97d980%2FSelection_041.png?generation=1588072057446301&amp;alt=media" alt=""></p> <p>I really hope that this trend will not reverse again; many other countries also look promising and we can only hope for a bright future.</p> <p>Predicting these pandemics is very hard, mostly due to lack of data, very large uncertainty, differences in reporting, differences in public behavior, and so forth. Still, I believe the models produced in these competitions can be helpful in the future, even if it is only due to our better understanding of how hard it is to predict them. A lot of personal judgement flows into these models and predictions, making them also very subjective; thus combining multiple models and opinions also seems worthwhile to follow up on in the future.</p>
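<p>The week-1 case model (a constant growth rate damped by a decay that accelerates over time, with fatalities driven by lagged cases times a fatality rate) can be sketched as a toy Python function. All parameter values below are placeholders, not the team's tuned settings, and the exact functional form is my reading of the description above:</p> <p><code>
def forecast(cases_today, deaths_today, days=30,
             growth=0.10, decay=0.01, decay_accel=1.05,
             fatality_rate=0.02, lag=8, death_growth=0.01):
    """Toy growth-rate model: growth is reduced each day by a decay that itself
    accelerates; fatalities follow lagged cases plus a small constant growth term."""
    cases, deaths = [cases_today], [deaths_today]
    g, d = growth, decay
    for t in range(1, days + 1):
        g = max(g - d, 0.0)       # damp the growth rate
        d *= decay_accel          # ... with an accelerating decay
        cases.append(cases[-1] * (1.0 + g))
        lagged = cases[max(t - lag, 0)]
        deaths.append(lagged * fatality_rate + deaths[-1] * death_growth)
    return cases, deaths
</code></p>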
COVID19 Global Forecasting (Week 5)
PDD Writeup
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to my teammates @dott1718 and @christofhenkel for collaborating throughout the four competitions. As even the last week's competition is already close to half-time in terms of scoring, we want to give a short overview of our solutions. We are really happy with our results across the four weeks, specifically as more optimistic models (predicting fewer cases and fatalities) performed better and our most pessimistic models in week 2 performed the worst.</p> <p><strong>Week 1</strong></p> <p>Everyone went into this competition with a lot of fear and uncertainty about covid-19 and I think this is also reflected in the models that were produced. Very little data was available and most countries were in earlier periods of severe exponential growth. We quite quickly figured that utilizing any ML model does not seem appropriate here due to lack of historical data. We thus crafted the basis for our main model that we utilized throughout all four weeks. For predicting confirmed cases, the model uses a constant growth rate that is reduced by a decay that is accelerating over time. For fatalities, we multiply predicted cases of a given time lag with a fatality rate for a given country, and also add a constant growth.</p> <p>This model allowed us some flexibility also in the choice of (hyper-)parameters. We ended up deciding on one optimistic and one pessimistic submission in week 1, where the optimistic one has lower growth rates and higher acceleration of the decay of this growth rate, and the pessimistic one has it the other way around. For countries where we already have "sufficient" data above a certain threshold, we learn the growth over a certain time period, and for others we employ default growth rates.</p> <p>Optimistic sub: <a href="https://www.kaggle.com/dott1718/cv19-by-growth-rate-v5-09?scriptVersionId=30833479">https://www.kaggle.com/dott1718/cv19-by-growth-rate-v5-09?scriptVersionId=30833479</a> Pessimistic sub: <a href="https://www.kaggle.com/dott1718/cv19-by-growth-rate-v5-03-per-5?scriptVersionId=30833633">https://www.kaggle.com/dott1718/cv19-by-growth-rate-v5-03-per-5?scriptVersionId=30833633</a></p> <p>The optimistic sub predicted globally 2.6 million cases and 120k fatalities after 30 days, as of the 22nd of April. The pessimistic one was way more doomsday and predicted 24 million cases and 430k fatalities. The optimistic one ended 2nd on the private LB and scored 1.07716, while the pessimistic one still scored 1.29940 and would have ranked 5th. This also gives you a good idea of the problems of this metric, because even if you are off that far globally from the actual numbers, if you get some low-case countries right, the metric improves drastically.</p> <p>Now looking back, I have to say that our global estimation here was really, really precise. I just checked worldometer data, and on the 23rd of April the actual confirmed cases were <strong>2,722,000</strong> and what we predicted was <strong>2,650,252</strong>. I would not have thought that we could be so precise there, even if individual countries can be off. 
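</p> <p>To make the week-1 heuristic a bit more tangible, here is a rough sketch of the idea (a toy re-implementation for illustration only, not our actual code; every parameter value below is made up, and the extra constant growth on fatalities mentioned above is omitted):</p> <pre><code>import numpy as np

def project(last_cases, growth, decay_accel, cfr, lag=8, horizon=30):
    # Constant daily growth whose decay accelerates over time; fatalities are a
    # case-fatality rate applied to the cases predicted `lag` days earlier.
    cases = [last_cases]
    for t in range(horizon):
        g = max(growth - decay_accel * t ** 2, 0.0)   # decay accelerates with t
        cases.append(cases[-1] * (1.0 + g))
    cases = np.array(cases)
    lagged = np.concatenate([np.full(lag, cases[0]), cases])[: len(cases)]
    return cases[1:], cfr * lagged[1:]

pred_cases, pred_deaths = project(last_cases=10_000, growth=0.15,
                                  decay_accel=0.0004, cfr=0.05)
</code></pre> <p>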
</p> <p><strong>Week 2</strong></p> <p>Even more exponential short-term growth was observed in the pre-ceding week. We mostly re-utilized our models from week 1 and adapted them a bit and overfitted them thankfully on short-term growth patterns which made them quite pessimistic in retrospective. The second submissions blends a power law fit with exponential cutoff to the model and it is actually our better sub here. </p> <p>Sub 1: <a href="https://www.kaggle.com/dott1718/cv19w2-2-sub?scriptVersionId=31270684">https://www.kaggle.com/dott1718/cv19w2-2-sub?scriptVersionId=31270684</a> Sub 2: <a href="https://www.kaggle.com/philippsinger/cv19w2-pl4-sub?scriptVersionId=31271930">https://www.kaggle.com/philippsinger/cv19w2-pl4-sub?scriptVersionId=31271930</a></p> <p><strong>Week 3</strong></p> <p>We saw that machine learning solutions slowly start to shine, specifically for short-term predictions and decided to one the one hand further tune our model and sub it, and as a second submission blend some of the public ML solutions in. In detail, we used subs from @cpmpml (+team), @gaborfodor and @osciiart - so thanks to them! We heavily overweighted our model and the one from oscii in the final blend, so the weights were 8-8-1-1.</p> <p>Single model sub: <a href="https://www.kaggle.com/philippsinger/cv19w3-2-v2-play-2-v3fix-sub-last6dayopt?scriptVersionId=31654736">https://www.kaggle.com/philippsinger/cv19w3-2-v2-play-2-v3fix-sub-last6dayopt?scriptVersionId=31654736</a> Blend sub: <a href="https://www.kaggle.com/christofhenkel/cv19w3-v3fix-cpmp-oscii-belug-full-8118?scriptVersionId=31655753">https://www.kaggle.com/christofhenkel/cv19w3-v3fix-cpmp-oscii-belug-full-8118?scriptVersionId=31655753</a></p> <p>Our blend sub predicts 3,668,603 confirmed cases for 7th of May and 295,911 fatalities which is ~10 days in the future. I think globally this solution will also come quite close to reality. We are also currently ranked 3rd in the competition with trending upwards, even though the gap to 2nd might be too large.</p> <p><strong>Week 4</strong></p> <p>Again, we decided to improve our model and blend it with other solutions. The big change we did to our model was that we do not rely for fatality predictions anylonger on predictions on confirmed cases but rather also take the individual fatality curves for future predictions. We also blend our model now with solutions from @osciiart, @rohanrao and @david1013. Our two subs this time have different weights for the blends, usually overweighting our model. We also fully rely on our model predictions for Chinese countries.</p> <p>Sub 1: <a href="https://www.kaggle.com/christofhenkel/cv19w4-full-pdd-oscii-david-vopa-sub1?scriptVersionId=32072839">https://www.kaggle.com/christofhenkel/cv19w4-full-pdd-oscii-david-vopa-sub1?scriptVersionId=32072839</a> Sub 2: <a href="https://www.kaggle.com/christofhenkel/cv19w4-full-pdd-oscii-david-vopa-sub2?scriptVersionId=32072928">https://www.kaggle.com/christofhenkel/cv19w4-full-pdd-oscii-david-vopa-sub2?scriptVersionId=32072928</a></p> <p>Sub 1 predicts globally on 14th of May 3,898,447 cases and 278,848 fatalities while sub 2 predicts 3,802,472 and 224,623 respectively. In general it seems that public subs tend to agree way more on cases and fatalities seems to drive most of the score differences. We are currently positioned 5th, but trending down a bit. 
Will be interesting to see how scores evolve.</p> <p><strong>Learnings</strong></p> <p>I am writing this write-up while hearing the news that Austria (the country I live in) removes many of the restrictions in place from May 1st onward. We have had one of the best curve flattenings globally, even though having huge exponential growth in the beginning and we quite overpredicted Austria from the start thankfully. The country implemented strong restrictions very early on and I believe they have helped a ton. This is something that we can learn from the most I feel like. Active cases (from worldemeters) looks like that for Austria:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F37166%2F198aec01a918b144c789672f5e97d980%2FSelection_041.png?generation=1588072057446301&amp;alt=media" alt=""></p> <p>I really hope that this trend will not reverse again, many other countries also look that promising and we can only hope for a bright future.</p> <p>Predicting these pandemics is very hard, mostly due to lack of data, very large uncertainty, difference in reporting, differences in public behavior, and so forth. Still, I believe the models produced in these competitions can be helpful in the future, even if it is only due to our better understanding how hard it is to predict it. A lot of personal judgement flows into these models and predictions, making them also very subjective and thus combining multiple models and opinions seems to be also worth-wile to follow up on in future.</p>
COVID19 Global Forecasting (Week 4)
Some ML, A lot of judgement and luck
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Intro</h1> <p>The title may not seem very inspiring when you try to solve something so serious, but it is what worked semi-OK (for the first 2 weeks at least) in this competition series (given a large horizon). I think predictions of that length (e.g. 30 days ahead) require a lot of guessing and luck if you are not an expert epidemiologist (and quite possibly even if you are one too).</p> <p>There are so many things which are unknown when you try to predict that far ahead, like:</p> <p>Will the vaccine come out? (OK, this one might not really be the case here - I think we are still early for that.) Or an efficient drug? Are people really following the social distancing rules, or are they going to? What rules will the governments put in place? How strict will the governments be at enforcing them? How will the infection rate differ based on how people socialise in different countries/areas? Etc.</p> <p>Needless to say, the data is not perfect.</p> <p>Short-term predictions (maybe within 3-5 days or so) could be modelled reasonably well. I hope you don't mind me <a href="https://www.h2o.ai/blog/modelling-currently-infected-cases-of-covid-19-using-h2o-driverless-ai/">sharing a blog</a> that summarises some ideas on how this could be done for short horizons on the same underlying data for active cases - many of the tricks mentioned there were applied here as well, particularly in the <strong>pre-processing of the data</strong> and <strong>post-processing</strong> of the predictions. </p> <p>I still believe that an ML practitioner in tandem with an experienced epidemiologist can work better as a combo than any individual on her/his own, particularly if you try to make predictions at scale for many countries/counties/cities at the same time. </p> <p>The inference kernel (with commented code for training) <a href="https://www.kaggle.com/kazanova/script-with-commented-train-counts-for-zeros?scriptVersionId=32084418">for week 4 is here</a></p> <p>I have the same version with <a href="https://www.kaggle.com/kazanova/fork-of-script-with-commented-train-counts-for-zer?scriptVersionId=32113592">training included here</a>:</p> <p>(It scores a tiny bit differently, 0.02549 versus 0.02551, than my uploaded models although I have used the same seeds - maybe because I run on a Windows laptop - or a different lightgbm version - not sure.) </p> <h1>The ML part</h1> <p>The Power Growth model has worked quite well to map epidemics in the past and I think many people have made good implementations here. </p> <p>I tried a simple implementation myself - the problem was that adjusting the growth was challenging without expert knowledge (at least for me) - so I tried to model the growth specifically. Instead of predicting counts, or log counts, what I tried to predict was:</p> <p>"With what value do I need to multiply the previous cumulative value every day?". For weeks 2 and 3, I built 60 separate models: one model for each +x day ahead, for confirmed cases and for fatalities. 
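</p> <p>A minimal sketch of this "predict the multiplier" setup is given below (an illustration under assumptions: a stripped-down feature set and toy LightGBM parameters; the real models used more features, e.g. how many days ago a given cumulative count was reached):</p> <pre><code>import pandas as pd
import lightgbm as lgb

# df: rows of (geo, date, confirmed) with cumulative counts, sorted by geo and date.
def add_rate_features(df, n_lags=10, horizons=range(1, 31)):
    df = df.sort_values(["geo", "date"]).copy()
    df["rate"] = df.groupby("geo")["confirmed"].transform(lambda s: s / s.shift(1))
    df["rate3"] = df.groupby("geo")["rate"].transform(lambda s: s.rolling(3).mean())
    for l in range(1, n_lags + 1):                       # lagged smoothed rates
        df[f"rate3_lag{l}"] = df.groupby("geo")["rate3"].shift(l)
    for h in horizons:                                   # target: mean rate over days +h..+h+2
        df[f"y_{h}"] = df.groupby("geo")["rate3"].shift(-(h + 2))
    return df

def train_one_model_per_horizon(df, horizons=range(1, 31)):
    feats = [c for c in df.columns if c.startswith("rate3_lag")] + ["confirmed"]
    models = {}
    for h in horizons:
        d = df.dropna(subset=feats + [f"y_{h}"])
        models[h] = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05)
        models[h].fit(d[feats], d[f"y_{h}"])
    return models
</code></pre> <p>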
</p> <p>I used <a href="https://lightgbm.readthedocs.io/en/latest/">lightgbm</a> to build the models.</p> <p>The features were:</p> <ul> <li>How many days ago the cumulative count reached X, where X for confirmed cases was <code>[1,5,10,20,50,100,250,500,1000]</code> and for fatalities <code>[1,2,5,10,20,50]</code></li> <li>up to 10 lags of 3-size-window average rates for both confirmed and fatalities</li> <li>the actual confirmed and fatalities counts for day 0.</li> </ul> <p>The target is the average 3-day rate for each one of the +x days. So instead of modelling the rate <code>future1/current</code>, I took the average of <code>future1/current</code>, <code>future2/future1</code> and <code>future3/future2</code> when predicting the target for day +1. That is because the raw rate has many spikes, so the average smooths the target and (I think) controls for overfitting. </p> <p>For week 2, I only validated through observation - no CV. For week 3 I validated based on the week 2 results, and for week 4 on both weeks 2 and 3. A <strong>fundamental FLAW</strong> of both the week 2 and week 3 models is that they ignore the 0s. The model requires an initial value of &gt;=1 to apply the growth to, so I expect the models to do much worse down the stretch (unless I am lucky - both for me and, most importantly, for the world - and there are no new deaths). The rules I used to initialise values for the zero cases were poor (to say the least). </p> <p>In week 4, I added a model that tries to predict the 0 cases by modelling the difference in log counts between days. The model setup and features are the same; the only difference is that instead of only rates, the model also uses differences of log counts for the same windows. In that model I also added USA county-level data - I saw a tiny improvement from adding that (probably because it is heavily weighted towards 0, as the cumulative values are generally smaller). Other than that, I have found no other external resource useful (for longer-term predictions). I tried adding geographical data and recoveries with no success. </p> <p>Sadly these models are not very powerful - but maybe not too bad either. I would have been much further down without <strong>post-processing</strong> and I would be drifting down very heavily. </p> <p>The problem with this approach is that it completely ignores the metric, and a small error in the rate prediction can make a series go berserk. I saw quite a few series that, for example, started with 20 confirmed cases and my model was predicting they would reach more than 30k (which was obviously a bit too dangerous to accept). Another problem is that the models are heavily influenced by what happened in China - it is the area that has the most future points available (to compute the target). In China the treatment of the outbreak was handled very differently than in other areas, and generally the models expected a much quicker plateauing (and different initial growth rates) than what is actually happening in many other parts of the world. This is where <strong>post-processing</strong> helped.</p> <h1>Post Processing</h1> <p><strong>Before I elaborate on this - I heavily applaud and congratulate people that have modelled this without much judgement or post-processing. They have definitely added much more than this solution ever will</strong>. I found it very challenging to get a highly competitive score without the post-processing (for the first 2 weeks at least). </p> <p>Back to the post-processing, this is the most important part of my solution. 
The ML model gave a curve to use as a basis and to make decisions on. The model was not too bad on cases with stable growth rates and counts of over 5,000 (approx.). I created certain rules after observing most of the series individually. <strong>All the decisions I made on the post-processing rules were based on judgement</strong> - nothing coded, nor something that could safely be reused as logic without observing the series. To manage time, I have ignored cases with larger counts (which is very bad from an epidemiological point of view) given the RMSLE metric (which I generally don't like for this problem).</p> <p>The reasoning behind the rules:</p> <ul> <li><p>"first_10_then_1_": This rule was mainly used in Chinese provinces for cases that still had some minor increments. It applies a rule (e.g. decay) in the first 10 days and then stops the growth. I anticipated that even if there are some new cases, China is now very efficient at containing the outbreak with the surveillance measures and the health app system.</p></li> <li><p>"_last_12_linear_inter_": I am an optimistic person. If for some reason the model was not picking up that the rate at some point needs to start decreasing, I was forcing a linear decay in the last 12 days. I have no idea how this will play out, because we have not reached that point yet in any of the competitions. For example, if the rate at day 19 is 1.12, on day 20 it will become 1.11, then 1.10, etc. I do not have a great scientific explanation for why I do this, other than that I compared some of the curves against others from the Chinese provinces at similar count levels and saw that 18 days was around the average for which the rates were still going up. Maybe not a great assumption given it was based on one geographical area only - but this is all I had at that point.</p></li> <li><p>"decay_x": This rule is the most common one. I apply various decays to the predicted rates (a short sketch of this kind of decay is included at the end of this post). A lot of it is based on expectations and on comparing series with what happened to other countries/states when the counts were at similar levels, preferably close in geography. I also used my knowledge of what rules/measures have been applied in the countries when making these judgements.</p></li> <li><p>"combination of all the above": you can apply any decay on the first 2 rules</p></li> <li><p>"accelerate": Being an optimistic person, I have generally avoided using that. It is the opposite of decay.</p></li> <li><p>"stay_same": rate=1 - mostly applied to Chinese provinces, low-count countries, Diamond Princess.</p></li> <li><p>"normal": Don't change the rate - trust the model as is</p></li> <li><p>"fatality": Rule applies only to fatalities - it excludes confirmed. For the week 2 model, I completely ignored fatalities when making judgements. From week 3 onward, I started making special judgements for fatalities.</p></li> </ul> <p>Now that I have a sample of making these decisions - when the competition is over, I am hoping to find a pattern that can be applied without judgement. E.g. evaluate the judgements and find what would have been the correct decay. </p> <p>For the last week, I created a simple file that has the predictions of the new model without applying most of the rules versus the predictions of the model of the previous week, and saw how over/under I was. I used this to aid my decisions. 
I have a screenshot attached.<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F111640%2F27591e50f9509eba9c71e2813c745fad%2FComparing_predictions.png?generation=1587056981562287&amp;alt=media" alt=""></p> <h1>Luck/guessing</h1> <p>I have taken many guesses in this competition. For example:</p> <ul> <li>China won't generally have new fatalities or many new confirmed cases, especially in the smaller provinces. Same with the Diamond Princess. </li> <li>Tibet will remain with that single 1 confirmed case. If this goes to 1000, maybe I am out of the competition.</li> <li>Geographies that have had generally 0 fatalities for 2-3 weeks will remain at 0 unless confirmed cases exceed 400. </li> <li>News and dashboards for recent information and trends (from what people have posted on the forums)</li> <li>Countries that enter later are more prepared/cautious about the virus and are better equipped to deal with this, since they have an idea of what measures might be more effective and/or how serious this can be. I have applied stronger decays to countries that enter now. </li> </ul> <p>Happy to answer any questions. </p>
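<p>To make the "decay_x" idea above a little more concrete, here is one possible reading of such a rule as code (a sketch only - the actual adjustments were made by hand per series, and all numbers below are invented):</p> <pre><code>import numpy as np

def apply_decay(pred_rates, decay=0.95):
    # Shrink the predicted daily multipliers towards 1 by a constant factor per day.
    out, f = [], 1.0
    for r in pred_rates:
        f *= decay
        out.append(1.0 + (r - 1.0) * f)
    return np.array(out)

def rates_to_counts(last_value, rates):
    # Turn a sequence of daily multipliers back into cumulative counts.
    return last_value * np.cumprod(rates)

rates = np.full(28, 1.08)                  # model says +8% per day
forecast = rates_to_counts(5000, apply_decay(rates, decay=0.93))
</code></pre>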
COVID19 Global Forecasting (Week 5)
Some ML, A lot of judgement and luck
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Intro</h1> <p>The title may not seem very inspiring when you try to solve something so serious, but it is what worked semi-OK (for the first 2 weeks at least ) in this competition series( given a large horizon). I think predictions of that length (e.g 30 days ahead) require a lot of guessing and luck if you are not an expert epidemiologist (and quite possibly even if you are one too).</p> <p>There are so many things which are unknown when you try to predict that far ahead, like:</p> <p>Will the vaccine come out? (ok this one might not really be the case here - I think we are still early for that ) or an efficient drug? Are the people really following the social distancing rules or are they going to? what rules will the governments put in place? How strict will the government be at enforcing them? How will the infection rate differ based on how people socialise in different countries/areas? etc.</p> <p>Needless to say that the data is not perfect.</p> <p>Short term predictions (maybe within 3-5 days or so) could be modelled reasonably well. I hope you don't mind me <a href="https://www.h2o.ai/blog/modelling-currently-infected-cases-of-covid-19-using-h2o-driverless-ai/">sharing a blog</a> that summarises some ideas on how this could be done for short horizons on the same underlying data for active cases - many of the tricks mentioned there were applied here as well particularly in the <strong>pre-processing of the data</strong> and <strong>post-processing</strong> of the predictions . </p> <p>I still believe that an ML practitioner in tandem with an experienced epidemiologist as a combo can work better than any individual on her/his own, particularly if you try to make predictions at scale for many countries/counties/cities at the same time. </p> <p>The inference kernel (with commented code for training ) <a href="https://www.kaggle.com/kazanova/script-with-commented-train-counts-for-zeros?scriptVersionId=32084418">for week 4 is here</a></p> <p>I have the same version with <a href="https://www.kaggle.com/kazanova/fork-of-script-with-commented-train-counts-for-zer?scriptVersionId=32113592">training included here</a> :</p> <p>(It scores a tiny bit differently 0.02549 versus 0.02551 than my uploaded models although I have used same seeds - maybe because I run on a windows laptop - or different lightgbm version - not sure) . </p> <h1>The ML part</h1> <p>The Power Growth model has worked quite well to map epidemics in the past and I think many people have made good implementations here. </p> <p>I have tried a simple implementation myself - the problem was that adjusting the growth was challenging without expert knowledge (at least for me)- so so I tried to model growth specifically. Instead of predicting counts, or log counts, what I tried to predict was:</p> <p>" with what value do I need to multiply the previous cumulative value every day". For weeks 2,3, I modelled 60 separate models. One model for each plus+x day for the confirmed and fatalities. 
</p> <p>I used <a href="https://lightgbm.readthedocs.io/en/latest/">lightgbm </a>to build the models.</p> <p>The features were : - In how many days ago was X cumulative count reached. Where X for confirmed cases was <code>[1,5,10,20,50,100,250,500,1000]</code> days and for fatalities <code>[1,2,5,10,20,50]</code> - up to 10 lags of 3-size-window average rates for both confirmed and fatalities - the actual confirmed and fatalities as row counts for day 0.</p> <p>The target is the average-3-day rate for each one of the +x days. So instead of modelling the rate of <code>future1/current</code>, i took the average of <code>future1/current + future2/future1 + future3/future2</code> when predicting the target for day+1. That is because it has many spikes, so the average smoothness the target and (I think) controls for overfitting. </p> <p>For week 2, I only validated through observation - no cv. For week 3 I validated based on the week 2 results and for week 4 on both week2 and 3. A <strong>fundamental FLAW</strong> of both week2 and week3 models is that they ignore the 0s. The model requires an initial value of &gt;=1 to add the growth , so I expect the models down the stretch to do much worse (unless I am lucky - both me but most importantly the world and we don't have new deaths). The rules I used to initialise values for the zero cases were poor (to say the least) . </p> <p>In week 4, I have added a model that tries to predict the 0 cases modelling the difference in log counts between days .The model setup and features are the same, the only difference is that instead of only rates, the model also uses difference of log counts for the same windows. In that model I have also added USA county level data - i saw a tiny improvement from adding that (probably because it is heavily weighted towards 0 as the cumulative values are generally smaller). Other than that, I have found no other external resource useful (for longer term predictions). I have tried adding geographical data and recoveries with no success. </p> <p>Sadly these models are not very powerful - but maybe not too bad either. I would have been much further down without <strong>post processing</strong> and I would be drifting down very heavily. </p> <p>The problem with this approach is that it completely ignores the metric and a small error in the rate prediction can make a series go berserk. I saw quite a few series that for example started with 20 confirmed cases and my model was predicting it will reach to more than 30k (which was obviously a bit too dangerous to accept). Another problem is that the models are heavily influenced by what happened in China - it is the area that has the most future points available (to compute the target). In China the treatment of the outbreak was handled very differently than other areas and generally the models expected a much quicker plateauing (and different initial growth rates) than what is actually happening in many other parts of the world. This is where <strong>post-processing</strong> helped.</p> <h1>Post Processing</h1> <p><strong>Before I elaborate on this - I heavily applaud and congratulate people that have modelled this without much judgement or post-processing. They have definitely added much more than this solution ever will</strong>. I found not doing the post processing very challenging to get a highly competitive score (for the first 2 weeks at least). </p> <p>Back to the post-processing, this is the most important part of my solution. 
The ML model gave a curve to use as a basis and make decisions on it. The model was not too bad on cases with stable growth rates and counts of over 5,000 (approx). I created certain rules after observing most of the series individually. <strong>All the decisions I made on the post processing rules, were based on judgement</strong> - nothing coded or something that could be reused as a logic safely without observing the series. To manage time, I have ignored cases with larger counts (which is very bad from an epidemiological point of view ) subject to RMSLE (that I generally don't like for this problem) .</p> <p>The reasoning behind the rules: </p> <ul> <li><p>"first_10_then_1_" : This rule was mainly used in China provinces for cases that still had some minor increments. it applies rule (e.g decay) in the first 10 days and then it stops the growth. I anticipated that even if there are some new cases , China is now very efficient at containing the outbreak with the surveillance measures and the health app system.</p></li> <li><p>"_last_12_linear_inter_" : I am an optimistic person. if for some reason the model was not picking up that the rate at some point needs to start decreasing , I was forcing a linear decay in the last 12 days. I have no idea how this will play out. because we have not reached that point yet in any of the competitions. for example if the rate at day 19 is 1.12. In day 20 it will become 1.11 , then 1.10, etc. I do not have a great scientific explanation to tell you why I do this, other than I compared some of the curves against some others from the Chinese provinces at similar count levels and saw that 18 days was around the average the rates were still going up. Maybe not a great assumption given it was based on a geographical area only - but this is all I had at that point.</p></li> <li><p>" decay_x" : This rule is the most common one. I apply various decays in the predicted rates. A lot of it is based on expectations and comparing series with what happened to other countries/states when the counts were at similar levels , preferably close in geography. I also used my knowledge of what rules/measures have been applied to the countries when making these judgements . </p></li> <li><p>"combination of all the above" : you can apply any decay on the first 2 rules</p></li> <li><p>"accelerate" :Being an optimistic person, generally, I have avoided using that. It is the opposite of decay. </p></li> <li><p>"stay_same": rate=1 - mostly applied to Chinese Provinces, low count countries, Diamond princess.</p></li> <li><p>"normal" : Don't change the rate - trust the model as is</p></li> <li><p>"fatality" : Rule applies only on fatalities - it excludes confirmed. For week 2 model, I completely ignored fatalities when made judgements. From week 3 onward, I started making special judgements for fatalities.</p></li> </ul> <p>Now that I have a sample of making these decisions - when the competition is over, I am hopping to find a pattern that can be applied without judgement. E.g evaluate the judgements and find what would have been the correct decay. </p> <p>For the last week. I created a simple file that has the predictions of the new model without applying most of the rules versus the predictions of the model of the previous week and saw how over/under I was. I used this to aid my decisions. 
I have a screenshot attached.<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F111640%2F27591e50f9509eba9c71e2813c745fad%2FComparing_predictions.png?generation=1587056981562287&amp;alt=media" alt=""></p> <h1>Luck/guessing</h1> <p>I have taken many guesses in this competition. For example:</p> <ul> <li>China won't generally have new fatalities or many new confirmed cases especially in the smaller provinces. Same with Diamond Princess. </li> <li>Tibet will remain with that single 1 confirmed case . If this goes to 1000 -, maybe I am out of the competition.</li> <li>For 2,3 weeks that generally 0 fatalities -will remain 0 unless confirmed cases exceed 400. </li> <li>News and Dashboards for recent information and trends (from what people have posted on the forums)</li> <li>Countries that enter later are more prepared/cautious about the virus and are better equipped to deal with this, since they have an idea for what measures might be more effective and/or how serious this can be. I have applied stronger decays to countries that enter now. </li> </ul> <p>Happy to answer any questions. </p>
Plant Pathology 2020 - FGVC7
33rd->27th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to everyone. Many thanks to Kaggle and CVPR for holding this nice competition. We are very lucky to win 33rd place. We were concerned about data balance and overfitting, so we share our solution:</p> <ul> <li>Augmentation: A.IAAFliplr(p=0.5), A.IAAFlipud(p=0.5), A.FancyPCA(alpha=0.1, always_apply=False, p=0.5), A.OneOf([A.IAAEmboss(p=1.0), A.IAASharpen(p=1.0), A.Blur(p=1.0)], p=0.5), A.OneOf([A.ElasticTransform(p=1.0), A.IAAPiecewiseAffine(p=1.0)], p=0.5), A.Normalize(mean=RGB_MEAN, std=RGB_STD, p=1.0), ToTensorV2(p=1.0). We used so many augmentations because we did not have time to experiment with what works (a short sketch of this pipeline as an albumentations Compose follows at the end of this post).</li> <li>Backbone: efficientnet-b6 and efficientnet-b5. We changed the model: model1 = cbam_EfficientNet_B6, model2 = cbam_EfficientNet_B5, features = model1._fc.in_features + model2._fc.in_features. We also added an 'senet' block but it did not work.</li> <li>Image size: 350x350</li> <li>Loss: CBLoss (did not work), categorical cross entropy + label smoothing</li> <li>Other: SMOTE (oversampling), 5 folds, 4x TTA</li> <li>Final: ensembling other models improved the Public Leaderboard 0.979 -&gt; 0.981 and the Private Leaderboard 0.975 -&gt; 0.976</li> </ul> <p>One more thing: do not trust the LB, trust your CV. You can ensemble many, many models, but it is not necessarily needed.</p>
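<p>For readability, the augmentation list above can be assembled into a pipeline roughly as follows (a sketch: RGB_MEAN/RGB_STD are assumed to be ImageNet statistics since the post does not define them, and the IAA* transforms require an older albumentations release that still ships the imgaug wrappers):</p> <pre><code>import albumentations as A
from albumentations.pytorch import ToTensorV2

RGB_MEAN = (0.485, 0.456, 0.406)   # assumed ImageNet stats
RGB_STD = (0.229, 0.224, 0.225)

train_tfms = A.Compose([
    A.IAAFliplr(p=0.5),
    A.IAAFlipud(p=0.5),
    A.FancyPCA(alpha=0.1, always_apply=False, p=0.5),
    A.OneOf([A.IAAEmboss(p=1.0), A.IAASharpen(p=1.0), A.Blur(p=1.0)], p=0.5),
    A.OneOf([A.ElasticTransform(p=1.0), A.IAAPiecewiseAffine(p=1.0)], p=0.5),
    A.Normalize(mean=RGB_MEAN, std=RGB_STD, p=1.0),
    ToTensorV2(p=1.0),
])
</code></pre>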
COVID19 Global Forecasting (Week 4)
Linear Regression Is All you Need (9th solution week 4)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I said I'll describe my solution if it ended up in the top 10, so here it is. </p> <h1>Acknowledgements</h1> <p>First of all, I'd like to thank Kaggle for this challenging series of forecasting competitions. I say challenging for two reasons. First, this is about people being sick and people dying. It is hard to abstract from that dire reality. Second, one-week sprints are too short for me. I need more time to immerse in a given machine learning problem. In this case it took me 4 weeks and 2 days to find a model I was happy with.</p> <p>I also want to thank my teammates for the first 3 weeks, Ahmet and Giba. They preferred to go solo last week but I would have been happy to stick together as a team. We used very different models (NN for Ahmet, xgb/lgb for Giba, lr for me) with the hope that together they would work well. It actually did in week 2, but we didn't select it at the last minute because of some CV issue. It would have landed us at rank 8 unless I am mistaken. Week 3 was a bit disappointing for us. </p> <p>I also warmly thank those who curated the additional datasets I used, especially Vopani. Using these made a difference.</p> <p>Last but not least I want to congratulate those who did consistently well in these competitions, and especially those who did well in week 4. <a href="/david1013">@david1013</a> seems to have great models, even when not using an extra day of data, if I read his recent posts correctly. It would be interesting to know the Lockdown and Kaz model scores when not using an extra day as well. Paulo Pinto's performance in week 4 is awesome. I can't name them all, but let me mention the PDD team, who did well in several weeks too. </p> <h1>Modeling Goal</h1> <p>I decided early on not to approach this as I usually build forecasting models. The reason is we are modeling a physical phenomenon where people get infected, then infect others for some time, then either recover or die. I therefore studied <a href="https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology">compartmental models used in epidemiology</a>, SIR and the like. These models rely on two time series: cases and recoveries/deaths. If we have accurate values for both then we can fit these models and get reasonably accurate predictions. </p> <p>The issue is we don't have these series. </p> <p>For cases we have a proxy, confirmed cases. This is a proxy in many ways:</p> <ul> <li>It depends on the testing policy of each geography. Some test a lot, and confirmed cases are close to all cases.</li> <li>A large fraction of sick people are asymptomatic, hence are easily missed by testing.</li> <li>Testing does not happen when people get infected or contagious; it often happens with a delay.</li> </ul> <p>For all these reasons the confirmed case numbers we get are a distorted view of actual cases.</p> <p>For fatalities the numbers aren't accurate either: in some geos we only get deaths recorded at hospitals, while in others the count includes fatalities from nursing homes; and we don't have recoveries data. The latter can be fixed by grabbing recovery data from other online sources. 
This has been done by some top competitors; I wish I had done it.</p> <p>Despite all these caveats, I assumed that we still have some form of SIR model at play with the two series we have at hand: fatalities depend on cases detected some while ago. That led to my first model.</p> <h1>Fatalities Prediction Via Linear Regression</h1> <p>If we assume that fatalities are a proportion of cases from, say, 8 days ago, then the proportion can be computed from log-scaled data. See the examples below for some countries. The green curve is the difference between confirmed cases (blue) and fatalities 8 days later (orange).</p> <p>Italy:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2Fd840d692feb332ba0ba753ab56577def%2Fitaly.png?generation=1585856510539884&amp;alt=media" alt=""></p> <p>France:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2F148776fdca0ba1cc6dde63dbc7c720b9%2Ffrance.png?generation=1585856534028494&amp;alt=media" alt=""></p> <p>Spain: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2F4a4e692a327677c254dd0a6a402caff1%2Fspain.png?generation=1585856554679533&amp;alt=media" alt=""></p> <p>Then running a linear regression with the log of past cases as input and the log of fatalities as output should capture this dependency. I published a simplified version in <a href="https://www.kaggle.com/cpmpml/fatalities-prediction-via-linear-regression">a notebook</a>. The above pictures are taken from it.</p> <p>The fact that the proportion depends on the geography can be captured via additional input features. In practice I found that the increase in log fatalities is well captured by a linear regression on cases from between 15 and 5 days ago. These limits were tuned via cross-validation. They match what researchers say about Covid.</p> <h1>Cross Validation</h1> <p>We used time-based cross-validation for our models. We selected 3 to 5 folds, depending on the week's competition. Each fold uses data up to a given date for training models, and data after it for validation. One fold was defined by the actual test data: validation starts at the first day of test data, even though train and test data overlap.</p> <p>Models in our team were trained to predict the next day. To predict over a longer period we append the prediction to the train data, then apply the model to it shifted by one day to get the prediction for the second day. We repeat this for all remaining days in validation or test. Other teams trained one model per prediction day. I didn't think of it, but it is definitely doable given how fast my model is to train.</p> <h1>Predicting cases</h1> <p>For cases I used an autoregressive model, i.e. linear regression using past values as input. That model was not that bad, but it turned out to be pessimistic. The flattening of curves we observed wasn't captured correctly. I spent the last 2 weeks thinking about how to model this. I tried to revisit the epidemiology models, but couldn't find a way to reliably fit them for all geos. I didn't want to add an arbitrary decay, or fit a curve either. I therefore turned back to data exploration. 
And the day before the deadline I found that a log plot of the decay in daily cases was very interesting; see some plots below.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2Fdf16b94f837e4ed920730f7375df613a%2Fweek4.png?generation=1587030089243556&amp;alt=media" alt=""></p> <p>It means that daily cases decrease linearly in log space, i.e. the cumulative number of cases is of the form N - exp(a t), where N is a starting population, a is a constant, and t is the number of days. I explain this derivation <a href="https://www.kaggle.com/c/covid19-global-forecasting-week-5/discussion/143791">here</a>.</p> <p>The model is therefore fitting a linear model on the log of daily cases over time. </p> <p>This model works well in geos where the number of cases is high, but fails miserably in geos where the epidemic is just starting. I therefore blended it with the previous one, with weights depending on the number of cases a while ago, 12 days if I remember well. This model improved CV scores tremendously compared to the previous one. It was modeling curve flattening rather well, at last.</p> <p>I then tried the same with fatalities: they indeed decrease linearly in log space, but the data is sparse: I could fit a linear regression reliably only for geos with a rather large number of cases. Fortunately for people and unfortunately for my model, there were not that many geos like this. Then I realized I could use the same trick as in my very first model: the log of fatalities depends linearly on past log cases. I could therefore fit a linear regression with past log cases as input and the log of fatalities as output. I used the same past period, i.e. cases between 15 and 5 days ago more or less (I don't remember the exact figures). My fatalities model is therefore a blend of the original linear model from past cases, a linear regression fit on past log fatalities, and a linear regression fit on past log cases. </p> <p>That's what I submitted. </p> <h1>Post Deadline</h1> <p>I am rather pleased to see that a simple blend of linear models was competitive. I wasn't sure I could handle differences in geos via adaptive weights when blending, but it seems it was good enough.</p> <p>After one day of rest after the deadline I realized I could have done a better job. If fatalities depend on the slope of the log of cases, and if that slope is constant, then why estimate it on days from 15 to 5 days before, and not on days from, say, 10 days ago to the current day? Using more recent data is always better, so why not here? I implemented this small change and used a late submission. This improved model's score was similar to the one I submitted in time until the last week, where it started to outperform it. Its final private LB score is 0.41840. The bottom line is it took me 4 weeks and 2 days to get the right model...</p> <h1>Week 5</h1> <p>I contemplated entering week 5 but didn't for 3 reasons. One reason is I was bothered by the extra day of data from worldometer. I saw people who used it in week 4 ask for it to be forbidden. It would have been a smart move IMHO. Another reason, more important, is that a one-week sprint with changes in the competition setting was too short for me. The last reason may be even more important: I think my model works better when there are high levels of cases and fatalities. 
The inclusion of smaller-grained geos like US counties probably means that my model would not be a great fit.</p> <p>I admire those who entered week 5 and I'm looking forward to their writeups when the competition completes!</p>
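<p>To illustrate the two core ideas above - fatalities as a linear function of lagged log cases, and daily cases decaying linearly in log space - here is a minimal sketch (a paraphrase under assumptions, not the author's code; the 15-to-5-day lag window is from the text, while the fit window, horizon and everything else is made up):</p> <pre><code>import numpy as np
from sklearn.linear_model import LinearRegression

def fit_fatalities_model(cases, fatalities, lag_min=5, lag_max=15):
    # Regress log1p(fatalities) on log1p(cases) from lag_max..lag_min days earlier.
    X, y = [], []
    for t in range(lag_max, len(cases)):
        X.append(np.log1p(cases[t - lag_max: t - lag_min + 1]))
        y.append(np.log1p(fatalities[t]))
    return LinearRegression().fit(np.array(X), np.array(y))

def log_linear_daily_forecast(cum_cases, fit_days=14, horizon=30):
    # Fit a straight line to log(daily new cases) and extrapolate it forward.
    daily = np.diff(cum_cases).astype(float).clip(min=1e-3)
    t = np.arange(len(daily))
    slope, intercept = np.polyfit(t[-fit_days:], np.log(daily[-fit_days:]), deg=1)
    future_t = np.arange(len(daily), len(daily) + horizon)
    return cum_cases[-1] + np.cumsum(np.exp(intercept + slope * future_t))
</code></pre>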
COVID19 Global Forecasting (Week 5)
Linear Regression Is All you Need (9th solution week 4)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I said I'll describe my solution if it ended up in top 10, so here it is. </p> <h1>Acknowledgements</h1> <p>First of all, I'd like to thank Kaggle for this challenging series of forecasting competitions. I say challenging for two reasons. First, this is about people being sick and people dying. It is hard to abstract from that dire reality. Second, one week sprints are too short for me. I need more time to immerse in a given machine learning problem. In this case it took me 4 weeks and 2 days to find a model I was happy with.</p> <p>I also want to thanks my teams mates for first 3 weeks, Ahmet and Giba. They preferred to go solo last week but I would have been happy to stick as a team. We used very different models (NN for Ahmet, xgb/lgb for Giba, lr for me) with he hope that together it would work well. It did actually in week2 but we didn't select it at the last minute because of some cv issue. It would have landed us at rank 8th unless mistaken. Week 3 was a bit disappointing for us. </p> <p>I also warmly thank those who curated the additional datasets I used, especially Vopani. Using these made a difference.</p> <p>Last but not least I want to congratulate those who did consistently well in these competitions, and especially those who did well in week 4. <a href="/david1013">@david1013</a> seems to have great models, even when not using an extra day of data if I get his recent posts correctly. It would be interesting to know Lockdown and Kaz model score when not using an extra day as well. Paulo Pinto performance in week 4 is awesome. I can't name all but let me mentions PDD team who did well in several weeks too. </p> <h1>Modeling Goal</h1> <p>I decided early on to not approach this as I usually build forecasting models. Reason is we are modeling a physical phenomenon where people get infected, then infect others for some time, then either recover or die. I therefore studied <a href="https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology">compartmental models used in epidemiology</a>, SIR and the like. These models rely on two time series: cases and recoveries/deaths. If we have accurate values for both then we can fit these models and get reasonably accurate predictions. </p> <p>Issue is we don't have these series. </p> <p>For cases we have a proxy, confirmed cases. This is a proxy in many ways: - It depends on the testing policy of each geography. Some test a lot, and confirmed cases are close to all cases. - A large fraction of sick people are asymptomatic, hence are easily missed by testing. - Testing does not happen when people get infected or contagious, it often happens with a delay. For all these reasons the confirmed case nubers we get is a distorted view of actual cases.</p> <p>For fatalities the numbers aren't accurate either; - In some geos we only get deaths test at hospital, in other geos it includes fatalities from nursing homes. - We don't have recoveries data. The latter can be fixed by grabbing recovery data from other online source. 
This has been done by some top competitors, I wish I had done it.</p> <p>Despite all these caveat, I assumed that we still have some form of SIR model at play with the two series we have at hand: fatalities depend on cases detected some while ago. That led to my first model.</p> <h1>Fatalities Prediction Via Linear Regression</h1> <p>If we assume that fatalities are a proportion of cases from, say, 8 days ago, then the proportion can be computed from log scaled data. See examples below for some countries. Green curve is the difference between confirmed cases (blue) and fatalities 8 days later (orange)</p> <p>Italy:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2Fd840d692feb332ba0ba753ab56577def%2Fitaly.png?generation=1585856510539884&amp;alt=media" alt=""></p> <p>France:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2F148776fdca0ba1cc6dde63dbc7c720b9%2Ffrance.png?generation=1585856534028494&amp;alt=media" alt=""></p> <p>Spain: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2F4a4e692a327677c254dd0a6a402caff1%2Fspain.png?generation=1585856554679533&amp;alt=media" alt=""></p> <p>Then running a linear regression on log of past cases as input, and log of fatalities as output should capture this dependency. I published a simplified version in <a href="https://www.kaggle.com/cpmpml/fatalities-prediction-via-linear-regression">a notebook</a>. The above pictures are taken from it.</p> <p>The fact that the proportion depends on the geography can be captured via additional input features. In practice I found that log fatalities increase is a linear regression of cases from between 15 to 5 days ago. These limits were tuned via cross validation. They match what researchers say about Covid.</p> <h1>Cross Validation</h1> <p>We used time based cross validation for our models. We selected 3 to 5 folds, depending on the week competition. Each fold uses data until a given data for training models, and data after for validation. One fold was defined by the actual test data: validation starts at the first day of test data, even though train and test data overlap.</p> <p>Models in our team were trained to predict next day. To predict on a longer period we append the prediction to train data, then apply to model on it shifted by one day to get the prediction for the second day. We repeat for all remaining days in validation or test. Other teams trained one model per prediction day. I didn't thought of it but it is definitely doable given how fast training my model is.</p> <h1>Predicting cases</h1> <p>For cases I used an auto regressive model, i.e. linear regression using past values as input. That model was not that bad, but it turned out to be pessimistic. The flattening of curves we observed wasn't captured correctly. I spent the last 2 weeks thinking about how to model this. I tried to revisit the epidemiology models, but couldn't find a way to reliably fit them for all geos. I didn't want to add an arbitrary decay, or fit a curve either. I therefore turned back to data exploration. 
And the day before deadline I found that a log of the decay in daily cases was very interesting, see some plots below.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2Fdf16b94f837e4ed920730f7375df613a%2Fweek4.png?generation=1587030089243556&amp;alt=media" alt=""></p> <p>It means that daily cases decrease linearly in the log space. It means that the number of daily cases is of the form N - exp(a t) where N is a starting population, a a constant, and t the number of days. I explain this derivation <a href="https://www.kaggle.com/c/covid19-global-forecasting-week-5/discussion/143791">here</a>.</p> <p>The model is therefore fitting a linear model on the log of cases over time. </p> <p>This model works well in geos where the number of cases is high, but fails miserably in geos where epidemic starts. I therefore blended it with the previous one, with weights depending on the number of cases a while ago, 12 days if I remember well. This model improved CV scores tremendously compared to the previous one. It was modeling curve flattening rather well, at last.</p> <p>I then tried the same with fatalities: They indeed decrease linearly in the log space, but the data is sparse: I could fit a linear regression reliably only for geos with a rather large number of cases. Fortunately for people and unfortunately for my model, there were not that many geos like this. Then I realized I could use the same trick as in my very first model: log of fatalities depend linearly on past log cases. I could therefore fit a linear regression on past log cases as input, and log of fatalities as output. I used the same past period, i.e. cases between 15 to 5 days ago more or less (I don't remember the exact figures). My fatalities model is therefore a blend of the original linear model from past cases, a linear regression fit on past log fatalities, and a linear regression fit on past log cases. </p> <p>That's what I submitted. </p> <h1>Post Deadline</h1> <p>I am rather pleased to see that a simple blend of linear models was competitive. I wasn't sure I could handle differences in geos via adaptive weights when blending, but it seems it was good enough.</p> <p>After one day of rest after deadline I realized I could have done a better job. If fatalities depend on the slope of the log of cases, and if that slope is constant, then why estimate it on days from 15 to 5 days before, and not on days from, say 10 days to current day? Using more recent data is always better, then why not here? I implemented this small change and used late submission. This improved model score was similar to the one I submitted in time until last week where it started to outperform it. Its final private Lb score is 0.41840. Bottom line is it took me 4 weeks and 2 days to get the right model...</p> <h1>Week 5</h1> <p>I contemplated entering week 5 but didn't for 3 reasons. One reason is I was bothered by the extra day data from world odometer. I saw people who used it in week 4 ask for it to be forbidden. It would have been a smart move IMHO. Another reason, more important, is that a one week sprint with changes in competition setting was too short for me. Last reason may be even more important. I think my model works better when there are high level of cases and fatalities. 
The inclusion of smaller grained geos like US counties probably mean that my model would not be a great fit.</p> <p>I admire those who entered week 5 and I'm looking forward to their writeups when the competition completes!</p>
University of Liverpool - Ion Switching
9th place short summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Short summary of our solution. </p> <p>1) The 'magic' shift of 2.724 for the 11-class batches and deleting the 7th batch from train. It was our first "discovery" in this competition.</p> <p>2) A combination of some Wiener filters and a bandstop filter for signal cleaning. It helped a lot for simple models (GMM 0.941 on CV, Naive Bayes 0.942 on CV) but unfortunately it had no effect on our WaveNet. So we just used it for our GMM, and afterwards the GMM weighted prediction and GMM probabilities were used as features for our NN.</p> <p>3) Cleaning of the 50 Hz noise (a small sketch of a notch filter for this is added below).</p> <p>What we tried that didn't help:</p> <p>1) HMM sampling 2) blending/stacking 3) TCNN 4) CRF</p>
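<p>For reference, the 50 Hz clean-up can be done with a simple notch (band-stop) filter. A minimal sketch (the 10 kHz sampling rate matches the competition data; the Q factor is an assumed value, and this is not necessarily the exact filter the team used):</p> <pre><code>import numpy as np
from scipy.signal import iirnotch, filtfilt

def remove_mains_hum(signal, fs=10_000, mains_hz=50.0, q=30.0):
    # Notch out the mains frequency; filtfilt keeps the phase unchanged.
    b, a = iirnotch(w0=mains_hz, Q=q, fs=fs)
    return filtfilt(b, a, signal)

cleaned = remove_mains_hum(np.random.randn(100_000))
</code></pre>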
M5 Forecasting - Accuracy
Rank 172, sharing code for the first time.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Referring to the code of Konstantin Yakovlev, the holiday features for each store are built on top of his work, and the final 28-day forecast is obtained by a weighted fusion of by-store and by-state predictions (a small sketch of this kind of fusion is added below).<br> Code link: <a href="https://github.com/Greak-1124/Kaggle_M5_Forecasting_Accuracy_Pro" target="_blank">https://github.com/Greak-1124/Kaggle_M5_Forecasting_Accuracy_Pro</a></p>
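<p>A minimal sketch of the by-store / by-state fusion mentioned above (the file names, column layout and 50/50 weights are placeholders for illustration, not the values used in the repository):</p> <pre><code>import pandas as pd

by_store = pd.read_csv("submission_by_store.csv").set_index("id")
by_state = pd.read_csv("submission_by_state.csv").set_index("id")

w_store, w_state = 0.5, 0.5            # placeholder weights
fused = w_store * by_store + w_state * by_state
fused.reset_index().to_csv("submission.csv", index=False)
</code></pre>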
iWildCam 2020 - FGVC7
LB #2 Documentation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iWildCam 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Apologies for the late submission documentation, and thanks to the organisers of the competition - this was great fun.</p> <p>After attempting many different approaches my highest scoring solution was generally as follows:</p> <ol> <li>Removed noisy and/or 'human' class categories - 'start', 'end', 'unknown', and 'unidentifiable'. Retaining the empty class seemed important for reducing activation on background to generalise better.</li> <li>Incorporated iNat 2017/2018 data for shared classes.</li> <li>Ran MegaDetector v4 on all images.</li> <li>Took only animal detection crops with confidence &gt;0.3 and discarded detections which were too small. Also discarded crops within the same image if they were much less confident than the highest.</li> <li>Made sure to expand the height of the crop to be 224 pixels, even if the bounding box was smaller, then used reflection padding to square the crop and resized to 224x224 resolution.</li> <li>Applied various data augmentations including CLAHE, hue shift, gaussian noise, cutout, scale/rotate/shift, brightness/contrast adjustments and grayscale.</li> <li>Created a custom classifier head which incorporated metadata from the image. This used cos/sin representations of the time of year and time of day, and information regarding the crop (width, height, pitch, yaw); a short sketch of this encoding follows at the end of this post.</li> <li>Trained for 7 epochs with a single EfficientNetB4 model pretrained with imagenet noisystudent weights, using label-smoothed cross-entropy focal loss. This was 2 hours of training.</li> <li>Used TTA with every training augmentation except for noise to get predictions.</li> <li>Combined the predictions for each crop within an image to get a total average for the image.</li> <li>Averaged these predictions across images which were clustered by time and location for the final submission. This was only done as the sequence ID labels appeared to be unreliable.</li> <li>Achieved 0.930 public, 0.903 private LB score. The highest submission with this approach which I didn't pick was 0.908 private.</li> </ol> <p>This was my first proper experience with computer vision so I unfortunately ran out of time to try a lot of the things I wanted due to my slowness :). I wanted to try:</p> <ul> <li>Ensembling different EfficientNets at different input resolutions.</li> <li>Using the location multisat data to create a geo prior with which to augment the prediction. I had initially used this data directly in a single classifier model but it too easily overfit.</li> <li>Creating a more realistic infrared data augmentation rather than relying on grayscale.</li> <li>Using a siamese network to distinguish between the most confused species.</li> <li>Reintroducing the 'vehicle'/'human' megadetector crops with a prior or separate classifier.</li> </ul>
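<p>As referenced in step 7, here is a short sketch of the cos/sin metadata encoding (my own illustration; the rest of the metadata head, e.g. crop width/height and pitch/yaw, is omitted):</p> <pre><code>import numpy as np
from datetime import datetime

def cyclical_time_features(ts: datetime):
    # Put time of day and time of year on the unit circle so that 23:59 is close
    # to 00:01 and late December is close to early January.
    day_frac = (ts.hour * 3600 + ts.minute * 60 + ts.second) / 86400.0
    year_frac = (ts.timetuple().tm_yday - 1) / 365.0
    return np.array([np.sin(2 * np.pi * day_frac), np.cos(2 * np.pi * day_frac),
                     np.sin(2 * np.pi * year_frac), np.cos(2 * np.pi * year_frac)])

feats = cyclical_time_features(datetime(2013, 6, 21, 23, 15, 0))
</code></pre>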
Plant Pathology 2020 - FGVC7
1st place solution(single model)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all. Many thanks to Kaggle and the organizers for holding this competition. And special thanks to my teammates.</p> <h2>Problems with the data</h2> <p>We noticed that there are different labels on identical images; for example, Train_379 and Train_1173 are exactly the same image while they have different labels. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2186594%2Fcb794f440db244c11e0e78966e2202ae%2Fimage.png?generation=1590583887495151&amp;alt=media" alt=""> Some images in the training dataset are generated from the same original image, yet they have different labels. We can tell from the details of images Train_171 and Train_1 that they come from the same original image, but they are labelled differently.<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2186594%2F7e3d1d198e843a907b52e61b0a9a4e14%2Fimage1.png?generation=1590583924522781&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2186594%2Fa953717fab66a2fe1ce37bc3bb646691%2Fimage2.png?generation=1590583935937220&amp;alt=media" alt=""> Therefore, we mainly wanted to solve the problem of noisy labels in the training dataset. We used a knowledge distillation method: first train 5-fold models and collect out-of-fold predictions on the validation folds, then mix the out-of-fold predictions and the ground truth in a 3:7 ratio to form the labels for training a new model.</p> <h2>Model</h2> <h3>Data augmentation</h3> <ul> <li>Brightness, Contrast</li> <li>Blur</li> <li>Flip</li> <li>Shift, Scale, Rotate</li> </ul> <h3>Backbone</h3> <ul> <li>seresnextnet50</li> </ul> <h3>Image size</h3> <ul> <li>320x512</li> </ul> <h3>Loss</h3> <ul> <li>Cross entropy loss</li> </ul> <h3>Inference</h3> <ul> <li>5-fold average and 5 times TTA</li> </ul> <h3>Others:</h3> <p>The amount of data in this competition is very small and the data distribution is imbalanced. A wrong classification of a sample with multiple disease categories will have a significant impact on the final results. So, don’t trust the public leaderboard, trust your CV.</p>
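<p>The 3:7 soft-label mixing is the core of the knowledge-distillation trick, so a minimal sketch may help. It assumes one-hot ground-truth labels and out-of-fold probability predictions stored as NumPy arrays; the names and the normalization step are illustrative, not taken from the authors' code.</p>
```
import numpy as np

def distilled_labels(oof_probs: np.ndarray, onehot: np.ndarray,
                     w_oof: float = 0.3, w_gt: float = 0.7) -> np.ndarray:
    """Mix out-of-fold predictions (n_samples, n_classes) with one-hot ground
    truth in a 3:7 ratio to get soft labels for the next training round."""
    soft = w_oof * oof_probs + w_gt * onehot
    return soft / soft.sum(axis=1, keepdims=True)  # keep each row summing to 1

# The new model is then trained with soft-target cross entropy, e.g. in PyTorch:
# loss = -(soft_labels * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```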
Tweet Sentiment Extraction
42nd place submission | jumped 880 places = lucky seed?
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My public LB score was 0.712 and, when the private leaderboard was released, I jumped 880 spots with a private LB score of 0.719. This was surprising indeed! Because my work relied heavily on <a href="/cdeotte">@cdeotte</a>'s, I kept his original seed of 777. </p> <p>My submission was very similar to the BERTweet public notebook I shared a few weeks ago ( <a href="https://www.kaggle.com/davelo12/bertweet">https://www.kaggle.com/davelo12/bertweet</a> ). This submission scored a 0.718 private LB (silver position) and 0.709 CV.</p> <p>The only change I made to this workbook was to use predictions from RoBERTa instead of BERTweet when there was an unknown token (i.e., the word was missing from the BERTweet vocabulary). This 42nd place submission scored a 0.719 private LB, 0.712 public LB, and 0.709 CV.</p> <p>I elected to use BERTweet over RoBERTa, because I found BERTweet to be more stable (i.e., lower standard deviation across folds). I found that both BERTweet and RoBERTa performed similarly in terms of CV and public LB.</p> <p>Special thanks to <a href="/cdeotte">@cdeotte</a> for his public notebook. I learned a great deal from this. Congrats to all of the winners! </p>
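<p>The key tweak over the public notebook is the fallback rule: if BERTweet's tokenizer has to map any part of a tweet to its unknown token, the RoBERTa prediction is used for that tweet instead. Below is a minimal sketch of how such a check could look with the Hugging Face tokenizer; the prediction dictionaries and variable names are assumptions for illustration, not the author's code.</p>
```
from transformers import AutoTokenizer

# Hypothetical per-tweet predictions from the two models: {textID: selected_text}.
bertweet_preds: dict = {}
roberta_preds: dict = {}

tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", use_fast=False)

def pick_prediction(text_id: str, text: str) -> str:
    ids = tokenizer.encode(text, add_special_tokens=False)
    # Fall back to RoBERTa whenever BERTweet had to use its unknown token.
    if tokenizer.unk_token_id in ids:
        return roberta_preds[text_id]
    return bertweet_preds[text_id]
```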
M5 Forecasting - Accuracy
[Place 99] A few insights (no multiplier)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi,</p> <p>as expected we had a huge shakeup in the leaderboard. I just wanted to share my insights:</p> <p>Multiplier: I did not use multipliers because I think they have no practical use.</p> <p>Memory: I had huge problems with the memory restrictions and the amount of data because I transformed the data into the long format. That's why I moved my training to Google Colab and used Kaggle just for parallel data processing. I only used day 900+ for training. Besides pragmatic considerations, the upward trend in the mean value was an indicator that our time series is not stationary. Therefore, I think it is easier for our model to detect patterns in the data if the targets are within a range.</p> <p>Day-by-day: I chose a day-by-day model despite worse scores, because I was afraid of continuing a possible wrong trend with a recursive model. Since we only had one submission, I chose a conservative approach. I saved a separate training dataset for each day to reduce the amount of data.</p> <p>Features - Lags (a small sketch of these is shown after this write-up):</p> <ul> <li>Rolling lags (7, 14 - I just realized that I forgot to add the other rolling lags such as 28 and 180 etc. in my final model..)</li> <li>Lags (1-7)</li> </ul> <p>Features - Prices:</p> <ul> <li>Price max, min, std, mean</li> <li>New item (yes/no - I created 20 categories and labeled the last two as yes)</li> <li>Time since release</li> <li>Difference of each price to the mean price</li> <li>Price normalization</li> <li>Unique prices for each item</li> <li>Max, min, std, mean seasonal prices (for each year)</li> <li>Difference of each price to the seasonal mean price</li> <li>Rolling price over the last 15 days - difference to this average</li> </ul> <p>Features - Calendar:</p> <ul> <li>Day, week and month of the year</li> <li>Day of the week</li> <li>Week of the month</li> <li>Month of the year</li> <li>End of the week yes/no</li> <li>Start/end of the month yes/no</li> <li>Start/end of the quarter yes/no</li> <li>Start/end of the year yes/no</li> <li>Lags to the events (Many people shop the day before ;) )</li> </ul> <p>Mean encoding:</p> <ul> <li>Id cat, Id dept, Id item mean encoding</li> <li>Id cat, Id dept, Id item std encoding</li> </ul> <p>Loss/objective:</p> <ul> <li>Standard RMSE and tweedie, since I didn't get anything else working well for me</li> </ul> <p>Things which didn't work for me:</p> <ul> <li>Target transformation: dividing the sales of each item by the mean value of the item</li> <li>Target transformation: using revenue as target and revenue as lags, dividing it afterwards by the sell price</li> <li>Target transformation: normalization of each target by item id</li> <li>Target transformation: using the mean of the last month as a multiplier for the other sales</li> </ul> <p>I invested a lot of time in target transformations to make the time series stationary. I wanted to test with the deltas (t - t-1), but unfortunately I didn't have any more time, because I joined very late (<a href="https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/155950">https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/155950</a>).</p> <p>I just wanted to say thank you <a href="/kyakovlev">@kyakovlev</a> for the very valuable input over the whole competition period. You did an awesome job! 
And a special thanks to the people who answered my questions in such detail: <a href="/chrisrichardmiles">@chrisrichardmiles</a> <a href="/tobiit">@tobiit</a> <a href="/miguelpm">@miguelpm</a> </p> <p>I really enjoyed this competition. Best wishes!</p>
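<p>A minimal sketch of the lag and rolling-lag features listed above, using pandas on a long-format sales table; the column names ('id', 'd', 'sales') are assumptions, not taken from the author's code:</p>
```
import pandas as pd

def add_lag_features(df: pd.DataFrame) -> pd.DataFrame:
    """Long-format table with one row per (id, d): columns ['id', 'd', 'sales']."""
    df = df.sort_values(["id", "d"]).copy()

    # Plain lags, 1-7 days back (group-aware shift, so items do not leak into each other).
    for lag in range(1, 8):
        df[f"lag_{lag}"] = df.groupby("id")["sales"].shift(lag)

    # Rolling means over the previous 7 / 14 days, shifted so only past data is used.
    for window in (7, 14):
        df[f"roll_mean_{window}"] = df.groupby("id")["sales"].transform(
            lambda s: s.shift(1).rolling(window).mean()
        )
    return df
```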
Jigsaw Multilingual Toxic Comment Classification
16th Place Solution: Simple Calibrations
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners! Thanks to the Kaggle team, the organizers, and the authors of the educative and interesting notebooks ( @riblidezso , @shonenkov , @jazivxt ). This was my first NLP competition, and here I got acquainted with the SOTA models in NLP (GloVe/FastText + LSTMs -&gt; Transformer -&gt; BERT -&gt; XLM-RoBERTa, an exciting journey). Taking a gold medal would have been too good to be true (I am the one who lost the most positions in the top 10), but anyway I am glad to take my silver medal (my only medal 😃 ).</p> <h1>Things that worked for me</h1> <ol> <li>XLM-RoBERTa with an LSTM + max-pooling head led to <strong>0.9328 private</strong> (0.9346 public)</li> <li>Ensembling the models from the 1st step: <strong>0.9453 private</strong> (0.9474 public)</li> <li>Finding bot comments by popular templates: <strong>0.9458 private</strong> (0.9481 public)</li> <li>Simple calibration for every language in the test set (convert the predicted probabilities from the 3rd step to logits and then add a constant per language; it turned out I needed a positive constant for fr and es, and a negative one for it and pt). These additions gave my final <strong>0.9482 private</strong> (0.9505 public); a small sketch is shown below.</li> </ol> <p>As I can see now, I was overfitting on the ensembles from the 2nd step onwards; I didn't find a good CV strategy.</p>
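<p>The per-language calibration in step 4 is simple but easy to get wrong, so here is a minimal sketch. The shift values in the dictionary are made-up placeholders (the write-up only says fr/es got positive shifts and it/pt negative ones), and the column names are assumptions.</p>
```
import pandas as pd
from scipy.special import logit, expit

# test_df is assumed to have columns: 'lang' and 'toxic' (ensembled probability from step 3).
def calibrate_per_language(test_df: pd.DataFrame) -> pd.Series:
    lang_shift = {"fr": 0.2, "es": 0.2, "it": -0.2, "pt": -0.2, "ru": 0.0, "tr": 0.0}

    p = test_df["toxic"].clip(1e-6, 1 - 1e-6)           # keep the logit finite
    z = logit(p) + test_df["lang"].map(lang_shift).fillna(0.0)
    return expit(z)                                      # back to probabilities
```
<p>Because the metric is AUC computed over all languages jointly, shifting whole language blocks up or down in logit space reorders cross-language pairs and can therefore move the score even though within-language rankings stay unchanged.</p>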
Prostate cANcer graDe Assessment (PANDA) Challenge
Lightgmb crop-wise solution (public 0.9)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Crop-wise model</h2> <p>I think my approach is different from the approaches presented in public kernels, because I did not use an end-to-end model that makes predictions over the whole slide. Instead I trained a network on single 512x512 crops from level 0 to predict the ISUP grade. Of course, it is impossible to accurately predict the class using only one crop from a slide, but this way you can get a model that produces good crop embeddings without troubles with batch size or resolution. For training I used EfficientNet-B5 with softmax activation and crops with a cancer area of more than 10% (any type of cancer).</p> <h2>Aggregation part</h2> <p>After training I split the entire slide into crops (keeping only those in which the proportion of tissue was above 10%) and separately predict embeddings and probabilities. I use the embeddings and probabilities to aggregate features for a LightGBM model:</p> <ul> <li>element-wise statistics on embeddings (min, max, std, median)</li> <li>statistics on probabilities</li> <li>the number of crops with a certain class</li> <li>attention-like features: obviously, not all crops are equally important for predicting the ISUP grade. So, instead of the mean, I use a weighted sum where the weight is the sum of ISUP probabilities (which I have from my EfficientNet-B5 crop-wise model). Therefore, crops containing only healthy tissue will have a low weight, while crops containing a lot of cancerous tissue will have a high weight.</li> </ul> <p>I used an out-of-the-box LightGBM classification model to predict the final slide-level probabilities. Since I trained the crop-wise model on the whole train set, there was a leak in the features, but since strong augmentations were used, it was insignificant. Instead of regression, the final prediction was sum(i*p_i) with [0.5, 1.5, 2.5, 3.5, 4.5] thresholds (i - the i'th ISUP class, p_i - the LightGBM probability of this class); a small sketch of this is shown after this write-up. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F744554%2F25836385b85525212c3acd86ba8e8459%2Farch.png?generation=1595457060891999&amp;alt=media" alt=""></p> <h2>Training dataset modifications</h2> <p>As far as I know, there were some mistakes in the training dataset, so I tried to work around them: if I have, for example, a slide with 4+5, I can add embeddings from another (4+5) slide, and the new sub-slide will have the same 4+5 class. Furthermore, 4+5 means that class 4 has a higher area than class 5, but pathologists could be wrong in determining areas, so I can add 4+4 tissue from another slide to make my current slide more specific. Also, I can add 0+0 tissue to every slide and it will not be harmful. I used these augmentation-like techniques to expand the dataset to 200000 rows (instead of the default 10000). <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F744554%2F489fd9a4595a8973800f1ba367aaf59d%2Fdata_aug.png?generation=1595457693350028&amp;alt=media" alt=""></p> <h2>Embedding TTA</h2> <p>During inference I can make predictions 100 times (taking, for example, a random 90% of the embeddings) and average the probabilities. 
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F744554%2F3de0730d0fff89bac6f719e4094f6cab%2Ftta.png?generation=1595458209610605&amp;alt=media" alt=""></p>
Prostate cANcer graDe Assessment (PANDA) Challenge
This is a complete fiasco - N place solution [17th Public]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Despite the final standing, I'd like to share this post anyway..</strong> <em>Everything below was written before this collapse was revealed lol</em></p> <p>First, thanks to the hosts for making this competition, the most unpredictable and intriguing one in my experience.</p> <p>Additionally, congratulations to the winners and thanks to all kagglers who competed hard and made me worry about staying in the public top 10 all the time. Unfortunately, in roughly the last 2 weeks before the competition ended all new ideas stopped working for me (but not for many others), so my concerns about the top 10 became reality.. So, it was tough but fun!</p> <p>One more thanks goes to the Dutch service provider HOSTKEY (<a href="https://www.hostkey.com/gpu-servers">https://www.hostkey.com/gpu-servers</a>) that granted me free GPU server access during this challenge. This really helped me to investigate and experiment more broadly.</p> <h2><strong>Details:</strong></h2> <p>In the end my solution is quite a simple one and based only on classification, without using the masks. My 2 final solutions are the average and voting ensembles of several models, all trained mostly on a single fold (in fact I added only 2 other models from different folds to increase the stability a bit) with D4 TTA.</p> <p><em><strong>Data:</strong></em> a) Removed several suspicious samples (found somewhere in the discussions), rearranged similar samples into the same folds (didn't bring too much) &amp; 5 folds b) I used only the medium resolution and 256x256 tiles (I was really upset when the combination 36x256x256 was announced on the forum lol) c) I used a slightly different tile-sampling strategy: if the number of tiles in the image is less than 36 -&gt; add the missing tiles by random selection of existing ones. Otherwise, select the 36x1.3 ≈ 46 most informative tiles (to engage more of the available information), always take the first 24 of them, and always take a random 12 from the rest. This was considered an additional "data augmentation / generalisation" method.</p> <p><strong><em>Augmentations:</em></strong> Hflip, Vflip, RandRotate, RandBrightnessContrast, ShiftScaleRotate</p> <p><strong><em>Networks architecture:</em></strong> a) At some point I found that heavy and fancy backbones had almost no impact (I was also quite limited in resources to train with a big batch size), so I stayed mostly with ResNet18/34 with AdaptiveAvgPool2d. (Included one good ResNeXt50 model in the ensemble) b) I extended the baseline networks with an additional convolutional block and spatial attention (this consistently improved my CV but slowed convergence). Before this block, tensors were reshaped back to the "tile level" as [BS, C, H*sqrt(N_tiles), W*sqrt(N_tiles)] c) Gated Attention module after the AvgPool - the most important customisation on the architecture level, and it always helped (a minimal sketch is shown after this write-up). 
<strong><em>Training:</em></strong> a) big enough batch - 24 or 32 b) Adam + ReduceOnPlateau or MultiStep c) CrossEntropy / Focal Loss d) baseline networks ResNet-18 / 34 / ResNeXt50_32x4d_swsl e) warmup f) apex O1</p> <h3><strong>What didn't work (completely, almost, or made no difference):</strong></h3> <ul> <li>Regression</li> <li>Optimising the Gleason scores directly (N+N) - 10 classes</li> <li>EffNets / RegNets / Inception</li> <li>MaxPool / GeM</li> <li>Patch sizes of 128/224/384/512</li> <li>Fancy optimisers such as RAdam/Ralamb/Ranger</li> <li>Custom losses / combinations of losses / OHEM / label smoothing / HybridCappaLoss</li> <li>Stain normalisation (tried to process all training data with Vahadane and Macenko from staintools)</li> <li>The highest resolution didn't improve the performance</li> <li>Tiles assembled into a single big image</li> <li>Multi-task learning (tried: 1 - split classes 0/1 vs 2/3/4/5 -&gt; binary + multiclass on each; 2 - split -&gt; binary + regression on each)</li> <li>Sequence models such as an LSTM extension on the features as <a href="https://www.nature.com/articles/s41598-020-58467-9">here</a> or <a href="https://www.researchgate.net/publication/323591215_Differentiation_among_prostate_cancer_patients_with_Gleason_score_of_7_using_histopathology_image_and_genomic_data">here</a>. However, one such model was included in the final ensemble.</li> <li>I also tried to cut 4 sets of tiles with a vertical and horizontal shift of 1/2 of the tile size. I observed no improvement either by using them during training (tried selecting the most informative ones / increasing the number of tiles / periodically selecting such different sets by epoch) or at inference. I also tried to cut 4 different sets of tiles for each image, create 4 samples from each one on the fly and feed them as independent samples - no improvement</li> <li>Using only non-empty tiles / balancing all samples as 90% non-empty + 10% always empty to standardize the input type</li> <li>Inference and averaging on the 36 "standard" tiles + the most important 18 with 2 variants of shift worked quite well for many of the models on CV but didn't help on LB</li> <li>In some experiments I noticed that 36 is not always the optimal number of tiles, but it didn't always work</li> <li>TTA with brightness/contrast/scaling</li> <li>Using the segmentation masks (I tried to use only the original masks, didn't try to pseudo-label Karolinska with Radboud-like masks - maybe that was the key)</li> <li>I could have forgotten to mention something else that I tried, but for sure I just didn't find the golden seed!</li> </ul> <p>What was particularly annoying is that common ensembling didn't really work, and often the inclusion of strong single models improved neither CV nor LB, but who would expect that with QWK!</p> <p>And what is surprising, my best private submission (<strong>0.932</strong>) is a single model that wasn't even included in the ensemble.. </p> <p>At least I'm happy with the stable and robust solution, which consistently gives 0.91 on local CV, public and private LB.</p> <p>Cheers, peace, bisou!</p>
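<p>The gated-attention pooling mentioned in item c) of the architecture section is a known multiple-instance-learning building block (gated attention in the sense of Ilse et al., 2018); below is a minimal PyTorch sketch of such a module applied to per-tile features. The dimensions, layer sizes and placement are illustrative assumptions, not the author's exact configuration.</p>
```
import torch
import torch.nn as nn

class GatedAttentionPool(nn.Module):
    """Pools a bag of tile features (batch, n_tiles, dim) into one feature (batch, dim)."""
    def __init__(self, dim: int = 512, hidden: int = 128):
        super().__init__()
        self.v = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh())
        self.u = nn.Sequential(nn.Linear(dim, hidden), nn.Sigmoid())
        self.w = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.w(self.v(x) * self.u(x))   # (batch, n_tiles, 1) attention logits
        attn = torch.softmax(scores, dim=1)      # weights over the tiles
        return (attn * x).sum(dim=1)             # weighted sum of tile features
```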
Prostate cANcer graDe Assessment (PANDA) Challenge
16th place solution (all you need to know)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I want to thank Kaggle and the organizers for hosting such an interesting competition. And I would like to thank my teammates <a href="/mpware">@mpware</a> , <a href="/tikutiku">@tikutiku</a>, <a href="/phoenix9032">@phoenix9032</a> and <a href="/virajbagal">@virajbagal</a> for such an interesting competition journey. And congratulations to all the winners! It's been a great competition; my team has spent a lot of time on it and I am glad to share that all the hard work paid off.</p> <h1>What worked:</h1> <h2>Image Preprocessing</h2> <ul> <li>We experimented with a lot of tiling strategies, including those publicly shared by <a href="/iafoss">@iafoss</a> and <a href="/akensert">@akensert</a>, and ended up using both in our training pipeline. (25x256x256) worked best for us.</li> <li>We also randomly replaced white tiles (tiles with colour average &gt; 240) with other tiles, and it gave a boost in our CV (a small sketch is shown after this write-up).</li> <li>Removing duplicates (or adding them only to the train set), and removing a few noisy images as mentioned <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/151323">here</a> helped a little.</li> </ul> <h2>Augmentations</h2> <p>Our models are trained with a wide range of augmentations. A few of them are shown below:</p>
```
transforms_train = albumentations.Compose([
    albumentations.HorizontalFlip(p=0.3),
    albumentations.VerticalFlip(p=0.3),
    albumentations.Transpose(p=0.3),
    albumentations.RandomGridShuffle(),
    albumentations.OneOf([
        albumentations.GaussianBlur(blur_limit=1),
        albumentations.MotionBlur(),
    ], p=0.01),
])

# use this after initial training for 40 epochs
transforms_train_hard = albumentations.Compose([
    albumentations.HorizontalFlip(p=0.5),
    albumentations.VerticalFlip(p=0.5),
    albumentations.Transpose(p=0.3),
    albumentations.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.2, rotate_limit=5,
                                    border_mode=cv2.BORDER_CONSTANT, p=0.1),
    albumentations.RandomGridShuffle(),
])
```
<h2>Ensemble</h2> <p>Our final solution is a simple average of the following models. For Karolinska: <code> 1. ResNet34 2. EfficientNet-B0 3. EfficientNet-B1 </code> For Radboud: <code> 1. EfficientNet-B0 2. ResNet34 3. EfficientNet-B1 4. UNet with EfficientNet-B1 backbone, segmentation head + classification head </code></p> <h1>The short story about UNet models (MPWARE speaking):</h1> <p>The main part of the team was working on CNN approaches without masks, so I decided to focus only on models involving masks to try to add more diversity to our ensemble. The idea was that segmentation should help the model discover what matters for our target. As the masks differed between Radboud (0 to 5) and Karolinska (0 to 2), UNet models were trained separately per provider. The input was a 25x128x128 tile-based image from half of the medium resolution (mask built accordingly). Around 100 WSIs had missing masks, so it was a chance to create, together with some additional WSIs, a balanced hold-out dataset to follow the correlation between CV, hold-out and LB. If all of them correlate, then our models could be considered stable and safe for the private dataset. 
After a few trainings we succeeded in getting CV and hold-out correlated, and we noticed that LB was always better. We also noticed that CV was quite bad on Karolinska and quite good on Radboud. After some investigation and attempts to improve the Karolinska masks, it became obvious that the segmentation approach would not work for Karolinska. However, it had the potential to bring a boost on Radboud, so more models were trained by applying random-density (between 0.2 and 0.9) tile selection to cover as many tile configurations as possible. This procedure provided a nice boost of +0.01 but required around 128 epochs. 4 folds of such UNet/Radboud models were integrated into the ensemble (averaging on the classification head only). Finally, it seems to have contributed to stabilizing our ensemble.</p> <h1>What did not work:</h1> <ol> <li>Removing gray backgrounds from Radboud images.</li> <li>Training models with Multiple Sample Dropout.</li> <li>Using H&amp;E normalizations and augmentations.</li> <li>Relabelling bad labels.</li> <li>Models with a heavy base like EfficientNet-B3, B4, B5, etc.</li> <li>Various RegNetY and RegNetX variants, from 080 up to 32.</li> <li>Adding data_provider as a feature.</li> <li>Adding an extra head to predict <code>isup_grade</code> from <code>gleason_scores</code>.</li> <li>ArcFace loss</li> <li>Multitask learning with more weight on the Gleason score base loss</li> </ol>
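<p>The white-tile replacement trick from the Image Preprocessing section can be sketched as follows. This is an illustrative reconstruction under an assumed input (a list of HxWx3 uint8 tiles) and an assumed replacement probability; only the mean-value threshold of 240 is taken from the write-up.</p>
```
import random
import numpy as np

def replace_white_tiles(tiles: "list[np.ndarray]", thresh: float = 240.0,
                        p: float = 0.5) -> "list[np.ndarray]":
    """Randomly swap mostly-white tiles (mean pixel value > thresh) for tissue tiles."""
    tissue_idx = [i for i, t in enumerate(tiles) if t.mean() <= thresh]
    if not tissue_idx:          # slide with no tissue tiles at all: leave it alone
        return tiles
    out = list(tiles)
    for i, t in enumerate(tiles):
        if t.mean() > thresh and random.random() < p:
            out[i] = tiles[random.choice(tissue_idx)]
    return out
```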
Prostate cANcer graDe Assessment (PANDA) Challenge
18th place solution: DenseNet + RNN based
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to the organizers, the Kaggle team and all of the participating kagglers!</p> <p>That's the first time I have placed so high! I'm really glad that all of the work I put in over these 3 months has been rewarded. And as usual I learned a lot during this challenge!</p> <p>My solution is quite different from the concat-pooling-based mainstream approach (by Iafoss).</p> <p>It is:</p> <p>0) Image similarity clustering via image hashing (splitting clusters into train/val sets, not individual images)<br> 1) Rotation of the whole (middle-resolution) image to an arbitrary angle, with crop prevention<br> 2) Extraction of tissue tiles (256x256)<br> 3) Global Contrast Normalization (across all of the tiles extracted from a single image; sketched below)<br> 4) DenseNet121 backbone (ImageNet pretrained) -&gt; dense feature extractor -&gt; 2 GRU layers -&gt; single-head ISUP grade regression (log-cosh loss)<br> 5) Multiple generations of discarding the "hard or wrongly labelled" images by an MAE &gt; 2.5 threshold<br> 6) 5-fold CV during training, keeping the gleason_score frequencies balanced while splitting train and validation<br> 7) 3-stage training: backbone frozen with a long tile sequence of 64 tiles; everything unfrozen with a shorter tile sequence of 16 tiles; backbone frozen with a long tile sequence of 64 tiles again<br> 8) Small training batch size of 2 (for its regularizing effect)</p>
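<p>Step 3, Global Contrast Normalization computed across all tiles of one slide (rather than per tile), can be sketched as below. The array layout and the epsilon are assumptions for illustration; the point is simply that the statistics are shared by all tiles of a slide, so tiles keep their relative brightness and contrast differences.</p>
```
import numpy as np

def global_contrast_normalize(tiles: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """tiles: (n_tiles, H, W, C) float array holding all tiles of ONE slide."""
    mean = tiles.mean()          # single mean over the whole slide's tiles
    std = tiles.std()            # single std over the whole slide's tiles
    return (tiles - mean) / (std + eps)
```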
Prostate cANcer graDe Assessment (PANDA) Challenge
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to everyone and to the host of the competition! I am surprised that I got 3rd place on the private leaderboard. I was just lucky 🙂 . I stopped improving my solution 1 month ago, because I had problems with my GPU and didn't want to spend more credits on AWS, since I had not seen any improvement on LB or CV.</p> <p>I experimented with different networks, my own custom tile cropping, and hard augmentations. But the best results I got came from a simple solution based on <a href="/haqishen">@haqishen</a>'s <a href="https://www.kaggle.com/haqishen/train-efficientnet-b0-w-36-tiles-256-lb0-87">kernel</a>. I trained 2 EfficientNet-B0 models and rounded the logits before averaging them; this approach gave me 0.880 on public and 0.934 on private.</p> <p>GitHub with my experiments: <a href="https://github.com/Dipet/kaggle_panda">https://github.com/Dipet/kaggle_panda</a></p>
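<p>The "round the logits before averaging" step is unusual enough to deserve a tiny sketch. This is an illustrative reading that assumes each model outputs one regression value per slide, which would normally be rounded to an ISUP grade.</p>
```
import numpy as np

def round_then_average(pred_a: np.ndarray, pred_b: np.ndarray) -> np.ndarray:
    """Round each model's regression output to an integer grade first,
    then average, round again and clip to the valid ISUP range 0-5."""
    blended = (np.round(pred_a) + np.round(pred_b)) / 2.0
    return np.clip(np.round(blended), 0, 5).astype(int)
```
<p>Rounding each model first makes the ensemble behave like a vote between two already-discretized predictions, which can be more robust to one model's outliers than averaging the raw outputs.</p>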