Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
121656 | 2 | null | 121643 | 2 | null | You describe that the correlation between survival rate and the variable "SibSp" is first increasing and then decreasing. In this case a logistic regression seems like the wrong choice: the relationship is clearly not monotonic.
A logistic regression models the log-odds of the outcome as a linear function of your variables. This captures some specific non-linear relationships, but the relationship is still monotonic: more of one variable always implies more (or always less) of the other. This doesn't fit your example.
In this example I would suggest binning the "SibSp" variable, that is, mapping the numbers to a finite collection of groups (say between 0 and 1, between 1 and 2, and so forth) and then using these bins as a categorical variable in your model.
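A minimal sketch of that binning step (my own illustration, assuming a pandas DataFrame with a "SibSp" column; the bin edges are just an example):
```
import pandas as pd

# Hypothetical data frame with a SibSp column (number of siblings/spouses aboard)
df = pd.DataFrame({"SibSp": [0, 1, 1, 2, 3, 5, 8]})

# Map the counts to a small number of bins and treat them as categories
df["SibSp_bin"] = pd.cut(df["SibSp"], bins=[-1, 0, 1, 2, 10], labels=["0", "1", "2", "3+"])

# One-hot encode the bins so they can enter the regression as a categorical variable
features = pd.get_dummies(df["SibSp_bin"], prefix="SibSp")
print(features)
```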
| null | CC BY-SA 4.0 | null | 2023-05-20T06:26:08.273 | 2023-05-20T06:26:08.273 | null | null | 137136 | null |
121657 | 1 | null | null | 0 | 9 | I have been trying to train a Resnet model to classify Diabetic Retinopathy images into binary classes. The dataset consists of around 35k images. The val loss and accuracy does seem to behave weirdly for me. Here's my model:
```
from tensorflow import keras
from tensorflow.keras.layers import Input, Dense, Dropout, GlobalAveragePooling2D
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

input_layer = Input(shape = (224,224,3))
base_model = ResNet50(include_top = False, input_tensor = input_layer, weights = "imagenet")
x = GlobalAveragePooling2D()(base_model.output)
x = Dropout(0.5)(x)
x = Dense(32, activation = "relu")(x)
x = Dropout(0.3)(x)
x = Dense(32, activation = "relu")(x)
# x = Dropout(0.15)(x)
out = Dense(2, activation = 'softmax')(x)
model = Model(inputs = input_layer, outputs = out)
optimizer = keras.optimizers.Adam(learning_rate = 1e-4)
es = EarlyStopping(monitor='val_loss', mode='min', patience = 5, restore_best_weights=True)
rlrop = ReduceLROnPlateau(monitor='val_loss', mode='min', patience = 3, factor = 0.5, min_lr=1e-6)
callback_list = [es]
model.compile(optimizer = optimizer, loss = "binary_crossentropy", metrics = ["accuracy"])
history = model.fit(train_batches,epochs = 30, validation_data = val_batches, callbacks = callback_list)
```
Here's the model summary:
```
conv5_block3_out (Activation) (None, 7, 7, 2048) 0 ['conv5_block3_add[0][0]']
global_average_pooling2d (Glob (None, 2048) 0 ['conv5_block3_out[0][0]']
alAveragePooling2D)
dropout (Dropout) (None, 2048) 0 ['global_average_pooling2d[0][0]'
]
dense (Dense) (None, 32) 65568 ['dropout[0][0]']
dropout_1 (Dropout) (None, 32) 0 ['dense[0][0]']
dense_1 (Dense) (None, 32) 1056 ['dropout_1[0][0]']
dense_2 (Dense) (None, 2) 66 ['dense_1[0][0]']
==================================================================================================
Total params: 23,654,402
Trainable params: 23,601,282
Non-trainable params: 53,120
__________________________________________________________________________________________________
```
I did try changing the units of my dense layers (tried up to 512). Here's the loss and accuracy plot (the small number of epochs is due to early stopping):
[](https://i.stack.imgur.com/r2Pye.png)
What could I specifically do to have the val loss not spike?
| Validation Loss not decreasing for RESNet model | CC BY-SA 4.0 | null | 2023-05-20T07:00:52.957 | 2023-05-20T07:00:52.957 | null | null | 150039 | [
"machine-learning",
"cnn",
"image-classification"
] |
121658 | 1 | null | null | 0 | 11 | Is there a Python library that supports abstractive summarization? (Excluding cloud-based models like GPT or ChatGPT). We can perform extractive summarization easily using the code below:
```
!pip3 install transformers==4.11.3
!pip3 install summarizer
!pip3 install bert-extractive-summarizer
!pip3 install keybert
!pip3 install keybert[flair]
!pip3 install keybert[gensim]
!pip3 install keybert[spacy]
!pip3 install keybert[use]
!pip3 install spacy
!pip3 install "gensim==3.8.3"
from summarizer import Summarizer, TransformerSummarizer

body = "..."  # the long input text to be summarized
GPT2_model = TransformerSummarizer(transformer_type="GPT2", transformer_model_key="gpt2-medium")
GPT2_summary = ''.join(GPT2_model(body, ratio=0.1))
print(GPT2_summary)
```
| Library for Abstractive Summarization | CC BY-SA 4.0 | null | 2023-05-20T07:18:43.777 | 2023-05-20T08:13:12.237 | null | null | 7812 | [
"transformer",
"automatic-summarization"
] |
121659 | 2 | null | 121654 | 1 | null | In the attention blocks, Keys and Values are of the same length, but Queries are not necessarily of the same length as the former. For instance, in the encoder-decoder attention blocks, the Keys and Values come from the encoder representations while the Queries come from the decoder representations (which are computed from a completely different sequence than the encoder ones).
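A small shape check (my own sketch, not part of the original answer) showing that the query length can differ from the key/value length in plain scaled dot-product attention:
```
import torch

d_model = 64
queries = torch.randn(1, 10, d_model)  # e.g. 10 decoder positions
keys = torch.randn(1, 25, d_model)     # e.g. 25 encoder positions
values = torch.randn(1, 25, d_model)   # must match the number of keys

scores = queries @ keys.transpose(-2, -1) / d_model ** 0.5  # shape (1, 10, 25)
weights = scores.softmax(dim=-1)                            # one distribution per query
output = weights @ values                                   # shape (1, 10, 64): one vector per query
print(output.shape)
```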
In the Transformer model, the only factor limiting the length of the inputs is the positional embeddings (see [this answer](https://datascience.stackexchange.com/a/120602/14675) for details). In the ALiBi paper, they replace them with a non-trainable approach and, experimentally, the approach is shown to extrapolate to longer sequences than those seen during training.
| null | CC BY-SA 4.0 | null | 2023-05-20T08:05:41.090 | 2023-05-20T08:05:41.090 | null | null | 14675 | null |
121660 | 2 | null | 121658 | 0 | null | HuggingFace T5 can perform abstractive summarization. There's a really good tutorial here: [https://huggingface.co/docs/transformers/tasks/summarization](https://huggingface.co/docs/transformers/tasks/summarization) which includes finetuning, but this is optional.
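For instance, a minimal sketch (my own, assuming the `transformers` library and a PyTorch backend are installed):
```
from transformers import pipeline

# "summarization" pipeline with a T5 checkpoint; the generated summary is abstractive
summarizer = pipeline("summarization", model="t5-small")

text = "Paste the long article you want to summarize here."
result = summarizer(text, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```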
| null | CC BY-SA 4.0 | null | 2023-05-20T08:13:12.237 | 2023-05-20T08:13:12.237 | null | null | 146483 | null |
121661 | 1 | null | null | 0 | 19 | I am working on a project involving the K-Means clustering algorithm, and I am trying to prove the equivalence between different formulations of the objective function. Specifically, I want to show the equivalence between the original objective function:
$$ \arg \min_{\{\mathcal{D}_k\}, \{\boldsymbol{\mu}_k\}} \sum_{k=1}^{K} \sum_{\boldsymbol{x}_i \in \mathcal{D}_k} \|\boldsymbol{x}_i - \boldsymbol{\mu}_k\|_2^2
$$
and two alternative formulations:
- $$\arg \min_{\{\mathcal{D}_k\}} \sum_{k=1}^{K} \frac{1}{|\mathcal{D}_k|} \sum_{\boldsymbol{x}_i, \boldsymbol{x}_j \in \mathcal{D}_k} \|\boldsymbol{x}_i - \boldsymbol{x}_j\|_2^2 $$
- $$\arg \min_{\{\boldsymbol{\mu}_k\}} \sum_{i=1}^{N} \min_{k} \|\boldsymbol{x}_i - \boldsymbol{\mu}_k\|_2^2$$
where:
- $\mathcal{D}_k$ represents the $k$-th cluster, which contains a subset of the data points
- $\boldsymbol{\mu}_k$ represents the $k$-th centroid.
I have been struggling to prove the equivalence between these formulations. I have attempted expanding the expressions and rearranging terms, but I haven't been able to establish the desired equivalence.
Could someone provide guidance on how to demonstrate the equivalence between these objective functions? Any help or insights would be greatly appreciated.
Thank you in advance!
| Equivalence between K-Means objective functions proof | CC BY-SA 4.0 | null | 2023-05-20T09:48:07.487 | 2023-05-20T09:48:07.487 | null | null | 150040 | [
"optimization",
"unsupervised-learning",
"k-means"
] |
121662 | 1 | null | null | 1 | 36 | I am working on a binary image classification task in which I have greyscale images of size (1, 224, 224) (all normalized between 0 and 1) and a set of labels (0 or 1). I have around 2.6k images with labels, but while training the loss is increasing instead of decreasing.
This is my model:
```
import torch.nn as nn
import torch.nn.functional as F
class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional layers (1, 224, 224)
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(3, 3), stride=1, padding=1)
        self.conv2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(3, 3), stride=1, padding=1)
        self.conv3 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=(3, 3), padding=1)
        # Fully connected layers
        self.fc1 = nn.Linear(in_features=64 * 28 * 28, out_features=512)
        self.fc2 = nn.Linear(in_features=512, out_features=256)
        self.fc3 = nn.Linear(in_features=256, out_features=32)
        self.fc4 = nn.Linear(in_features=32, out_features=1)

    def forward(self, X):
        X = F.relu(self.conv1(X))
        X = F.max_pool2d(X, 2)
        X = F.relu(self.conv2(X))
        X = F.max_pool2d(X, 2)
        X = F.relu(self.conv3(X))
        X = F.max_pool2d(X, 2)
        X = X.view(X.shape[0], -1)
        X = F.relu(self.fc1(X))
        X = F.relu(self.fc2(X))
        X = F.relu(self.fc3(X))
        X = self.fc4(X)
        return X
```
and this is the training loop:
```
hope = Classifier()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(hope.parameters(), lr=0.001)
epochs = 100
hope = hope.to(device)
loss_fn = loss_fn.to(device)

for epoch in range(epochs):
    i = 0
    loss_c = []
    hope.train()
    for image, label in iter(ctrain_loader):
        image = image.view((-1, 1, 224, 224))
        pred = hope(image)
        label = label.view((-1, 1))
        label = label.float()
        pred = pred.float()
        loss = loss_fn(pred, label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    hope.eval()
    for image, label in iter(cval_loader):
        image = image.view((-1, 1, 224, 224))
        pred = hope(image)
        print(pred)
        label = label.view((-1, 1))
        label = label.float()
        pred = pred.float()
        #pred = torch.sigmoid(pred)
        loss = loss_fn(pred, label)
        loss_c.append(loss.cpu().detach().numpy())

    print(f"loss is : {np.array(loss_c).mean()}")
```
I am using a batch size of 8 for training, i.e. labels supplied have dimensions (8,1) and images are of size (8,1,224,224).
I have tried switching to plain BCE, but it didn't help. Also, one thing I have observed is that for all the images in a batch the prediction values are always the same (it might have something to do with the random weight initialization, but I haven't seen others handle this in their code, so I'm not sure).
| Convolution neural network loss increasing instead of decreasing | CC BY-SA 4.0 | null | 2023-05-20T11:18:34.887 | 2023-05-31T23:58:20.430 | null | null | 150042 | [
"machine-learning",
"python",
"convolutional-neural-network",
"pytorch",
"computer-vision"
] |
121663 | 2 | null | 121645 | 0 | null | Please note, the consumer market is a mystery to me. I know more about SARS-CoV-2 than about commonly understood consumer brands (I've heard of Coca-Cola).
Therefore - because I haven't a clue how this data will behave - I'd use an algorithm robust against variance, imbalance and sparse data. Thus it's got to be XGBoost. Still, I can't say whether a transformation would be needed; it looks like standard ordinal data to me, so no.
Question: if you want to identify which products/brands are most sensitive to postcode variation, and from those brand types which are connected with other brand choices:
- XGBoost - accuracy, AUC-ROC, precision/recall/F1
- feature selection
- interaction analysis.
The postcode is the training target, trained against brand preference. There's going to be missing data - possibly lots of it - but it will work regardless. So, for example, washing machines might top the list and max out on the associated weight: richer postcodes simply buy high-end machine brands, other postcodes buy economy brands. That might form an interaction with other domestic appliances like cookers and fridges.
The data type might be 'brand' so just switch my term "cooker" for a given "cooker/domestic appliance brand" (I don't know any cooker nor appliance brands).
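A minimal sketch of that setup (my own illustration, not from the original answer; the file and column names are hypothetical):
```
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Hypothetical table: one row per household, brand choices as columns, postcode group as target
df = pd.read_csv("brand_choices.csv")
X = pd.get_dummies(df.drop(columns="postcode_group"))
y = df["postcode_group"].astype("category").cat.codes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="mlogloss")
model.fit(X_train, y_train)

print(f1_score(y_test, model.predict(X_test), average="macro"))
# Feature importances hint at which brand choices carry the most classification power
print(sorted(zip(model.feature_importances_, X.columns), reverse=True)[:10])
```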
Caveats
The concern with brand choice is that there will be a frequency issue within each brand, so the variance will not be homogeneous for certain parts of the data but will be for other parts. Take cars: postcodes in certain areas are going to buy lots of cars, more frequently. On the other hand, essentials like washing powder or toothpaste are probably exempt. It would need some thought.
If there are thousands of postcodes this isn't going to work; there would need to be an external criterion - like house price - to group equivalent postcodes together. If that approach were used, there would need to be a control against geographic bias. If there were a limited number of postcodes, that's fine.
Once you've got your weights, certain consumer choices will have stronger classification power in identifying a postcode, enabling targeting of a specific consumer choice in the future - targeted ads, stuff like that.
| null | CC BY-SA 4.0 | null | 2023-05-20T14:23:04.473 | 2023-05-20T14:50:09.937 | 2023-05-20T14:50:09.937 | 67203 | 67203 | null |
121664 | 2 | null | 85220 | 0 | null | The squared penalty terms in Ridge Regression mean that when the $\beta$ coefficients shrink to small values << 1, their contribution to the penalty becomes essentially/effectively zero - and thus they do not need to be shrunk all the way to zero.
However, for Lasso with its direct/linear penalty on the coefficients, the shrinkage needs to actually bring them all the way to zero to have the same effect on the penalty sum.
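A quick way to see this (my own illustration, not part of the original answer) is to compare the gradients of the two penalties with respect to a single coefficient $\beta$:
$$\frac{\partial}{\partial \beta}\left(\lambda \beta^2\right) = 2\lambda\beta \to 0 \text{ as } \beta \to 0, \qquad \frac{\partial}{\partial \beta}\left(\lambda |\beta|\right) = \lambda\,\operatorname{sign}(\beta),$$
so the ridge penalty's pull fades away as a coefficient shrinks, while the lasso penalty keeps pushing with constant strength until the coefficient hits exactly zero.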
| null | CC BY-SA 4.0 | null | 2023-05-20T14:28:52.793 | 2023-05-20T14:28:52.793 | null | null | 14518 | null |
121665 | 1 | null | null | 1 | 9 | When researching online, I keep finding that Xavier/Glorot initialization is:
[](https://i.stack.imgur.com/MoMLf.png)
[](https://i.stack.imgur.com/t6e4F.png)
however, the original paper by Glorot said that this was a common initialization strategy that they soon found did not work great:
[](https://i.stack.imgur.com/kfFyB.png)
and instead they came up with a new initialization scheme:
[](https://i.stack.imgur.com/6ayU2.png)
To me, these look like two completely different schemes. The first is only a function of the number of inputs to a layer, while the second is a function of both the number of inputs AND the number of outputs. I don't know if the normal vs uniform distribution makes up for the difference.
So my questions are:
- which one is the ACTUAL Xavier initialization?
- what are the differences between the two schemes?
- which one do I use?
- what did not work about the first scheme that prompted Glorot to come up with the second one?
| confusion with Xavier Initiliazation definition | CC BY-SA 4.0 | null | 2023-05-20T16:58:29.897 | 2023-05-20T17:09:09.083 | 2023-05-20T17:09:09.083 | 150050 | 150050 | [
"machine-learning",
"neural-network",
"weight-initialization",
"definitions",
"glorot-initialization"
] |
121666 | 1 | 121667 | null | 0 | 42 | Upon Googling "Maxpool ReLU order" or similar, I've found many people saying this order does not affect the result, i.e.:
```
MaxPool(Relu(x)) = Relu(MaxPool(x))
```
Here are a small number of examples of people saying this:
[https://stackoverflow.com/questions/35543428/activation-function-after-pooling-layer-or-convolutional-layer](https://stackoverflow.com/questions/35543428/activation-function-after-pooling-layer-or-convolutional-layer)
[https://github.com/tensorflow/tensorflow/issues/3180](https://github.com/tensorflow/tensorflow/issues/3180)
[https://www.quora.com/In-most-papers-I-read-the-CNN-order-is-convolution-relu-max-pooling-So-can-I-change-the-order-to-become-convolution-max-pooling-relu](https://www.quora.com/In-most-papers-I-read-the-CNN-order-is-convolution-relu-max-pooling-So-can-I-change-the-order-to-become-convolution-max-pooling-relu)
[https://towardsdatascience.com/convolution-neural-networks-a-beginners-guide-implementing-a-mnist-hand-written-digit-8aa60330d022](https://towardsdatascience.com/convolution-neural-networks-a-beginners-guide-implementing-a-mnist-hand-written-digit-8aa60330d022)
To be clear, I'm completely aware that there could be a slight speed difference, but what I'm asking about here is the computation result, not the speed.
For example, consider the following:
[](https://i.stack.imgur.com/j3U58.jpg)
How can the general consensus be that the ReLU/MaxPool order does not affect the computation result when it's easy to come up with a quick example where it does appear to affect the computation result?
For what it's worth, ChatGPT seems to go against the general consensus:
[](https://i.stack.imgur.com/vMWOr.png)
| Why do people say ReLU vs MaxPool order does not change the result? | CC BY-SA 4.0 | null | 2023-05-20T19:40:31.840 | 2023-05-20T20:10:44.240 | null | null | 50921 | [
"neural-network"
] |
121667 | 2 | null | 121666 | 4 | null |
- ChatGPT often speaks bullshit. Do not rely on it.
- Your example is wrong. On the second computation, you are computing the absolute value, not ReLU: ReLU(-5) = 0 and ReLU(-3) = 0. The result is 2, which is the same as the first computation.
- $\max(\mathrm{ReLU}(x_1),\dots,\mathrm{ReLU}(x_n)) = \mathrm{ReLU}(\max(x_1,\dots,x_n))$, because ReLU is monotonically non-decreasing (a quick numerical check is sketched below).
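A quick numerical check of point 3 (my own sketch, using PyTorch):
```
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 8, 8)
a = F.max_pool2d(F.relu(x), 2)  # ReLU first, then max-pool
b = F.relu(F.max_pool2d(x, 2))  # max-pool first, then ReLU
print(torch.allclose(a, b))     # True, because ReLU is monotonically non-decreasing
```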
| null | CC BY-SA 4.0 | null | 2023-05-20T20:10:44.240 | 2023-05-20T20:10:44.240 | null | null | 14675 | null |
121669 | 1 | null | null | 0 | 39 | I have Dataset (CSV format).
[](https://i.stack.imgur.com/JJxev.png)
My main goal is to do named entity recognition and use algorithms that are today's SOTA, for example according to the website nlpprogress.com.
One of the SOTA is this repository: [https://github.com/ZihanWangKi/CrossWeigh/tree/master](https://github.com/ZihanWangKi/CrossWeigh/tree/master)
Now, from what I've seen, for named entity recognition I need to create a file in BIO format, which I don't have right now.
What I have in hand is a CSV with the fields divided under their respective headers.
The question is how do I create such a dataset with the appropriate tags:
B-Skill, I-SKILL, B-EDU, I-EDU, B-EXP, I-EXP
| BIO Format (Skills,Qualification,Experience) | CC BY-SA 4.0 | null | 2023-05-21T07:31:01.250 | 2023-05-26T11:12:54.497 | null | null | 150062 | [
"nlp",
"named-entity-recognition"
] |
121670 | 1 | null | null | 1 | 15 | I am studying an article about implementation of genetic algorithm. Here is the parameters which are used in this article:
[](https://i.stack.imgur.com/8VSra.png)
As I know, in the roulette selection method, the probability of selecting individuals is based on their cost function values: the more optimal an individual is, the higher its probability of being selected. So, what is the meaning of selection probability here?
UPDATE
Here is the link to the article:
[Thinned Concentric Circular Array Antennas Synthesis Using Genetic Algorithm ](https://ieeexplore.ieee.org/document/6148734)
| What is the meaning of selection probability in genetic algorithm with roullete selection method | CC BY-SA 4.0 | null | 2023-05-21T08:17:45.113 | 2023-05-21T17:19:33.343 | 2023-05-21T17:19:33.343 | 143282 | 143282 | [
"genetic-algorithms"
] |
121671 | 1 | null | null | 0 | 13 | Because it computes the correlation coefficient of degrees and the correlation coefficient of constant arrays is not defined, `networkx` library returns `np.nan` value.
But if the degree assortativity coefficient measures the extent to which nodes with similar degrees are connected to each other, then - since in a complete graph all nodes have the same degree and are connected to each other - it should be perfectly assortative.
| What is degree assortativity coefficient of a complete undirected graph? | CC BY-SA 4.0 | null | 2023-05-21T10:40:09.277 | 2023-05-21T10:40:41.510 | 2023-05-21T10:40:41.510 | 137206 | 137206 | [
"correlation",
"graphs",
"mathematics",
"social-network-analysis"
] |
121672 | 1 | 121674 | null | 0 | 33 | I have a dataframe with different `dtypes` like int, float, object, datatime etc. I am performing `data cleaning`, to list or find duplicate column names in the dataframe.
Duplicate criteria as below:
- same column names or
- columns having same data values
I tried using the transpose approach `df.T.duplicated()` to list duplicate column names, but it seems slow for a big dataframe.
I came to know that we can use `pivot`, `pivot_table` or `corr` to list duplicate column names.
Can someone explain how to use and interpret them? Or is there any other way to do it?
| Pandas: To find duplicate columns | CC BY-SA 4.0 | null | 2023-05-21T11:00:02.427 | 2023-05-21T11:49:29.900 | null | null | 148603 | [
"pandas",
"data-cleaning"
] |
121673 | 1 | null | null | 1 | 20 | Does cross_val_score in scikit-learn split the data consistently or randomly? I noticed that cross_val_score lacks a random_state parameter, but the documentation mentions stratified k-fold cross-validation, which is implemented in the StratifiedKFold class that does have a random_state parameter for shuffling. So, how exactly does cross_val_score work? Does it split the data in a specific order or does it shuffle it? Furthermore, is the distribution of classes in each fold the same as the distribution in the original dataset? Lastly, which class or function in scikit-learn do you use for cross-validation?
| scikit-learn cross_val_score randomness | CC BY-SA 4.0 | null | 2023-05-21T11:12:30.380 | 2023-05-21T22:50:51.937 | null | null | 149495 | [
"cross-validation"
] |
121674 | 2 | null | 121672 | 1 | null | To list duplicate columns by name in a Pandas DataFrame, you can call the [duplicated](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.duplicated.html) method on the `.columns` property:
`df.columns.duplicated()`
This should be rather fast, unless you have an enormous number of columns in your data.
Finding duplicate columns by values is a bit tricky, and definitely slower. I wouldn't use any of `pivot`, `pivot_table` or `corr`, as they are slow as well for large data sets. Of the three, `corr` would be the most straightforward to detect duplicate columns - two identical columns would have a correlation of one.
I've found this StackOverflow question to contain some ideas, and I think that this [answer](https://stackoverflow.com/a/32961145/15052008) is what you are looking for.
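A small sketch of both checks (my own, not taken from the linked answer):
```
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [1, 2, 3], "c": [4, 5, 6]})

# Duplicate column names (boolean mask over the columns)
print(df.columns[df.columns.duplicated()].tolist())

# Duplicate column values: hash each column first, then confirm with equals();
# this avoids the expensive transpose of df.T.duplicated() on wide frames
hashes = {col: pd.util.hash_pandas_object(df[col], index=False).sum() for col in df.columns}
seen, dupes = {}, []
for col, h in hashes.items():
    if h in seen and df[col].equals(df[seen[h]]):
        dupes.append(col)
    else:
        seen[h] = col
print(dupes)  # ['b'], since its values match 'a'
```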
| null | CC BY-SA 4.0 | null | 2023-05-21T11:49:29.900 | 2023-05-21T11:49:29.900 | null | null | 142205 | null |
121675 | 1 | null | null | 0 | 3 | Does H2O AutoML parallelize different models when launched via train ? (or you should specify it somehow?)
If so, can you show an example ?
| H20 AutoML Parallelism | CC BY-SA 4.0 | null | 2023-05-21T18:54:11.637 | 2023-05-21T18:54:11.637 | null | null | 143954 | [
"python",
"parallel",
"automl",
"h2o"
] |
121676 | 1 | null | null | 1 | 32 | ```
import seaborn as sns
import pandas as pd
import numpy as nm
import matplotlib.pyplot as plt
df=sns.load_dataset('fmri')
x=df[['timepoint','subject']]
y=df['signal']
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_5388\1414350166.py in <module>
----> 1 X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
~\Anaconda3\envs\MACHINELEARNING\lib\site-packages\sklearn\model_selection\_split.py in train_test_split(test_size, train_size, random_state, shuffle, stratify, *arrays)
2415 raise ValueError("At least one array required as input")
2416
-> 2417 arrays = indexable(*arrays)
2418
2419 n_samples = _num_samples(arrays[0])
~\Anaconda3\envs\MACHINELEARNING\lib\site-packages\sklearn\utils\validation.py in indexable(*iterables)
376
377 result = [_make_indexable(X) for X in iterables]
--> 378 check_consistent_length(*result)
379 return result
380
~\Anaconda3\envs\MACHINELEARNING\lib\site-packages\sklearn\utils\validation.py in check_consistent_length(*arrays)
332 raise ValueError(
333 "Found input variables with inconsistent numbers of samples: %r"
--> 334 % [int(l) for l in lengths]
335 )
336
ValueError: Found input variables with inconsistent numbers of samples: [53940, 1064]
```
| guys, i am new to ML. i was trying to make linear regression model on fmri dataset. while passing x_train,y_train code , it shows me following error | CC BY-SA 4.0 | null | 2023-05-21T20:35:20.357 | 2023-05-22T13:00:10.977 | 2023-05-21T20:42:31.013 | 75157 | 150075 | [
"machine-learning"
] |
121677 | 1 | null | null | 1 | 31 | I am simulating virtual species on a 100x100 grid (the size for now). Each grid layer represents one environmental variable. The "suitability function" defines the probability of a presence at a given point based on the value of each variable (grid layer). Actual probabilities are used (using Bernoulli trials), rather than the threshold approach (where all probabilities above a given level are set to true). Stratified random sampling is used. The sampling and determination of whether a point is a presence or absence is done in the same step. Random forests are used to create a species distribution model (SDM).
Is there some way to do a power analysis which tells you what sample size you need to achieve some measurable metric, such as AUC-ROC or Brier score (or other type of metric)?
| Can you do a power analysis to determine the sample size for a virtual species simulation which is modeled using random forests? | CC BY-SA 4.0 | null | 2023-05-21T21:49:52.063 | 2023-05-25T22:24:48.283 | 2023-05-25T22:24:48.283 | 146059 | 146059 | [
"machine-learning",
"random-forest",
"hypothesis-testing",
"simulation"
] |
121678 | 2 | null | 121673 | 1 | null | `cross_val_score` is a convenience function which relies on `KFold` or `StratifiedKFold` (see [documentation](https://scikit-learn.org/stable/modules/cross_validation.html#cross-validation)):
> When the cv argument is an integer, cross_val_score uses the KFold or StratifiedKFold strategies by default, the latter being used if the estimator derives from ClassifierMixin.
This implies that by default `cross_val_score` won't split the data randomly, since by default `shuffle` is false for both [KFold](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html#sklearn.model_selection.KFold) and `StratifiedKFold`.
As per the documentation (see above), `StratifiedKFold` is used by `cross_val_score` if the classifier derives from `ClassifierMixin`. If so, the distribution of the classes in each fold will be approximately the same as the original distribution.
I tend to use `KFold` because it gives more control over what's being done.
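A minimal sketch of the two options (my own illustration):
```
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, StratifiedKFold

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

# cv=5 with a classifier: StratifiedKFold(n_splits=5) with shuffle=False under the hood
print(cross_val_score(clf, X, y, cv=5))

# Explicit control: shuffled, reproducible, stratified folds
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
print(cross_val_score(clf, X, y, cv=cv))
```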
| null | CC BY-SA 4.0 | null | 2023-05-21T22:50:51.937 | 2023-05-21T22:50:51.937 | null | null | 64377 | null |
121679 | 2 | null | 39985 | 0 | null | You can go two ways:
- Either you ensemble all your models' results with a Voting Regressor.
- Or you incrementally train your model.
Given your use case I would go for the incremental training. Nevertheless, not all scikit-learn models support incremental training. You can check the list of classifiers that support incremental [training here](https://scikit-learn.org/0.15/modules/scaling_strategies.html).
As you are using a simple LASSO model, you can use SGDRegressor with L1 regularization.
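A minimal sketch of the incremental option (my own illustration, with synthetic data):
```
import numpy as np
from sklearn.linear_model import SGDRegressor

model = SGDRegressor(penalty="l1", alpha=1e-4)
true_coef = np.array([1.0, 0.0, 2.0, 0.0, -1.0])

rng = np.random.default_rng(0)
for _ in range(10):  # e.g. ten chunks of data arriving over time
    X_batch = rng.normal(size=(100, 5))
    y_batch = X_batch @ true_coef + rng.normal(scale=0.1, size=100)
    model.partial_fit(X_batch, y_batch)

print(model.coef_)
```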
| null | CC BY-SA 4.0 | null | 2023-05-21T23:36:01.507 | 2023-05-21T23:36:01.507 | null | null | 149901 | null |
121680 | 1 | null | null | 0 | 11 | I have been experimenting with features derived using retrocausality (not to be confused with data leakage) in training models. Are there any examples of prior work in the literature where this form of feature engineering has yielded success?
| Features derived using retrocausality | CC BY-SA 4.0 | null | 2023-05-21T23:49:14.990 | 2023-05-21T23:49:14.990 | null | null | 150079 | [
"feature-engineering"
] |
121681 | 1 | null | null | 1 | 14 | I am using a GNN to solve a problem in which I have a query target and an undirected graph. My goal is to emit a subset of nodes in the graph (via a node-wise binary prediction) whose features sum to the target query. I figured this would be relatively easy to learn but I have been struggling quite a lot.
To simplify things to the most basic, I currently have a model comprised of 2 GATConv (hidden size 16 down to 1-dim output) layers from pytorch geometric separated by a ReLU activation. I don't use edge weights and nodes only have two features- the feature of interest, and the query target (so this second target feature is identical for all nodes in the graph). I believed this set up would be the simplest way to introduce the query but perhaps this is causing issues. I also wonder if this lack of features could be an issue as well- perhaps there isn't enough information to meaningfully differentiate nodes, though I'm not convinced why that would interfere with the simple objective.
Finally, I use the BCEWithLogitsLoss since the target subgraph is a binary mask as well as an Adam optimizer. I know this loss function is not ideal for my real goal and I do have a custom loss function ready to use but to minimize potential sources of bugs, I reverted to using the CE loss.
I have a very large dataset at my disposal but in order to diagnose the issues, I've been working with as few as 10 or even 1 graph which happens to be mirrored to force the model to overfit. When plotting the evolution of logits for this 1 graph scenario, I notice that values move consistently in a mirrored fashion, whether I train for 10 epochs or 100, which makes sense given how message passing works, and I believe eliminates the possibility of homogenization of node representations. Loss also consistently drops but it seems like the signal is not strong enough to emit anything but all 0s, perhaps because target node masks are sparse.
Any help or advice on how to proceed? I am primarily hoping to find out if I am fundamentally misunderstanding something about GNNs such that my set-up will always fail or if there are other suggested next steps for things to try. I will note I have tried the typical suggestions- modulating learning rate, diff loss functions (tried Focal Loss), deeper/wider architectures, GCNConv layers, out-of-the-box GCN model in pytorch-geometric, dataset simplification.
| Trouble Training GNN for Binary Node Classification Task | CC BY-SA 4.0 | null | 2023-05-22T01:22:32.767 | 2023-05-25T08:24:44.740 | null | null | 150081 | [
"graphs",
"graph-neural-network",
"pytorch-geometric",
"gnn"
] |
121682 | 1 | null | null | 0 | 16 | I am working on marketing mix models and I would like to know more about how to include and compute synergy in a marketing mix model. I know of one software package that uses this kind of model (a log-linear model), giving us synergy terms between all the marketing variables, but I don't see where it comes from and it does not seem to be just the addition of interaction terms (we are talking about a model with a lot of explanatory variables...).
I would like to know if you are aware of any marketing research papers on the subject; I would appreciate it.
Thank you a lot
| Marketing Mix modeling and synergy terms | CC BY-SA 4.0 | null | 2023-05-22T06:28:20.863 | 2023-05-22T06:28:20.863 | null | null | 149154 | [
"linear-regression",
"marketing",
"feature-interaction"
] |
121683 | 1 | 121684 | null | 0 | 23 | Can I consider 20% / 80% as a balanced dataset?
My target variable ratio is 62% and 37%. I hope it is a balanced dataset. Please let me know if I am wrong.
However, I would like to know, what is the minimum ratio to consider the data set as balanced for the classification algorithm?
| What is the minimum ratio to consider the data set as balanced for the classification algorithm? | CC BY-SA 4.0 | null | 2023-05-22T07:15:19.383 | 2023-05-22T07:24:25.280 | null | null | 150091 | [
"machine-learning",
"python",
"classification",
"pandas"
] |
121684 | 2 | null | 121683 | 1 | null | Here's a [table](https://developers.google.com/machine-learning/data-prep/construct/sampling-splitting/imbalanced-data) for 'degree of imbalance' for a binary classification problem. If you have multiple classes in your data set, I'd assume that if any of the classes deviates with a similar degree as explained in that link, you have an imbalanced data set (or, at the very least, imbalanced representation of that minority class):
|Degree of imbalance |Proportion of Minority Class |
|-------------------|----------------------------|
|Mild |20-40% of the data set |
|Moderate |1-20% of the data set |
|Extreme |<1% of the data set |
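As a quick check against the table above (my own sketch, not part of the original answer), you can compute the minority-class proportion of your target column directly:
```
import pandas as pd

# e.g. the 62%/37% split mentioned in the question
y = pd.Series([0] * 62 + [1] * 37)
print(y.value_counts(normalize=True))  # minority class ~0.37, i.e. at most a mild imbalance
```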
| null | CC BY-SA 4.0 | null | 2023-05-22T07:24:25.280 | 2023-05-22T07:24:25.280 | null | null | 142205 | null |
121685 | 1 | null | null | 1 | 11 | I'm working on a project to plug the good old speech recognition in my app. However, I wish to do it in my country's dialect which is not supported by the major APIs like Azure, AWS, etc.
My country's national language is supported by them and this dialect is pretty similar, so I'm wondering whether it is a good idea to customise these pre-trained models or whether I should just start from scratch.
Thanks and any help would be appreciated!
| Speech to Text for Unsupported Language | CC BY-SA 4.0 | null | 2023-05-22T08:24:17.607 | 2023-05-22T08:24:17.607 | null | null | 150093 | [
"nlp",
"transfer-learning",
"speech-to-text"
] |
121686 | 1 | 121719 | null | 1 | 26 | I stumbled into [this blog](http://www.clungu.com/Distilling_a_Random_Forest_with_a_single_DecisionTree/) which shows how a decision tree trained to overfit the predictions of a properly trained random forest model is able to generalize in pretty much the same way as the original random forest.
I'm interested in this as I'm implementing ML in embedded settings where a 1000-estimator random forest is simply not feasible, but a simpler tree with tens of branches may be doable.
My first question: is this too good to be true? The only downside I can see is that the resulting decision tree will be very large due to the overfitting process, but in any case I assume it to be simpler than the full random forest.
Secondary question: is there anything in literature that discusses this process in more detail?
| Distilling a Random Forest to a single DecisionTree, does it make sense? | CC BY-SA 4.0 | null | 2023-05-22T08:53:04.833 | 2023-05-24T04:40:54.557 | null | null | 70702 | [
"random-forest",
"decision-trees",
"knowledge-distillation"
] |
121687 | 1 | null | null | 0 | 48 | I have two time series that show the evolution of the number of papers published in PubMed: the total number of papers and the number of papers containing 'holy grail' in the text.
[](https://i.stack.imgur.com/m4gO3.png)
I want to test the hypothesis that the number of 'holy grail' papers will grow at the same rate as the total number of papers, so that the proportion of those papers to the total will be constant. What is the best way to test that?
| Stationarity of growth hypothesis | CC BY-SA 4.0 | null | 2023-05-22T09:18:44.557 | 2023-05-23T14:26:18.413 | null | null | 150095 | [
"time-series"
] |
121688 | 1 | null | null | 0 | 50 | I am currently working on a very imbalanced dataset:
- 24 million transactions (rows of data)
- 30,000 fraudulent transactions (0.1% of total transactions)
and I am using XGBoost as the model to predict whether a transaction is fraudulent or not. After tuning some hyperparameters via Optuna, I obtained the following results:
- F1 Score on Training Data : 0.5881226277372263
- F1 Score on Validation Data : 0.8699220352892901
- ROC AUC score on Training Data : 0.9991431591607794
- ROC AUC score on Validation Data : 0.9254554224474641
Although the ROC AUC scores are quite high, the F1 score on my training data is quite low and its ROC AUC score is abnormally high. I was wondering what is wrong with my model, or my data? Am I overfitting, and are these results acceptable?
| How to intrepret low F1 score and high AUC on training set? | CC BY-SA 4.0 | null | 2023-05-22T10:07:07.883 | 2023-05-22T13:09:07.663 | null | null | 147867 | [
"machine-learning",
"classification",
"xgboost",
"imbalanced-data"
] |
121689 | 2 | null | 121688 | 2 | null | Your model is overfitting. This is evident from the near-perfect ROC AUC score on the training data compared to the validation data. The ROC AUC score is a measure of the model's ability to distinguish between the positive and negative classes, while the F1 score is a measure of the model's precision and recall. The fact that the ROC AUC score is so much higher on the training data indicates that the model separates the two classes almost perfectly there, which is a sign of overfitting: the model is learning patterns in the training data that are not generalizable to the validation data.
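To illustrate the difference between the two metrics (my own sketch, not part of the original answer): ROC AUC only depends on how well the scores rank positives above negatives, while F1 depends on the chosen classification threshold, so the two can diverge sharply on imbalanced data such as yours.
```
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = np.array([0] * 990 + [1] * 10)
scores = np.concatenate([rng.random(990) * 0.6,          # negatives score between 0.0 and 0.6
                         0.4 + rng.random(10) * 0.6])    # positives score between 0.4 and 1.0

print(roc_auc_score(y_true, scores))                     # high: positives mostly rank above negatives
print(f1_score(y_true, (scores > 0.5).astype(int)))      # much lower: many negatives cross the threshold
```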
| null | CC BY-SA 4.0 | null | 2023-05-22T10:57:28.933 | 2023-05-22T13:09:07.663 | 2023-05-22T13:09:07.663 | 135031 | 135031 | null |
121690 | 2 | null | 121676 | 0 | null | In addition to what Nick ODell pointed out in the comments about X not being defined, the ValueError you're seeing is because you are calling train_test_split on two different sized arrays. Your X and y inputs to train_test_split need to be the same sizes, here they are 53940 and 1064 respectively. In other words, for every input, you need exactly one output. hth.
| null | CC BY-SA 4.0 | null | 2023-05-22T11:55:15.777 | 2023-05-22T11:55:15.777 | null | null | 146483 | null |
121691 | 2 | null | 121676 | 0 | null | Yes, please check `x` in `x=df[['timepoint','subject']]` versus `X` in `train_test_split(X...`
However,
```
df=sns.load_dataset('fmri')
```
This should be
```
df=pd.read_csv('fmri') # but I don't know the format for this data set
```
Just be sure that the script is run from the same directory as `fmri`
A small example data set would be cool. You are using both `sns` and `matplotlib` for plotting, but `sns` will not load a data set that's `pandas`.
| null | CC BY-SA 4.0 | null | 2023-05-22T13:00:10.977 | 2023-05-22T13:00:10.977 | null | null | 67203 | null |
121692 | 1 | null | null | 1 | 42 | Hello, I am doing a classification between two classes A and B and I have trained a CNN model. I have high accuracy on all three sets of data, i.e. training (98.7%), validation (99.3%) and test (98%), but it still cannot predict on real data. Could anyone please help to sort out the issue?
P.S. I have balanced data.
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.optimizers import Adam

train_augmented = ImageDataGenerator(rescale=1./255,
rotation_range = 40,
horizontal_flip = True,
zoom_range = 0.2
)
test_augmented = ImageDataGenerator(rescale=1./255)
validation_augmented = ImageDataGenerator(rescale=1./255)
training_set = train_augmented.flow_from_directory(train,
target_size=(150,150),
batch_size = 32,
class_mode = 'binary',
color_mode="rgb",
shuffle=True
)
test_set = test_augmented.flow_from_directory(test,
target_size=(150,150),
batch_size = 32,
class_mode = 'binary',
color_mode="rgb",
shuffle=False
)
validation_set = validation_augmented.flow_from_directory(validation,
target_size=(150,150),
batch_size = 32,
class_mode = 'binary',
color_mode="rgb",
shuffle=False
)
model = Sequential()
model.add(Conv2D(32,(3,3),activation='relu',input_shape=(150,150,3)))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.5))
model.add(Conv2D(64,(3,3),activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(256,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1,activation='sigmoid'))
model.summary()
model.compile(Adam(learning_rate=0.001),loss='binary_crossentropy',metrics=['accuracy'])
history = model.fit_generator(training_set,
epochs = 30,
validation_data = validation_set)
```
| High accuracy on test and validation data but still can not predict on real data | CC BY-SA 4.0 | null | 2023-05-22T13:17:43.900 | 2023-05-29T08:39:12.570 | null | null | 150105 | [
"machine-learning",
"deep-learning",
"keras",
"tensorflow",
"computer-vision"
] |
121693 | 1 | null | null | 0 | 30 | I am following keras example to classify time series using transformers. [Timeseries classification with a Transformer model](https://keras.io/examples/timeseries/timeseries_transformer_classification/)
The creation of the model is presented in the following code snippet:
```
from tensorflow import keras
from tensorflow.keras import layers


def transformer_encoder(inputs):
    # Normalization and Attention
    x = layers.LayerNormalization(epsilon=1e-6)(inputs)
    x = layers.MultiHeadAttention(key_dim=256, num_heads=4, dropout=0.25)(x, x)
    x = layers.Dropout(0.25)(x)
    res = x + inputs

    # Feed Forward Part
    x = layers.LayerNormalization(epsilon=1e-6)(res)
    x = layers.Conv1D(filters=4, kernel_size=1, activation="relu")(x)
    x = layers.Dropout(0.25)(x)
    x = layers.Conv1D(filters=inputs.shape[-1], kernel_size=1)(x)
    return x + res


def build_model(input_shape, num_transformer_blocks):
    inputs = keras.Input(shape=input_shape)
    x = inputs
    for _ in range(num_transformer_blocks):
        x = transformer_encoder(x)

    x = layers.GlobalAveragePooling1D(data_format="channels_first")(x)
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.4)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return keras.Model(inputs, outputs)
```
I am using a different dataset and the shape of my data is different too.
Also, my data does not have a fixed length, so I am trying to add masking to my model to ignore the missing time steps.
So far I have tried a few options, but none of them worked.
I tried to add a Masking layer after the input layer:
```
def build_model(input_shape, num_transformer_blocks):
    inputs = keras.Input(shape=input_shape)
    x = Masking(input)
    ...
```
And I also tried to compute the masking of the data manually and pass it as the `attention_mask` argument of `MultiHeadAttention`.
To verify that the masking didn't succeed, I replaced all the values in my dataset with a constant number (e.g. 500) and the model could still classify the data into the correct classes; I think it learned the length of the padding.
The shape of my data with padding is (15, 7).
How do I apply the masking to this model correctly?
| Keras masking with MultiHeadAttention | CC BY-SA 4.0 | null | 2023-05-22T13:24:45.860 | 2023-05-22T13:24:45.860 | null | null | 150108 | [
"keras",
"transformer",
"attention-mechanism",
"masking"
] |
121694 | 1 | 121695 | null | 2 | 60 | I am working with a dataset that comes in with nonsense field names in the first row, with the actual field names in the second row.
Currently I'm using this script:
```
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('CAR_551.csv')
df1 = df.iloc[1:,:]
print(df1.head(10))
df1.info()
```
The dataset appears to be successfully updated. When I print the dataset the first row is gone, however when I call df1.info() it still returns the original headers. Example output:
```
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 StartDate 10 non-null object
1 EndDate 10 non-null object
2 Status 10 non-null object
3 IPAddress 10 non-null object
4 Progress 10 non-null object
5 Duration (in seconds) 10 non-null object
6 Finished 10 non-null object
7 RecordedDate 10 non-null object
8 ResponseId 10 non-null object
9 RecipientLastName 0 non-null object
10 RecipientFirstName 0 non-null object
11 RecipientEmail 0 non-null object
12 ExternalReference 0 non-null object
13 LocationLatitude 10 non-null object
14 LocationLongitude 10 non-null object
15 DistributionChannel 10 non-null object
16 UserLanguage 10 non-null object
17 Q6#1_1 10 non-null object
18 Q6#1_2 10 non-null object
19 Q6#1_3 10 non-null object
20 Q6#1_4 10 non-null object
21 Q6#1_5 10 non-null object
22 Q6#1_6 10 non-null object
23 Q6#1_7 10 non-null object
24 Q6#1_8 10 non-null object
25 Q6#1_9 10 non-null object
26 Q6#2_1 10 non-null object
27 Q6#2_2 10 non-null object
28 Q6#2_3 10 non-null object
29 Q6#2_4 10 non-null object
30 Q6#2_5 10 non-null object
31 Q6#2_6 10 non-null object
32 Q6#2_7 10 non-null object
33 Q6#2_8 10 non-null object
34 Q6#2_9 10 non-null object
35 Q8 10 non-null object
36 Q9 10 non-null object
dtypes: object(37)
memory usage: 3.0+ KB
```
These are all field names from the first row of the dataset. Is there a way to get this to update and show the field names from row 2 of the data?
Can someone explain the underlying mechanism of what's going on here? How is the first row even stored in the new dataframe at all if I specified that only the rows from the second one onward should be included when I declared the variable?
| Why does pandas.dataframe.info() not update when I delete the first row of a dataset? | CC BY-SA 4.0 | null | 2023-05-22T13:50:24.953 | 2023-05-22T23:07:01.463 | null | null | 76182 | [
"pandas",
"data-cleaning",
"dataframe"
] |
121695 | 2 | null | 121694 | 1 | null | The solution is better achieved via,
```
df = pd.read_csv('CAR_551.csv', skiprows=[0])
```
---
I checked the solutions.
Solution 1
```
df.columns = df.iloc[0]
```
There's a problem because a blank line (or a line of junk) is now carried into the dataframe. The outcome will depend on what the first row of the CSV is, so at a minimum it will append a blank line above the dataframe.
Thus, `df.to_csv('myfile')` will start with a blank line, before the column headers. It's not clear what this is doing to internal dataframe operations. More seriously, it can also include elements of the first junk row of the CSV file. In my example I had a column name at iloc[24], axis=1 retained, which is junk. It appears the behaviour is unpredictable.
It would cause problems if this were reimported, because the same problem of the first row being junk continues.
Solution 2
```
df = pd.read_csv('CAR_551.csv', skiprows=[0])
```
This simply doesn't import the junk line, and no 'phantom' lines become part of the dataframe. The written output is correct: the first line is the header.
| null | CC BY-SA 4.0 | null | 2023-05-22T14:58:50.177 | 2023-05-22T23:07:01.463 | 2023-05-22T23:07:01.463 | 67203 | 67203 | null |
121696 | 1 | null | null | 1 | 41 | When we want to compare loss and val_loss, do we only care about the last epochs? Do the following charts show that the model is neither overfitted nor underfitted? Can you please give me comments and interpretations of the 4 charts? This will really help me learn.
[](https://i.stack.imgur.com/LzQMR.png)
[](https://i.stack.imgur.com/zLpPE.png)
[](https://i.stack.imgur.com/hduce.png)
[](https://i.stack.imgur.com/04gva.png)
The problem is a regression time series problem, with nearly 300 records for training and each record representing a different day. Is that enough to use an LSTM?
Thanks in advance.
| Questions about time series and overfitting and underfitting | CC BY-SA 4.0 | null | 2023-05-22T16:11:43.107 | 2023-05-22T16:11:43.107 | null | null | 126059 | [
"machine-learning",
"time-series",
"regression",
"lstm"
] |
121697 | 2 | null | 121694 | 0 | null | I was able to figure this out. After removing the original first row, I had to declare that the new first row was to be used as the column names. Code below:
```
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('CAR_551.csv')
df.columns = df.iloc[0]
df1 = df.iloc[:,17:]
df2 = df1.replace({pre_substring:"PRE", post_substring:"POST"}, regex=True)
df2.columns = df2.iloc[0]
```
| null | CC BY-SA 4.0 | null | 2023-05-22T16:34:17.587 | 2023-05-22T16:34:17.587 | null | null | 76182 | null |
121698 | 1 | null | null | 0 | 21 | I have a personal project to create predictions for tennis matches.
It currently consists of a Python application and a MySQL database. I extract data from various websites and APIs and store it in the database. I've also developed a feature pipeline that calculates a number of features from data. As the features can take a a considerable time to calculate I've also created a feature store in the database. For the purposes of this question let's assume I store all of the info in three tables:
- match contains details of the players and tournament (including location)
- weather contains details of the latest weather forecast for a given location
- feature_store is my current feature store
`weather` is joined to `match` via a `location_id` field and there can be multiple `weather` records for each `match` record.
`feature_store` is joined to `match` via a `match_id` field and there can be multiple `feature_store` records for each `match` record as a new set of features need to be engineered every time there is a new `weather` record added for the match location.
As a further bit of info - I use the primary key of the `feature_store` table as a foreign key elsewhere in the database. This is so I can track back to see what features were engineered and when. For brevity I've not included details of the associated tables.
Everything has been working fine but I'm now wanting to add to the number of features. However, as I need to preserve the existing `feature_store` records as is I can't simply add a column and then add values to the existing rows.
Consequently, in order to add a new feature and maintain my audit trail I will need to add a new column and then create a new row where I copy all of the values for each existing `feature_store` record and then add the value for the new feature. This really isn't a great solution because lots of data is duplicated.
I've thought about creating a new feature store table where I put all the new features but creating a new table every time I want to add new features doesn't seem sustainable. That said I did come across a [post](https://datascience.stackexchange.com/q/32414/150114) considering a solution where each feature was stored in a separate table!
I've read lots of feature store articles but none of them go into any depth about data structures and how to manage changes.
Would anyone here have any experience of how to design a feature store that has some form of version tracking built into the data structure? Do I just have to deal with the fact that lots of feature values need to be duplicated or is there a different structure I could consider?
| How would I design a database structure for a feature store? | CC BY-SA 4.0 | null | 2023-05-22T22:10:05.293 | 2023-05-22T22:10:05.293 | null | null | 150114 | [
"feature-engineering",
"databases",
"features",
"data-engineering"
] |
121699 | 1 | null | null | 1 | 31 | I have a model that does binary classification. I train the same model many times, say, 20 times. In these 20 training runs:
- roughly 10% of the training attempts learn fast and the model performs well (e.g., reaching 95% AUC) in a few epochs.
- roughly 80% of the training attempts do not learn at all (i.e., getting stuck at 50% AUC) after 100 epochs.
- A few training runs are in between, meaning that they get stuck at 50% AUC for many epochs but after a while they learn as fast as the models in the first scenario.
My guess is that perhaps somehow the loss function has a very sharp downward spike like this:
[](https://i.stack.imgur.com/hyPMp.png)
So it needs some "luck" for the optimizer and initial weights to be "just right" to find it.
My questions are:
1. Is this (i.e., only a few training runs give us good result while the rest of them do not improve the model at all) common in the industry?
If the answer is yes, I can think of the following:
- Tune the learning rate (which is I am currently trying, I reduced the learning rate, so that hopefully the optimizer will not "miss" the trough)
- Try different optimizers, SGD/Adam/RMSprop
- Try some initialization techniques (Xavier Glorot? I didn't explore much in this regard yet)
2. Is there any other thing I can do to improve this?
EDIT1
For the sake of completeness of the question, a simplified version of the model is pasted here:
```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),
layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),
layers.MaxPooling2D(pool_size=(2, 2),strides=(2, 2)),
layers.Flatten(),
layers.Dense(4096, activation='relu'),
layers.Dense(1, activation='sigmoid')
])
model.build((None,) + input_shape)
optimizer = keras.optimizers.Adam(learning_rate=5e-5)
#optimizer = keras.optimizers.SGD(learning_rate=1.0e-3, nesterov=True)
model.compile(
optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC() , tf.keras.metrics.AUC(curve='PR')]
)
```
| Is it common that a model only learns in a few training runs and doesn't learn in the majority of training runs | CC BY-SA 4.0 | null | 2023-05-23T02:09:58.833 | 2023-05-23T14:41:51.410 | 2023-05-23T14:41:51.410 | 149935 | 149935 | [
"optimization",
"learning-rate"
] |
121700 | 2 | null | 23858 | 0 | null | Yes - connect the embeddings by their similarity into a graph, prune it to eliminate weak connections, then perform Louvain over the graph.
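A minimal sketch of that pipeline (my own illustration with random placeholder embeddings, assuming a recent `networkx` version for `louvain_communities`):
```
import numpy as np
import networkx as nx
from sklearn.metrics.pairwise import cosine_similarity

embeddings = np.random.rand(50, 16)      # placeholder embeddings
sim = cosine_similarity(embeddings)

# Build a similarity graph, pruning weak connections below a threshold
G = nx.Graph()
threshold = 0.7
n = len(sim)
for i in range(n):
    for j in range(i + 1, n):
        if sim[i, j] > threshold:
            G.add_edge(i, j, weight=float(sim[i, j]))

# Louvain community detection on the pruned, weighted graph
communities = nx.community.louvain_communities(G, weight="weight", seed=0)
print(communities)
```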
| null | CC BY-SA 4.0 | null | 2023-05-23T04:00:43.747 | 2023-05-23T04:00:43.747 | null | null | 430 | null |
121702 | 1 | null | null | 0 | 11 | Recently I am reading the following paper ([link](https://faculty.ucmerced.edu/mhyang/papers/eccv16_rnn_filter.pdf))
```
Liu, Sifei, Jinshan Pan, and Ming-Hsuan Yang. “Learning Recursive Filters for Low-Level Vision via a Hybrid Neural Network.” In Computer Vision – ECCV 2016, edited by Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling, 9908:560–76. Lecture Notes in Computer Science. Cham: Springer International Publishing, 2016. https://doi.org/10.1007/978-3-319-46493-0_34.
```
In the paper, the hidden state of the recurrent neural network, after some simplifications, is modeled as (relation (9) in the paper)
$$ h[k] = (1-p) \cdot x[k] + p \cdot h[k-1], $$
where $p$ and $x[k]$ are $n \times 1$ vectors. Then, it is stated that the derivative with respect to $h[k]$, denoted as $\theta[k]$, is (relation (10) in the paper)
$$ \theta[k] = \delta[k] + p \cdot \theta[k+1]. $$
My question: how has this derivative been computed?
| How to find the derivative of the hidden state of recurrent neural networks? | CC BY-SA 4.0 | null | 2023-05-23T07:13:18.220 | 2023-05-23T07:39:40.280 | 2023-05-23T07:39:40.280 | 44951 | 44951 | [
"backpropagation",
"recurrent-neural-network",
"derivation"
] |
121703 | 1 | null | null | 0 | 22 | When I'm building a data transformation pipeline on a dataset, I keep getting the error "all the input array dimensions except for the concatenation axis must match exactly, but along dimension 0, the array at index 0 has size 1". If someone knows what it is, please help; I've tried removing the rows and checked the code, but I'm still getting the error.
| Data transformation pipeline error | CC BY-SA 4.0 | null | 2023-05-23T07:13:32.943 | 2023-05-23T07:27:07.703 | null | null | 150129 | [
"dataset",
"pipelines",
"transformation"
] |
121704 | 2 | null | 121703 | 0 | null | When concatenating arrays together, you are effectively "gluing" them onto each other. This only works if they have the same size along every axis except the one they are concatenated on. For example, if you concatenate along the first axis and your first array is of size 2x5, every following array must also have 5 columns; you cannot concatenate a 2x4 array onto a stack of arrays with 5 columns.
Your error means that you are concatenating arrays onto each other that are of different shapes. In your specific case as per the error message, the first array is your problem, as its size is different from the others.
To solve this problem, you have to ensure that your arrays along the concatenating axis are all the same shape or size. I cannot tell you exactly how to do that, because I don't know your data or which libraries you are using, but you could do something like this to check the dimensions of the individual arrays:
```
for array in arrays_along_concat_axis:
    print(array.shape)
```
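And a tiny numpy illustration of the rule (my own addition, not part of the original answer):
```
import numpy as np

a = np.ones((2, 5))
b = np.ones((3, 5))
c = np.ones((2, 4))

print(np.concatenate([a, b], axis=0).shape)  # (5, 5): fine, the non-concatenation dimension matches
# np.concatenate([a, c], axis=0)             # ValueError: dimension 1 differs (5 vs 4)
```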
| null | CC BY-SA 4.0 | null | 2023-05-23T07:27:07.703 | 2023-05-23T07:27:07.703 | null | null | 141828 | null |
121705 | 1 | null | null | 0 | 19 | I have some questions about the interaction plot. I tried to make it on my own but I am wrong in my approach and I would like to know how to construct this. I have made a log linear regression with weekly data and I am interested in interactions between explanatory variables. My model is at follows :
$$
log(y_t) = log(\beta_0) + \beta_{1}log(x_{1,t}) + \beta_{2}log(x_{2,t}) + \beta_{3}x_{3,t} + ... + \beta_{n}x_{n,t} + \epsilon_{t}
$$
So to construct my interaction plot in order to see interaction between the variable $x_1$ and the variable $x_2$, I define the function of $x_{1}$
$$
f_{x_{2,t} = a}(x_{1,t}) = log(\beta_0) + (\hat\beta_{1})log(x_{1,t}) + (\hat\beta_{2})log(a) + ... + (\hat\beta_{n})x_{n,t} + \epsilon_{t}
$$
where I decided to fix the values of $x_{i,t}$, $i\neq 1$, and consider the variable for which I want to examine the interaction with $x_1$ as a parameter fixed at $a$, and where the $\hat{\beta}$ are the parameter estimates. Then, I decided to consider the same function with another parameter value for $x_{2,t}$, that is
$$
f_{x_{2,t} = c}(x_{1,t}) = log(\beta_0) + (\hat\beta_{1})log(x_{1,t}) + (\hat\beta_{2})log(c) + ... + (\hat\beta_{n})x_{n,t} + \epsilon_{t}
$$
But my approach seems not to work, because the only thing that changes between my functions is the intercept, so I cannot use the criterion about parallel curves. Does someone know where my approach fails? And also, is this kind of approach possible for continuous variables, or are we limited to factor variables?
Thank you a lot !
| Interaction plot | CC BY-SA 4.0 | null | 2023-05-23T08:21:05.490 | 2023-05-23T14:44:29.023 | 2023-05-23T14:44:29.023 | 97682 | 149154 | [
"linear-regression",
"parallel",
"feature-interaction"
] |
121706 | 1 | null | null | 0 | 18 | I have a ML model (trained in Sklearn) and based on it I have created a Flask web service and hosted it on Windows IIS server.
What is the best practice to load the model?
Should I load the model when the API starts, or should the model be loaded when a request comes in?
Case1
```
from flask import Flask
import joblib
app = Flask(__name__)
# load the models
MODELS = joblib.load(model_file)
# endpoints
@app.route("/predictions", methods=["GET", "POST"])
def predictions():
# some code
```
case2
```
from flask import Flask
import joblib
app = Flask(__name__)
# endpoints
@app.route("/predictions", methods=["GET", "POST"])
def predictions():
# load the model
model = joblib.load(model_file)
```
| What is a better way of loading a model in Flask service hosted with IIS | CC BY-SA 4.0 | null | 2023-05-23T09:23:52.383 | 2023-05-23T10:13:32.767 | null | null | 44083 | [
"machine-learning-model",
"flask",
"iis"
] |
121707 | 2 | null | 121706 | 1 | null | If you load the model every time you have an incoming request, you would be increasing the latency. Usually, you want to minimize request latency, so it is better to load the model once at startup and just use it when you fulfil a request. This approach also avoids unnecessary duplicated work.
| null | CC BY-SA 4.0 | null | 2023-05-23T10:13:32.767 | 2023-05-23T10:13:32.767 | null | null | 14675 | null |
121708 | 2 | null | 121699 | 1 | null | You are probably right that such inconsistent behavior indicates that your result greatly depends on how lucky you are with the starting random weights.
You probably need to find the right number for `learning rate` and the `number of epochs`.
You can try a [learning rate finder](https://fastai1.fast.ai/callbacks.lr_finder.html) in order to find the optimal learning rate automatically.
Also, you did not specify which loss function you are currently using, but I would suggest you try [binary cross entropy](https://pytorch.org/docs/stable/generated/torch.nn.functional.binary_cross_entropy.html) since you are doing binary classification.
| null | CC BY-SA 4.0 | null | 2023-05-23T11:20:44.780 | 2023-05-23T11:20:44.780 | null | null | 150130 | null |
121709 | 1 | 121714 | null | 0 | 24 | In the `ggplot2` package documentation for `geom_freqpoly`, there is a formula in the last example which uses a function to dynamically calculate a binwidth for histograms of different variables with different ranges. Does anyone know the intuition behind this formula (the last line of code, following `function(x)`)?
Copy and pasted from the documentation:
```
# You can specify a function for calculating binwidth, which is
# particularly useful when faceting along variables with
# different ranges because the function will be called once per facet
ggplot(economics_long, aes(value)) +
facet_wrap(~variable, scales = 'free_x') +
geom_histogram(binwidth = function(x) 2 * IQR(x) / (length(x)^(1/3)))
```
| What is the reason behind this formula: 2 times interquartile range divided by length to the power of one third | CC BY-SA 4.0 | null | 2023-05-23T12:48:18.453 | 2023-05-23T21:24:27.857 | null | null | 127461 | [
"r",
"ggplot2",
"histogram"
] |
121710 | 2 | null | 121687 | 0 | null | You can try implementing a VAR model. A statistical model called vector autoregression (VAR) is used to capture the relationship between multiple quantities as they change over time. With the help of this model you would be able to identify whether the "holy grains" tokens are related to the total number of papers and vice versa.
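A minimal sketch of how this could look with `statsmodels` (the column names and counts below are random placeholders for your weekly token and paper counts, so they are assumptions):
```
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "holy_grains_tokens": rng.poisson(5, 100).astype(float),
    "total_papers": rng.poisson(50, 100).astype(float),
})

model = VAR(df)
results = model.fit(maxlags=4, ic="aic")
print(results.summary())

# Granger causality: does total_papers help predict holy_grains_tokens?
print(results.test_causality("holy_grains_tokens", ["total_papers"]).summary())
```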
| null | CC BY-SA 4.0 | null | 2023-05-23T14:26:18.413 | 2023-05-23T14:26:18.413 | null | null | 144743 | null |
121711 | 2 | null | 93256 | 0 | null | You can use IsolationForest to find anomalies in the data. A value of -1 indicates that an anomaly has occurred. IsolationForest looks for noise in the data. You then use a threshold to remove that data as a subset to clean up the classifications. You can use the elbow method and PCA to find the number of cluster groups, together with an SVM.
```
# imports that the snippet relies on
import io

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import IsolationForest
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

data="""ID EyeColor HairColor EducationLevel Income
1 1 1 1 1
2 1 1 2 2
3 2 2 1 1"""
df = pd.read_csv(io.StringIO(data), sep=r'\s+')  # the inline sample is whitespace-separated
print(df.head() )
clf=IsolationForest()
X=df[["EyeColor","HairColor","EducationLevel"]]
y=df["Income"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
#n_estimators, max_samples, max_features
#-1 represents the outliers (according to the fitted model)
clf = IsolationForest(max_samples=2,n_estimators=10, random_state=10)
clf.fit(X_train)
y_pred_test = clf.predict(X_test)
cm=confusion_matrix(y_test, y_pred_test)
#sns.heatmap(cm)
def plot_detected_anomalies(X, true_labels, predicted_anomalies):
#y_pred_inliers = X[predicted_anomalies == -1, :]
# PLOTTING RESULTS
plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.scatter(X[:, 0], X[:, 1], c=true_labels)
plt.title('Clean data and added noise - TRUE')
plt.xlim([-11, 11])
plt.ylim([-11, 11])
plt.subplot(122)
plt.scatter(X[:, 0], X[:, 1], c=predicted_anomalies)
plt.title('Noise detected via Isolation Forest')
plt.xlim([-11, 11])
plt.ylim([-11, 11])
plt.show()
plot_detected_anomalies(np.array(X_test[["EyeColor","EducationLevel"]]), y_test, y_pred_test)
```
| null | CC BY-SA 4.0 | null | 2023-05-23T14:36:42.763 | 2023-05-23T14:44:32.307 | 2023-05-23T14:44:32.307 | 111756 | 111756 | null |
121712 | 1 | null | null | 0 | 14 | I am using the xgboost classifier for a binary classification problem. I used predict() to get the class predictions (0/1), but I also used the predict_proba() method to get the class probabilities (my intention is to play around with the probability threshold a bit). However, when I examined the false positives (and false negatives) and their corresponding predict_proba() outputs, I noticed that in many cases, even though the probability for `class 0` is **higher than 0.5**, predict() outputs `class 1` (and similarly for false negatives). I am confused. I thought the predict() method uses a default threshold of 0.5 to put the data point in class 0 or 1. Here is an example:
```
0 1
2022-04-02 06:58 0.647819 0.352181
2023-02-10 22:12 0.717162 0.282838
2023-02-10 15:18 0.692320 0.307680
```
while the `predict()` outputted `class 1`.
Can someone explain this apparent discrepancy in the output (OR it could be my misunderstanding of how predict() works)?
Thanks.
| How do the predict() and predict_proba() for xgbClassifier() work? | CC BY-SA 4.0 | null | 2023-05-23T18:12:34.523 | 2023-05-23T18:12:34.523 | null | null | 3314 | [
"python",
"xgboost",
"xgboost-classifier",
"xgboost-predict"
] |
121713 | 1 | null | null | 0 | 23 | I mean let's say I have a mini batch, I take an example from it and for it I do the following:
- I do forward propagation.
- Using the output after forward propagation, I calculate the gradients of the parameters.
Then I take the next example from the mini-batch and repeat steps 1 and 2 for it, and after I get the parameter gradients I sum them with the gradients I got earlier.
After these actions are performed for all elements of the mini-batch, I use the resulting sum of gradients to find the average value of the gradients and use it to update the weights?
Am I correct?
| How exactly does the mini batch method work? | CC BY-SA 4.0 | null | 2023-05-23T20:29:11.647 | 2023-05-24T10:35:12.377 | 2023-05-23T20:29:45.430 | 150142 | 150142 | [
"gradient-descent",
"backpropagation"
] |
121714 | 2 | null | 121709 | 1 | null | This [function](https://en.wikipedia.org/wiki/Freedman%E2%80%93Diaconis_rule) is the [Freedman-Diaconis rule](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.650.2473&rep=rep1&type=pdf) to calculate the width of bins for a histogram.
[IQR](https://en.wikipedia.org/wiki/Interquartile_range) stands for Interquartile Range (apologies if you already know this, someone else may be seeing it here for the first time), which is the difference between the 3rd and 1st quartiles of the data distribution, also referred to as the middle 50% of data. `2*IQR` is an attempt to get the spread of most of the data. Dividing by the cubed root of the number of samples in the data leads to a statistical justification for choosing binwidth, rather than empirically guessing. In the paper, they provide a justification for `length(x)^(1/3)` based on observations and a mathematical proof. It basically improves the binwidth for large data samples.
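Outside of R, the same estimator is available elsewhere too; for example, numpy exposes it as the "fd" bin estimator. A small sketch (my own illustration, not from the question's example):
```
import numpy as np

x = np.random.standard_normal(1000)
edges = np.histogram_bin_edges(x, bins="fd")   # Freedman-Diaconis bins
manual = 2 * (np.percentile(x, 75) - np.percentile(x, 25)) / len(x) ** (1 / 3)
# The two widths are close but not identical, because numpy rounds the
# estimate to a whole number of equal-width bins over the data range.
print(np.diff(edges)[0], manual)
```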
| null | CC BY-SA 4.0 | null | 2023-05-23T21:24:27.857 | 2023-05-23T21:24:27.857 | null | null | 65153 | null |
121715 | 1 | null | null | 1 | 15 | Loss function for discriminator, which needs to be maximized: -log(D(x)) + log(1-D(G(z))).
Loss function for generator, which needs to be maximized: log(D(G(z)))
What would the calculation of the loss gradient of the output value of the discriminator look like?
What would the calculation of the loss gradient of the generator output look like?
| GAN Output Gradient Calculation | CC BY-SA 4.0 | null | 2023-05-23T21:31:08.307 | 2023-05-25T08:08:19.497 | null | null | 150142 | [
"loss-function",
"backpropagation",
"gan"
] |
121717 | 1 | null | null | 0 | 23 | [](https://i.stack.imgur.com/i7CHH.jpg)Question: I am not sure how to describe the sample graph attached. Can you please help me identify the type of plot and how to statistically measure the relationship between the dependent variable (Y-axis) Category A vs Category B?
What success looks like for me, once I understand how to describe the plot:
- Is there a statistical method (preference in python) out there that can help me measure the relationship between the two categories of data (Category A & B) .
- Strength of the relationship between the two categories (Category A & B), positive or negative relationship?
My goal with the plot is :
- Y-axis: One dependent variable (positive real number) example is 'Average Number of days' required to complete one unit of work.
Considerations for the Y-axis are as below:
- The dependent variable (y) can however be broken into two or more categories.
- One example of category is 'Worker type' = Employee or contractor.
Employee can be Category A, Contractor can be Category B.
- Second example of category is 'Country type' = India or USA.
India can be Category A, USA can be Category B.
- As such I can take any category of data from my dataset and split it into various categories (Co-located team v/s distributed teams, Standard team vs Non standard teams) etc.
X-axis:
- Continuous time series data; it can take the shape of months, quarters, weeks, etc.
[](https://i.stack.imgur.com/3MDRM.png)
| Help me identify the type of plot and the relationship between the dependent variables | CC BY-SA 4.0 | null | 2023-05-24T02:53:59.623 | 2023-05-24T19:32:12.663 | 2023-05-24T19:32:12.663 | 150144 | 150144 | [
"time-series",
"regression",
"linear-regression",
"logistic-regression",
"categorical-data"
] |
121718 | 1 | 121720 | null | 0 | 26 | I am using timeGAN from [ydata-synthetic](https://github.com/ydataai/ydata-synthetic/blob/dev/examples/timeseries/TimeGAN_Synthetic_stock_data.ipynb) repo.
Given a trained model `synth`, we generate synthetic data by:
```
synth_data = synth.sample(2000)
```
This will generate 2000 sequences randomly.
My question is, what if the original data has trend, and we wish to generate synthetic data which indicates the trend (similar size as original data)?
For example, suppose original data looks like below
[](https://i.stack.imgur.com/A4O5q.png)
and somehow we wish to generate synthetic data which also indicates the trend. Is it possible to do it? What I can think of is to increase `seq_len` to properly cover the trend.
Please help. Thanks.
| Generate Synthetic Data Indicating Original data's Trend | CC BY-SA 4.0 | null | 2023-05-24T04:02:32.347 | 2023-05-24T06:32:29.067 | null | null | 46384 | [
"time-series",
"gan"
] |
121719 | 2 | null | 121686 | 2 | null | No, this decision tree is unable to "generalize in the same way as the original random forest".
The author also clearly states this in the section "Does this approximation hold for unseen data?": '... the only problem is that this strategy applies strictly only on the seen/available data'. At the very least, there is no guarantee.
The main use of this method is its explainability - using a simple, easily explainable model to mimic the behavior of a more complex model, so as to gain insight into how the complex model makes decisions. However, again, this explainability does not hold for unseen data in general.
| null | CC BY-SA 4.0 | null | 2023-05-24T04:40:54.557 | 2023-05-24T04:40:54.557 | null | null | 113067 | null |
121720 | 2 | null | 121718 | 1 | null | To the best of my knowledge, all generally used synthetic data generation methods scale their data to reside in $[0, 1]$ or $[-1, 1]$. This is also done in TimeGAN & RCGAN.
- If your data has a significant but regular downward trend, you probably want to reduce the trend in a data preprocessing step (a minimal detrending sketch follows after this list).
- If your data has significant and highly varying trends (one going upwards, the other going downwards), then you simply stumbled across a limitation in the architecture. These models work best on somewhat normally distributed data. If your time-series goes all over the place, the model will have a hard time converging. More research still has to be done into time-series generative networks to be able to predict such trends.
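As referenced in the first bullet, here is a toy detrending sketch; the series is synthetic, and `np.random.choice` merely stands in for sampling increments from a trained generator:
```
import numpy as np
import pandas as pd

s = pd.Series(np.linspace(100, 50, 200) + np.random.randn(200))   # downward-trending series
increments = s.diff().dropna()          # train the generator on these instead of the raw levels

synthetic_increments = pd.Series(np.random.choice(increments, size=len(increments)))
synthetic_levels = s.iloc[0] + synthetic_increments.cumsum()       # invert the differencing
print(synthetic_levels.head())
```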
| null | CC BY-SA 4.0 | null | 2023-05-24T06:32:29.067 | 2023-05-24T06:32:29.067 | null | null | 134416 | null |
121721 | 1 | null | null | 0 | 28 | Suppose I have a paragraph which explains the injuries and its descriptions. I want to extract the injuries and its corresponding descriptions from the text. How can I do that?
For example, the paragraph will be as follows:
In my opinion the neck pain is due to the soft tissue injury. The fracture on the hand will be resolved in 2 months. The pain in the shoulder and neck is due to the soft tissue injury. There is a stiffness and discomfort around the hip.
the expected output is :
```
{
"neck": ["soft tissue"],
"hand": ["fracture"],
"shoulder": [ "soft tissue"],
"hip": ["stiffness", "discomfort"]
}
```
Which NLP techniques can be used here?
We have two txt files for injuries and descriptions.
But how will we relate or match the description with its corresponding injury?
I tried the dependency parser, but the problem is that we have to write a number of patterns for each injury, and we have more than 100 injuries and more than 100 descriptions. So if we write patterns for all the injuries, there will be a large number of patterns, and I think it will take too much time and effort.
Are there any other ways to do this kind of extraction?
The paragraph doesn't have a common structure.
I'm using python and spacy for this.
| What are the approaches for extracting an injury and its description from a paragraph? | CC BY-SA 4.0 | null | 2023-05-24T06:49:14.240 | 2023-05-24T12:48:03.503 | null | null | 57727 | [
"machine-learning",
"nlp",
"python-3.x",
"spacy",
"information-extraction"
] |
121722 | 1 | 121741 | null | 0 | 15 | I would like to train object detection model (e.g. YOLO) for images that contain anomalies. The anomalies are essentially the holes in a surface of different sizes. How do I label correctly such anomalies? Do I put the bounding boxes over each small hole or should I group smaller anomalies into one?
| Best practice labeling grouped anomalies for object detection | CC BY-SA 4.0 | null | 2023-05-24T09:44:55.393 | 2023-05-25T08:04:16.237 | 2023-05-24T11:27:04.383 | 14529 | 14529 | [
"deep-learning",
"cnn",
"object-detection",
"anomaly-detection",
"labels"
] |
121723 | 1 | null | null | 0 | 9 | I am trying to create a ranking model, where I am thinking about creating the ground truth based on clicks by users. But at the same time, past clicks made by users seem like a vital input feature too. Any ideas how I can handle such a situation?
Edit: to clarify, if I include clicks in the model input and use them to create the ground truth ranking, the model will just ignore every other input feature and focus on clicks. I am currently using xgboost (LambdaMART), directly optimizing NDCG based on clicks. I have several features, some of which are about how similar a document is to the query; others are about how popular a document is compared to other documents. My ground truth ranking is based on how popular a document is for a particular query.
| How to handle using input feature (clicks) when it is used in target too? | CC BY-SA 4.0 | null | 2023-05-24T10:16:23.427 | 2023-05-24T10:40:29.620 | 2023-05-24T10:40:29.620 | 82605 | 82605 | [
"feature-selection",
"feature-engineering",
"information-retrieval",
"ranking",
"search-engine"
] |
121724 | 2 | null | 121723 | 0 | null | Need a bit of clarity to say for sure, but in general we can use anything as feature as long as it is available at prediction time.
It is perfectly normal to use past history of a target to predict the future, e.g. using a customer's past purchase record to predict what/when he/she will buy next week.
| null | CC BY-SA 4.0 | null | 2023-05-24T10:26:13.053 | 2023-05-24T10:26:13.053 | null | null | 113067 | null |
121725 | 2 | null | 121713 | 0 | null | Your description looks conceptually correct to me. See [Hugh's answer to: How does minibatch gradient descent update the weights for each example in a batch?](https://stats.stackexchange.com/a/266977/354273) on Cross Validated for a detailed explanation.
However, as per @noe's comment, in practice mini-batches are not implemented by processing the examples one at a time. To speed up processing, most deep learning frameworks will implement this using matrix or tensor operations and process the entire mini-batch in one pass.
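A small PyTorch sketch of this equivalence (my own illustration, with made-up data): averaging the per-example losses over the mini-batch gives the same gradient as summing the per-example gradients and dividing by the batch size.
```
import torch

torch.manual_seed(0)
w = torch.randn(3, requires_grad=True)
X = torch.randn(4, 3)          # mini-batch of 4 examples
y = torch.randn(4)

# one pass over the whole mini-batch
loss = ((X @ w - y) ** 2).mean()
loss.backward()
batch_grad = w.grad.clone()

# the "one example at a time" version described in the question
w.grad.zero_()
for i in range(4):
    li = (X[i] @ w - y[i]) ** 2
    li.backward()              # gradients accumulate in w.grad
per_example_grad = w.grad / 4

print(torch.allclose(batch_grad, per_example_grad))   # True
```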
| null | CC BY-SA 4.0 | null | 2023-05-24T10:35:12.377 | 2023-05-24T10:35:12.377 | null | null | 135707 | null |
121726 | 1 | null | null | 0 | 14 | recently i have been trying to learn transformer and using it in caption-generator model.
While training for 4 hours, val_loss and val_accuracy did not change. The loss and accuracy for the training data were at least moving a little.
(This output is from a different training session, but it is quite similar to the previous one with 4 hours of training.)
```
Epoch 1/10
100/100 [==============================] - 123s 566ms/step - loss: 12.8864 - masked_accuracy: 0.0140 - val_loss: 12.9553 - val_masked_accuracy: 0.0216
Epoch 2/10
100/100 [==============================] - 50s 498ms/step - loss: 13.0352 - masked_accuracy: 0.0199 - val_loss: 12.9553 - val_masked_accuracy: 0.0216
Epoch 3/10
100/100 [==============================] - 47s 473ms/step - loss: 13.0575 - masked_accuracy: 0.0197 - val_loss: 12.9553 - val_masked_accuracy: 0.0216
Epoch 4/10
100/100 [==============================] - 42s 419ms/step - loss: 13.0294 - masked_accuracy: 0.0198 - val_loss: 12.9553 - val_masked_accuracy: 0.0216
Epoch 5/10
100/100 [==============================] - 38s 380ms/step - loss: 13.0738 - masked_accuracy: 0.0203 - val_loss: 12.9553 - val_masked_accuracy: 0.0216
Epoch 6/10
100/100 [==============================] - 37s 370ms/step - loss: 13.0334 - masked_accuracy: 0.0190 - val_loss: 12.9553 - val_masked_accuracy: 0.0216
Epoch 7/10
100/100 [==============================] - 36s 363ms/step - loss: 13.0213 - masked_accuracy: 0.0197 - val_loss: 12.9553 - val_masked_accuracy: 0.0216
Epoch 8/10
100/100 [==============================] - 36s 365ms/step - loss: 13.0269 - masked_accuracy: 0.0206 - val_loss: 12.9553 - val_masked_accuracy: 0.0216
Epoch 9/10
100/100 [==============================] - 36s 364ms/step - loss: 13.0469 - masked_accuracy: 0.0193 - val_loss: 12.9553 - val_masked_accuracy: 0.0216
Epoch 10/10
```
## what could be the reason.
Here is the model [code on GitHub](https://github.com/tikendraw/caption-generator) (open transformer.py), and here is the [notebook](https://www.kaggle.com/code/tikendraw/caption-generator-kaggle/notebook).
```
# imports that the snippet relies on
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout, Embedding

def positional_encoding(length, depth):
depth = depth/2
positions = np.arange(length)[:, np.newaxis] # (seq, 1)
depths = np.arange(depth)[np.newaxis, :]/depth # (1, depth)
angle_rates = 1 / (10000**depths) # (1, depth)
angle_rads = positions * angle_rates # (pos, depth)
pos_encoding = np.concatenate(
[np.sin(angle_rads), np.cos(angle_rads)],
axis=-1)
return tf.cast(pos_encoding, dtype=tf.float32)
# Positional embedding For Image
class PositionalEmbedding(tf.keras.layers.Layer):
def __init__(self, vocab_size, d_model):
super().__init__()
self.d_model = d_model
self.embedding = tf.keras.layers.Embedding(vocab_size, d_model, mask_zero=True)
self.pos_encoding = positional_encoding(length=2048, depth=d_model)
def compute_mask(self, *args, **kwargs):
return self.embedding.compute_mask(*args, **kwargs)
def call(self, x):
length = tf.shape(x)[1]
x = self.embedding(x)
# This factor sets the relative scale of the embedding and positonal_encoding.
x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
x = x + self.pos_encoding[tf.newaxis, :length, :]
return x
class Patches(tf.keras.layers.Layer):
def __init__(self, patch_size):
super().__init__()
self.patch_size = patch_size
def call(self, images):
batch_size = tf.shape(images)[0]
patches = tf.image.extract_patches(
images=images,
sizes=[1, self.patch_size, self.patch_size, 1],
strides=[1, self.patch_size, self.patch_size, 1],
rates=[1, 1, 1, 1],
padding="VALID",
)
patch_dims = patches.shape[-1]
# (patches.shape)
patches = tf.reshape(patches, [batch_size, -1, patch_dims])
return patches
class PatchEncoder(tf.keras.layers.Layer):
def __init__(self, num_patches, d_model):
super().__init__()
self.num_patches = num_patches
self.projection = Dense(units=d_model)
self.position_embedding = Embedding(
input_dim=num_patches, output_dim=d_model
)
def call(self, patch):
positions = tf.range(start=0, limit=self.num_patches, delta=1)
# tf.print(positions.shape)
return self.projection(patch) + self.position_embedding(positions)
# Attention
class BaseAttention(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super().__init__()
self.mha = tf.keras.layers.MultiHeadAttention(**kwargs)
self.layernorm = tf.keras.layers.LayerNormalization()
self.add = tf.keras.layers.Add()
class CrossAttention(BaseAttention):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.last_attn_scores=None
def call(self, x, context):
attn_output, attn_scores = self.mha(
query=x,
key=context,
value=context,
return_attention_scores=True)
# Cache the attention scores for plotting later.
self.last_attn_scores = attn_scores
x = self.add([x, attn_output])
x = self.layernorm(x)
return x
class GlobalSelfAttention(BaseAttention):
def call(self, x):
attn_output = self.mha(
query=x,
value=x,
key=x)
# tf.print('attn_output: ',attn_output.shape)
x = self.add([x, attn_output])
# tf.print('concat: ',x.shape)
x = self.layernorm(x)
# tf.print('layernorm: ',x.shape)
return x
class CausalSelfAttention(BaseAttention):
def call(self, x):
attn_output = self.mha(
query=x,
value=x,
key=x,
use_causal_mask = True)
x = self.add([x, attn_output])
x = self.layernorm(x)
return x
class FeedForword(tf.keras.layers.Layer):
def __init__(self, d_model, dff, dropout_rate = 0.1):
super().__init__()
self.seq = tf.keras.Sequential([
Dense(dff, activation = 'relu'),
Dense(d_model),
Dropout(dropout_rate)
])
self.add = tf.keras.layers.Add()
self.layernorm = tf.keras.layers.LayerNormalization()
def call(self, x):
x = self.add([x, self.seq(x)])
return self.layernorm(x)
class EncoderLayer(tf.keras.layers.Layer):
def __init__(self, *, d_model, num_heads, dff, dropout_rate=0.1):
super().__init__()
self.self_attention = GlobalSelfAttention(
key_dim=d_model,
num_heads=num_heads,
dropout=dropout_rate
)
self.ffn = FeedForword(d_model=d_model, dff=dff,dropout_rate=dropout_rate)
def call(self, x):
x = self.self_attention(x)
x = self.ffn(x)
return x
class Encoder(tf.keras.layers.Layer):
def __init__(self, num_layers, d_model, num_heads, dff, patch_size, num_patches, dropout_rate=0.1):
super().__init__()
self.d_model = d_model
self.num_layers = num_layers
self.num_heads = num_heads
self.dff = dff
self.patch_size = patch_size
self.num_patches = num_patches
self.dropout_rate = dropout_rate
self.patches = Patches(patch_size)
# Encode patches.
self.encoded_patches = PatchEncoder(num_patches, d_model)
self.enc_layers = [
EncoderLayer(d_model=d_model,
num_heads=num_heads,
dff=dff,
dropout_rate=dropout_rate)
for _ in range(num_layers)]
self.dropout = tf.keras.layers.Dropout(dropout_rate)
def call(self, x):
# `x` is token-IDs shape: (batch, seq_len)
x = self.patches(x) # Shape `(batch_size, seq_len, d_model)`.
x = self.encoded_patches(x)
# Add dropout.
x = self.dropout(x)
for i in range(self.num_layers):
x = self.enc_layers[i](x)
return x # Shape `(batch_size, seq_len, d_model)`.
class DecoderLayer(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, dff, dropout_rate=0.1):
super(DecoderLayer, self).__init__()
self.causal_attention = CausalSelfAttention(
num_heads=num_heads,
key_dim = d_model,
dropout= dropout_rate
)
self.cross_attention = CrossAttention(
num_heads=num_heads,
key_dim = d_model,
dropout= dropout_rate
)
self.ffn = FeedForword(d_model=d_model, dff=dff,dropout_rate=dropout_rate)
self.last_attn_scores = self.cross_attention.last_attn_scores
def call(self, x, context):
x = self.causal_attention(x)
x = self.cross_attention(x=x, context = context)
x = self.ffn(x)
return x
class Decoder(tf.keras.layers.Layer):
def __init__(self, num_layers, d_model, num_heads, dff, vocab_size, dropout_rate=0.1):
super().__init__()
self.num_layers = num_layers
self.d_model = d_model
self.num_heads = num_heads
self.dff = dff
self.vocab_size = vocab_size
self.dropout_rate = dropout_rate=0.1
self.positional_embedding = PositionalEmbedding(vocab_size=vocab_size, d_model=d_model)
self.dec_layers = [
DecoderLayer(d_model=d_model, num_heads=num_heads,
dff=dff, dropout_rate=dropout_rate)
for _ in range(num_layers)]
self.dropout = tf.keras.layers.Dropout(dropout_rate)
self.last_attn_scores = None
def call(self, x, context):
# tf.print('x: ', x.shape)
# tf.print('context: ', context.shape)
x = self.positional_embedding(x)
# tf.print('pos-emb x: ', x.shape)
for i in range(self.num_layers):
x = self.dec_layers[i](x=x, context=context)
self.last_attn_scores = self.dec_layers[-1].last_attn_scores
# tf.print('afte tra x : ', x.shape)
return x
class CaptionGenerator(tf.keras.Model):
def __init__(self, num_layers, d_model, num_heads, dff, vocab_size, patch_size, num_patches, dropout_rate=0.1):
super().__init__()
self.encoder = Encoder(
num_layers=num_layers,
d_model=d_model,
num_heads=num_heads,
dff=dff,
patch_size=patch_size,
num_patches=num_patches,
dropout_rate=dropout_rate,
)
self.decoder = Decoder(
num_layers=num_layers,
d_model=d_model,
num_heads=num_heads,
dff=dff,
vocab_size=vocab_size,
dropout_rate=dropout_rate,
)
self.final_layer = tf.keras.layers.Dense(vocab_size
)
self.decoder = Decoder(
num_heads=num_heads,
num_layers=num_layers,
d_model=d_model,
dff=dff,
vocab_size=vocab_size,
dropout_rate=dropout_rate,
)
self.final_layer = tf.keras.layers.Dense(vocab_size)
def call(self, inputs): # sourcery skip: inline-immediately-returned-variable, use-contextlib-suppress
img, txt = inputs
img = self.encoder(img) # (batch_size, context_len, d_model)
x = self.decoder(x=txt, context=img) # (batch_size, target_len, d_model)
# Final linear layer output.
logits = self.final_layer(x) # (batch_size, max_len, target_vocab_size)
try:
# Drop the keras mask, so it doesn't scale the losses/metrics.
# b/250038731
del logits._keras_mask
except AttributeError:
pass
# Return the final output and the attention weights.
return logits
```
Here are the accuracy and loss functions:
```
def masked_loss(y_true, y_pred):
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
reduction='none')
loss = loss_fn(y_true, y_pred)
mask = tf.cast(y_true != 0, loss.dtype)
loss *= mask
return tf.reduce_sum(loss)/tf.reduce_sum(mask)
def masked_accuracy(y_true, y_pred):
y_pred = tf.argmax(y_pred, axis=-1)
y_pred = tf.cast(y_pred, y_true.dtype)
match = y_true == y_pred
mask = y_true != 0
match = match & mask
match = tf.cast(match, dtype=tf.float32)
mask = tf.cast(mask, dtype=tf.float32)
return tf.reduce_sum(match)/tf.reduce_sum(mask)
```
| val_accuracy and val_loss not changing while training transformer | CC BY-SA 4.0 | null | 2023-05-24T10:41:28.343 | 2023-05-24T10:41:28.343 | null | null | 136949 | [
"deep-learning",
"transformer",
"attention-mechanism"
] |
121727 | 1 | 121731 | null | 1 | 32 | After training a neural network (NN) to tell the difference between a clean audio signal and a signal with a specific "noise", what is the mechanics that actually takes place where an unseen noise filled audio file gets "cleaned" up by the machine learning model?
It's not filtering or subtraction applied by the model to the input audio file, as the frequency content of the wanted audio seems "undisturbed", even though the noise frequencies overlap with portions of the desired audio.
Thanks for your time and help in giving some guidance to an answer to this question.
| What process actually takes place during audio feedback suppression machine learning | CC BY-SA 4.0 | null | 2023-05-24T11:34:18.243 | 2023-05-24T16:02:20.690 | null | null | 107073 | [
"machine-learning-model",
"audio-recognition",
"feedback-loop"
] |
121728 | 2 | null | 121717 | 0 | null | Not sure what your end goal is, but if you want one statistical variable to measure how similar the two values are, you can use `Null hypothesis` and `p-value`
Null hypothesis - states that there is no statistically significant difference between the two groups in the hypothesis.
p-value - the probability of observing results at least as extreme as yours if the null hypothesis were true. A p-value that is less than or equal to 0.05 usually indicates that there is strong evidence against the null hypothesis.
1 - cases are identical
0 - cases are completely different
Read more about [using p-value in python](https://realpython.com/numpy-scipy-pandas-correlation-python/).
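A hypothetical example of this idea with scipy (the numbers are made up and simply stand in for weekly "average days per unit" values of the two categories):
```
from scipy import stats

category_a = [5.1, 4.8, 6.0, 5.5, 4.9, 5.3]
category_b = [7.2, 6.9, 7.5, 8.1, 7.0, 7.7]

t_stat, p_value = stats.ttest_ind(category_a, category_b)
print(p_value)   # p <= 0.05 suggests strong evidence against the null hypothesis
```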
You can also take a look at [Fréchet distance](https://en.wikipedia.org/wiki/Fr%C3%A9chet_distance)
| null | CC BY-SA 4.0 | null | 2023-05-24T11:53:52.057 | 2023-05-24T11:53:52.057 | null | null | 150130 | null |
121729 | 2 | null | 121721 | 0 | null | Given the unstructured nature of your injury descriptions, I don't think this is doable by means of classical NLP techniques. I suggest you use a large language model (LLM), either OpenAi's GPT family or something like Llama or RedPajama. Give it a prompt with an example and it should give you the result.
This would be an example of a possible prompt using the example in your question:
```
Given the description of the state of a patient, extract the diagnosis of their injuries:
Description: In my opinion the neck pain is due to the soft tissue injury. The fracture on the hand will be resolved in 2 months. The pain in the shoulder and neck is due to the soft tissue injury. There is a stiffness and discomfort around the hip.
Injuries: {
"neck": ["soft tissue"],
"hand": ["fracture"],
"shoulder": [ "soft tissue"],
"hip": ["stiffness", "discomfort"]
}
Description: the butt pain is due to the coccyx bone. The bruise of the arm is due to the soft tissue injury.
Injuries: {
```
The model would complete the injuries JSON for you. You would then parse it. Given the lack of diversity in your example, you probably need to provide a couple more of examples and possibly with a wider variety of injuries. Designing an effective prompt (aka "prompt engineering") is part of using LLMs.
Note that you don't need to retrain the model, you can just use pre-trained models as-is, providing a sensible prompt that makes the LLM give you the desired outputs.
As for which model to use, there are dozens nowadays. Some are general domain, some are trained on medical data. The licence of some of them allows commercial use, and others only allow research uses. Some are very large, and others are smaller. You should research the currently available pre-trained models and choose the one that gives good results while meeting your operational constraints.
| null | CC BY-SA 4.0 | null | 2023-05-24T12:48:03.503 | 2023-05-24T12:48:03.503 | null | null | 14675 | null |
121730 | 1 | null | null | 1 | 24 | I have collected all my data for a study and need to run my analysis but have come unstuck (I should have planned better beforehand I know).
I'm looking to see whether personality traits (five trait variables values ranging from 0-5) predict whether someone will give feedback at work (discrete outcome, yes/no) and the type they will provide (likert lots of positive feedback 1-6 or likert lots of negative 1-6)
Participants completed a survey on time one which captured their personality and then were invited to complete four additional weekly surveys (one survey each Friday for four consecutive weeks). These surveys captured data on the feedback they gave that week.
So, I have my independent variables (personality) and I'm trying to predict my repeated measures outcome variables (feedback).
I also have the extent to which the participants worked virtually as a moderator variable, collected alongside the feedback data (also repeated measures values ranging from 0-100). The research question here is whether working virtuality influences how extraverted (one of the five trait variables) participants give feedback and so will be a separate model to the one above.
What analysis do I run?
My instinct is that it should be a fixed effect model to see whether personality predicts (non-time varying variables) the feedback outcomes (time-varying variables) and a random effects, multilevel model to examine the moderating effect of virtual working (time-varying) on extraversion (non-time varying).
All analysis will be done in R.
| Which statistical technique should I use for a within-person repeated measures study? | CC BY-SA 4.0 | null | 2023-05-24T13:09:25.717 | 2023-05-25T08:00:15.053 | null | null | 150158 | [
"r",
"statistics",
"research",
"management"
] |
121731 | 2 | null | 121727 | 1 | null | The processing learned by neural networks is often referred to as a "black box" because we can't fully characterize it to understand it, that is, it's not "interpretable".
This way, the processing you refer to is something that the network learns during its training but it's not something that we can interpret. Therefore, the answer to your question is that "we don't know" the exact characteristics of the processing done by the network due to its very black-box nature.
You can check [this answer](https://datascience.stackexchange.com/a/22338/14675) for some more context on the interpretability of neural networks.
| null | CC BY-SA 4.0 | null | 2023-05-24T16:02:20.690 | 2023-05-24T16:02:20.690 | null | null | 14675 | null |
121732 | 1 | null | null | 1 | 55 | I am currently studying the YOLO algorithm for a project. What I'm not quite sure about is where exactly the input image is divided in an SxS grid. After my research on the paper, videos and websites I am still doubting whether the input image is divided before it is fed to the Convolutional neural network (see the architecture below) or divided inside the CNN. If so, the input is pre-distributed into the grid cells. How does the network take into account the grid cells in the Convolution layers? [](https://i.stack.imgur.com/jebaO.png)
I did research on the paper and other resources related to YOLO. I want to know when the input image is divided; with that, I can understand the structure better.
| Where exactly in YOLO's architecture is the input image divided into a grid? | CC BY-SA 4.0 | null | 2023-05-24T16:22:34.830 | 2023-05-25T09:39:06.950 | 2023-05-25T09:39:06.950 | 150162 | 150162 | [
"cnn",
"computer-vision",
"object-detection",
"yolo"
] |
121733 | 1 | null | null | 0 | 13 | I'm tying to learn about recommendation systems recently. I have some deeplearning background so I focused more on machine learning based methods for recommendation systems. I see that a lot of paper directly train an embedding to represent the user (or part of the user). It is quite confusing to me since I believe that in real world, for big company like amazon or netflix. There will be new users every day. It's impossible to retrain the whole model to get embeddings for new users. So in real world, how do they deal with new users?
I have the same question for Matrix Factorization method. Is there any good source for answering those questions?
| In recommendation systems, for methods that use backproporgation to get user feature, do they need to retrain the whole model when a new user is added | CC BY-SA 4.0 | null | 2023-05-24T16:36:04.033 | 2023-05-24T16:36:04.033 | null | null | 150163 | [
"machine-learning",
"deep-learning",
"recommender-system"
] |
121734 | 1 | null | null | 1 | 17 | I am using timeGAN from [ydata-synthetic](https://github.com/ydataai/ydata-synthetic/blob/dev/examples/timeseries/TimeGAN_Synthetic_stock_data.ipynb) repo, and now question is about re-training the model.
Suppose we have trained a model, say `synth1`, based on a certain dataset. Now we have a new dataset which has data characteristics similar to the previous data. I am wondering whether the `ydata-synthetic` package supports loading the pre-trained model `synth1` (from pickle files) and then re-training it into `synth2`?
I did try loading the model first and then running training for about 5 epochs.
I expected that 5 epochs would not change the model significantly, but it generates a very different distribution.
| timeGAN Model Retraining | CC BY-SA 4.0 | null | 2023-05-24T17:28:54.897 | 2023-05-25T07:57:08.877 | null | null | 46384 | [
"time-series",
"gan"
] |
121735 | 1 | null | null | 0 | 29 | I know it's easy to do grid search for a simple Catboost model, such as in here: [https://medium.com/aiplusoau/hyperparameter-tuning-a5fe69d2a6c7](https://medium.com/aiplusoau/hyperparameter-tuning-a5fe69d2a6c7)
by running something like
```
from catboost import CatBoostRegressor
from sklearn.model_selection import GridSearchCV

cbc = CatBoostRegressor()
#create the grid
grid = {'max_depth': [3,4,5],'n_estimators':[100, 200, 300]}
#Instantiate GridSearchCV
gscv = GridSearchCV (estimator = cbc, param_grid = grid, scoring
='accuracy', cv = 5)
#fit the model
gscv.fit(X,y)
#returns the estimator with the best performance
print(gscv.best_estimator_)
```
A method like this does not specify the categorical columns for the CatBoost model.
But my question is: how can I do a grid search with categorical_cols specified?
For example, here is my code how I assign the categorical columns:
```
from catboost import CatBoostRegressor, Pool

categorical_cols = ['site_number','product_key', 'manufacturer_desc']
# initialize Pool
train_pool = Pool(X_train,
y_train,
cat_features=categorical_cols)
test_pool = Pool(X_test,
cat_features=categorical_cols)
# specify the training parameters
model = CatBoostRegressor(iterations=150,
learning_rate = 0.5,
depth=8,
random_seed = 42
)
#train the model
model.fit(train_pool)
```
But this is the model without grid search. The question is how I can still do the grid search with the above categorical_cols specified. The train_pool and test_pool are already defined, and I'm not sure what the best way is.
Thanks!
| How to do grid search for Catboost with categorical_cols | CC BY-SA 4.0 | null | 2023-05-24T18:43:56.183 | 2023-05-24T20:34:20.070 | 2023-05-24T20:34:20.070 | 86650 | 86650 | [
"grid-search",
"catboost"
] |
121736 | 2 | null | 121735 | 0 | null | To perform a grid search with specified categorical columns in CatBoost, you can use the GridSearchCV function from Scikit-learn. You can define a parameter grid with different values for the hyperparameters you want to tune, including the categorical columns. Here's an example:
```
from catboost import Pool, CatBoostRegressor
from sklearn.model_selection import GridSearchCV
# define the parameter grid
params = {
'iterations': [100, 150, 200],
'learning_rate': [0.1, 0.5, 1],
'depth': [6, 8, 10],
'cat_features': [['site_number', 'product_key', 'manufacturer_desc'],
['site_number', 'product_key'],
['product_key', 'manufacturer_desc']]
}
# initialize Pool
train_pool = Pool(X_train,
y_train,
cat_features=categorical_cols)
test_pool = Pool(X_test,
cat_features=categorical_cols)
# initialize the model
cat = CatBoostRegressor(random_seed=42, silent=True)
# perform grid search with 5-fold cross-validation
grid_search = GridSearchCV(cat, param_grid=params, cv=5)
# fit the grid search to the raw features and labels; GridSearchCV needs X and y
# so it can split them for cross-validation (the Pool objects above are kept only
# for parity with the question's code)
grid_search.fit(X_train, y_train)
# print the best hyperparameters
print(grid_search.best_params_)
```
In this example, the `params` dictionary contains different values for the `iterations`, `learning_rate`, `depth`, and `cat_features` hyperparameters. The `cat_features` parameter takes a list of lists, where each list is a different combination of categorical columns. The `GridSearchCV` function performs a grid search with 5-fold cross-validation to find the best combination of hyperparameters. The best hyperparameters can be accessed with the `best_params_` attribute of the `GridSearchCV` object (including the ones you provided in your code if they merit the best results).
| null | CC BY-SA 4.0 | null | 2023-05-24T20:33:34.783 | 2023-05-24T20:33:34.783 | null | null | 149968 | null |
121737 | 2 | null | 120227 | 0 | null | You have a list of keyphrases that you need to extract from documents. You could use NER for this purpose. But looking at the size of the keyphrases (3000) it will be a difficult task because you would have to first annotate the keyphrases in the documents. After that you can train a NER model to make it learn to look for those phrases and extract them.
There are many NER models out there. Start with the SpaCy library first as it gives a simple but effective framework for NER. You can try multiple BERT based models using the SpaCy library. You can use any of the NLP models available on HuggingFace inside SpaCy for NER.
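A minimal sketch of the annotation format spaCy expects for such a NER model (the sentence, character offsets, and label name below are made up for illustration):
```
import spacy
from spacy.tokens import DocBin

nlp = spacy.blank("en")
train_examples = [
    ("The report covers ongoing operations and property damage.",
     {"entities": [(18, 36, "KEYPHRASE"), (41, 56, "KEYPHRASE")]}),
]

db = DocBin()
for text, ann in train_examples:
    doc = nlp.make_doc(text)
    spans = [doc.char_span(s, e, label=lbl) for s, e, lbl in ann["entities"]]
    doc.ents = [sp for sp in spans if sp is not None]
    db.add(doc)
db.to_disk("./train.spacy")   # then train via spaCy's config-based training workflow
```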
| null | CC BY-SA 4.0 | null | 2023-05-25T05:48:26.473 | 2023-05-25T05:48:26.473 | null | null | 119921 | null |
121738 | 1 | null | null | 1 | 22 | EDIT : If I had to match single worded phrases, I could first tokenize the text from the document and then calculate the cosine similarity of all the tokens with all the keywords from the `keyword_list`. But the issue is that I might have single worded or multi worded keyphrases present in the `keyword_list`. Even if I try to use `ngrams`, how would I know what length of `ngrams` to use?
I have searched and read many articles/questions regarding this but could not find a solution.
Problem Statement : I am trying to extract similar keywords/phrases from a document, based on a pre-curated list of keywords/phrases.
For example below is the list:
```
keyword_list = ['your work', 'ongoing operations', 'completed operations', 'your name', 'bodily injury', 'property damage',
'to the extent permitted by law', 'is required by a contract or agreement']
```
I also have the text I extracted from the documents using OCR. Let's say the text is as below:
```
text = "In light of your ongoing operations, your name is an approximation of your working models. The contract requires that the damage done to the property must be borne by both the parties, as permitted by the law."
```
Now I want to extract all the keywords/phrases that occur in the keyword_list. In addition to that I also want to extract similar keyphrases (by similar I mean similar in context or meaning but worded differently). So the logic/model should be able to extract the following terms:
```
output = ["ongoing operations", "your name", "your working", "The contract requires", "damage done to the property", "as permitted by the law"]
```
We can see that `ongoing operations` and `your name` are present in the `keyword_list` and hence are extracted.
But `your working`, `The contract requires`, `damage done to the property`, `as permitted by the law` are also extracted because they have the same meaning/context to `your work`, `is required by a contract or agreement`, `property damage`, `to the extent permitted by law`.
For the phrases matching completely (`ongoing operations` and `your name`), I have written a logic which uses regex to match the phrases. But for the phrases which have the same meaning/context but worded differently, I am unsure how to proceed. I think a Machine learning or Deep learning approach would be suitable here but I don't know which exact approach!
Any help is appreciated!
| Extract phrases/keywords that are SIMILAR to a python list of keyword/phrases, from a document | CC BY-SA 4.0 | null | 2023-05-25T07:08:58.643 | 2023-05-27T04:36:49.203 | 2023-05-27T04:36:49.203 | 119921 | 119921 | [
"machine-learning",
"python",
"deep-learning",
"nlp"
] |
121739 | 2 | null | 121734 | 0 | null | The performance of the model after re-training it on a new dataset will depend on the similarity between the old and new datasets like you mentioned. If the new dataset is significantly different from the original dataset used to train the pre-trained model, the performance of the model may be affected significantly.
If the model is generating a very different distribution after re-training, there are a few things you can try to fix this issue. First, you can try adjusting the hyperparameters of the model, such as the learning rate or batch size, to see if this improves the generated distribution. Additionally, you can try training the model for more epochs than 5 to allow it to better adapt to the new dataset. If the generated distribution is still unstable, you can try using more advanced techniques such as adversarial training or adding regularization to the model to improve its stability.
| null | CC BY-SA 4.0 | null | 2023-05-25T07:57:08.877 | 2023-05-25T07:57:08.877 | null | null | 149968 | null |
121740 | 2 | null | 121730 | 0 | null | Your instinct is right, you will probably need to use a fixed effects model to examine the relationship between personality traits and feedback outcomes, and a multilevel model to examine the moderating effect of virtual working on extraversion.
For the fixed effects model, you can use logistic regression to predict whether someone will give feedback at work and ordinal regression to predict the type of feedback they will provide (positive or negative). You can include the five personality traits as predictors in the model.
For the multilevel model, you can use a linear mixed-effects model to examine the moderating effect of virtual working on extraversion. You can include virtual working as a time-varying predictor and extraversion as a non-time varying predictor. You can also include random intercepts and slopes for each participant to account for the repeated measures design.
You can use the `lme4` package in R to run the multilevel model. You can also use visualization techniques to examine the distribution of your outcome variables, and model diagnostics to evaluate how well the model fits. If the fit is not satisfactory, you can adjust the random-effects structure or the set of predictors included in the model.
| null | CC BY-SA 4.0 | null | 2023-05-25T08:00:15.053 | 2023-05-25T08:00:15.053 | null | null | 149968 | null |
121741 | 2 | null | 121722 | 0 | null | When labeling anomalies in images, it's important to be consistent and clear in your approach. In the case of holes in a surface, you have a few options for labeling. One approach, as you mentioned, is to label each individual hole with its own bounding box. This approach allows for more precise detection of each anomaly and can be useful if you need to know the location and size of each hole.
Alternatively, you could group smaller anomalies together into one bounding box. This approach may be more efficient and easier to label, but may result in less precise detection of individual anomalies. Ultimately, the approach you choose will depend on your specific use case and the level of precision required for detection.
Do you have any more information about the holes and your end goal? Being more specific or providing examples may help others answer your question.
| null | CC BY-SA 4.0 | null | 2023-05-25T08:04:16.237 | 2023-05-25T08:04:16.237 | null | null | 149968 | null |
121742 | 2 | null | 121715 | 0 | null | The loss gradient of the output value of the discriminator could be calculated as follows:
- For real data, the gradient would be -1/D(x) since we want to maximize the log likelihood of D(x) and hence we move in the direction that minimizes D(x).
- For generated data, the gradient could be 1/(1 - D(G(z))) since we want to maximize the log likelihood of 1 - D(G(z)) and hence we move in the direction that maximizes 1 - D(G(z)).
The loss gradient of the generator output could be calculated as follows:
- The gradient could be 1/D(G(z)) since we want to maximize the log likelihood of D(G(z)) and hence we move in the direction that maximizes D(G(z)).
| null | CC BY-SA 4.0 | null | 2023-05-25T08:08:19.497 | 2023-05-25T08:08:19.497 | null | null | 149968 | null |
121743 | 2 | null | 121681 | 0 | null | It sounds like you've tried many different approaches and have a good understanding of the problem you're trying to solve. One thing that stands out to me is the lack of node features beyond the feature of interest and the query target. You mentioned that you don't believe this should interfere with the simple objective, but it's possible that more features could help the model differentiate nodes better and make more meaningful predictions.
Although you have already tried different loss functions, an idea worth trying would be a loss function that is more tailored to your goal of emitting a subset of nodes whose features sum to the target query. One such loss could be a modified version of the binary cross-entropy that additionally penalizes the model when the features of the emitted nodes do not sum to the target query, rather than just penalizing incorrect binary predictions (see the sketch below).
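One possible way to write such a loss in PyTorch (a sketch only; the function and argument names are mine, and `alpha` would need tuning):
```
import torch
import torch.nn.functional as F

def subset_sum_loss(logits, target_mask, node_features, query, alpha=1.0):
    """Binary cross-entropy on the node mask plus a penalty on the implied feature sum."""
    bce = F.binary_cross_entropy_with_logits(logits, target_mask)
    probs = torch.sigmoid(logits)
    implied_sum = (probs * node_features).sum()
    penalty = (implied_sum - query) ** 2
    return bce + alpha * penalty

# toy usage with made-up values
logits = torch.randn(6)
target = torch.tensor([1., 0., 0., 1., 0., 0.])
feats = torch.rand(6)
print(subset_sum_loss(logits, target, feats, query=feats[[0, 3]].sum()))
```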
You could also try experimenting with different hyperparameters, such as the number of layers, hidden size, and activation functions. It's possible that a deeper or wider architecture could help the model learn more complex relationships between nodes and more accurately predict the target subgraph.
Lastly, with your large dataset, you could try using a subset of the data for training and a different subset for validation to ensure that the model is not just overfitting to a small set of graphs. It's also possible that the sparsity of the target node masks is making it difficult for the model to learn, so you could try generating synthetic masks with a higher density to see if that improves performance.
| null | CC BY-SA 4.0 | null | 2023-05-25T08:24:44.740 | 2023-05-25T08:24:44.740 | null | null | 149968 | null |
121745 | 1 | 121747 | null | 0 | 33 | The task is to predict sentiment from 1 to 10 based on Russian reviews. The training data size is 20000 records, of which 1000 were preserved as a validation set. The preprocessing steps included punctuation removal, digit removal, Latin character removal, stopword removal, and lemmatization. Since the data was imbalanced, I decided to downsample it. After that, TF-IDF vectorization was applied. At the end, I got this training dataset:
[](https://i.stack.imgur.com/EWo0m.png)
The next step was the validation set TF-IDF transformation:
[](https://i.stack.imgur.com/HAv7x.png)
As a classifier model, I chose MultinomialNB (I read it is useful for text classification tasks and sparse data). The training data fit was pretty quick:
```
# TODO: create a Multinomial Naive Bayes Classificator
clf = MultinomialNB(force_alpha=True)
clf.fit(X_res, y_res.values.ravel())
```
But the problem was in model evaluation part:
```
# TODO: model evaluation
print(clf.score(X_res, y_res.values.ravel()))
print(clf.score(X_val, y_val.values.ravel()))
y_pred = clf.predict(X_val)
print(precision_recall_fscore_support(y_val, y_pred, average='macro'))
```
Output:
```
0.9352409638554217
0.222
(0.17081898127154763, 0.1893033502842826, 0.16303596541199034, None)
```
It is obvious that the model is overfitting, but what do I do? I tried to use SVC, KNeighborsClassifier, DecisionTreeClassifier, RandomForestClassifier, and GaussianNB, but everything remained the same. I tried to play around with the MultinomialNB hyperparameter `alpha` but `force_alpha=True` option is the best so far.
| Why my sentiment analysis model is overfitting? | CC BY-SA 4.0 | null | 2023-05-25T10:00:44.040 | 2023-05-26T06:06:23.947 | 2023-05-26T06:06:23.947 | 150186 | 150186 | [
"classification",
"nlp",
"text-classification",
"sentiment-analysis",
"tfidf"
] |
121746 | 1 | null | null | 4 | 248 | I have an equation given by:
$$
\frac{\mathrm{d} s}{\mathrm{d} t} = 4a - 2s + \lambda(s)
$$
where, $a$ is an input constant and $\lambda$ is a non-linear term that depends on $s$.
I know that the true solution for $\lambda\left(s\right)$ is $\sin \left( s\right) \cos\left(s\right)$
I have generated data from [0, 5]s with the true solution and with the equation excluding the non-linear term for a range of initial conditions and a range of $a$'s.
```
import numpy as np
import pandas as pd
from scipy.integrate import solve_ivp
def foo_true(t, s, a):
ds_dt = 4*a -2*s + np.sin(s)*np.cos(s)
return ds_dt
def foo(t, s, a):
ds_dt = 4*a -2*s
return ds_dt
# Settings:
t = np.linspace(0, 5, 1000)
a_s = np.arange(1, 10)
s0_s = np.arange(1, 10)
# Store the data
df_true = pd.DataFrame({'time': t})
df = pd.DataFrame({'time': t})
# Generate the data
for a in a_s:
for s0 in s0_s:
sol_true = solve_ivp(foo_true, (t[0], t[-1]), (s0,), t_eval=t, args=(a,))
df_true[f'{a}_{s0}'] = sol_true.y.T
for a in a_s:
for s0 in s0_s:
sol = solve_ivp(foo, (t[0], t[-1]), (s0,), t_eval=t, args=(a,))
df[f'{a}_{s0}'] = sol.y.T
```
In the above code, `df_true` is a data frame containing the actual dynamics of the system and `df` is the dynamics with a discrepancy.
The figure below shows an example of the discrepancy in the data:
[](https://i.stack.imgur.com/0sQW7.png)
Given that I know part of the physics, how can I model $\lambda$ with `pyTorch`?
Can I look into a paper/repo with a conceptually similar example?
| Modeling uncertainty from known physics | CC BY-SA 4.0 | null | 2023-05-25T10:14:14.173 | 2023-05-26T08:35:23.517 | 2023-05-26T08:35:23.517 | 83275 | 149991 | [
"deep-learning",
"neural-network",
"time-series",
"machine-learning-model",
"rnn"
] |
121747 | 2 | null | 121745 | 1 | null | There might be multiple reasons for the overfitting, some of which are:
1.) Scaling the data
2.) You have not mentioned which parameter values you have selected in the TF-IDF vectorizer. Some of them might help to reduce overfitting; `ngram_range` and `max_features` are two you can play around with (see the sketch after this list).
3.) Make sure you are using `fit_transform` on the train set only and not on the test set for both tfidf and scaling. Use only `transform` for the test set.
4.) Try to tune the hyperparameters of other models such as `RandomForest` and `SVC`.
5.) Use other word embedding techniques such as `Word2Vec`, `Glove` or `Fasttext` as they capture the word context as well as opposed to just the word frequency (which is happening in the case of tfidf).
6.) Try different models. You are just testing 4-5 models when in fact there are so many classification models out there. Try as many as you can to see which one gives the best result.
7.) Last but not the least,increase the data size. Since you are down sampling the data (I don't know by how much), this also might be a factor in overfitting.
Try to implement all of the above points and let me know whether results improve.
Cheers!
| null | CC BY-SA 4.0 | null | 2023-05-25T10:47:35.487 | 2023-05-25T10:47:35.487 | null | null | 119921 | null |
121748 | 2 | null | 121746 | 4 | null | Yes. I was searching for the same thing a while back and I came across the concept of PINNs.
Physics-informed neural networks (PINNs) are neural networks that encode model equations, such as partial differential equations (PDEs), as a component of the neural network itself.
[](https://i.stack.imgur.com/m0fXb.gif)
[This article](https://www.dhuality.com/posts/2022-04-20-physical-neural-networks---the-harmonic-oscillator/) describes PINNs for a harmonic oscillator! [Code](https://github.com/benmoseley/harmonic-oscillator-pinn) for the harmonic oscillator problem.
[This link](https://towardsdatascience.com/physics-informed-neural-networks-pinns-an-intuitive-guide-fff138069563) describes the concept in detail with the help of a simple projectile motion example.
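Since the question specifically asks about PyTorch, here is my own minimal sketch of the simplest variant of this idea for the ODE above: regress the unknown term $\lambda(s)$ on the residual between the observed derivative (finite differences) and the known physics. It assumes the `df_true` data frame built in the question's code is available; the network size and training loop are arbitrary choices.
```
import numpy as np
import torch
import torch.nn as nn

lam = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(lam.parameters(), lr=1e-3)

t = np.linspace(0, 5, 1000)
dt = t[1] - t[0]
s_list, r_list = [], []
for a in range(1, 10):
    for s0 in range(1, 10):
        s = df_true[f'{a}_{s0}'].to_numpy()       # trajectories generated in the question
        ds_dt = np.gradient(s, dt)                # observed derivative
        r_list.append(ds_dt - (4 * a - 2 * s))    # the part the known physics cannot explain
        s_list.append(s)

s_t = torch.tensor(np.concatenate(s_list), dtype=torch.float32).unsqueeze(1)
r_t = torch.tensor(np.concatenate(r_list), dtype=torch.float32).unsqueeze(1)

for epoch in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(lam(s_t), r_t)
    loss.backward()
    opt.step()

# lam should now approximate sin(s)*cos(s) over the range of s covered by the data
```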
Cheers!
| null | CC BY-SA 4.0 | null | 2023-05-25T10:57:25.703 | 2023-05-25T10:57:25.703 | null | null | 119921 | null |
121749 | 1 | null | null | 0 | 28 | Looking for some advice.
I am working on an anomaly detection problem: I am looking at parcels being transported from A to B and want to identify which parcels are considered anomalies for given routes.
My dataset contains millions of records something like the following
|Parcel |From |To |
|------|----|--|
|TOYS |US |Spain |
|TOYS |US |Spain |
|TOYS |US |Spain |
|CARS |US |Spain |
|CARS |US |Spain |
|CARS |US |Spain |
|TOYS |US |JAPAN |
After some googling, I have tried to use Isolation Forest but I seem to be getting random results.
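Roughly what I tried looks like the sketch below (the encoder and parameters here are illustrative, not my exact code):
```
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from sklearn.ensemble import IsolationForest

df = pd.DataFrame({
    "Parcel": ["TOYS", "TOYS", "TOYS", "CARS", "CARS", "CARS", "TOYS"],
    "From":   ["US"] * 7,
    "To":     ["Spain"] * 6 + ["JAPAN"],
})

# Ordinal-encode the categories, then fit an Isolation Forest on the integer codes
X = OrdinalEncoder().fit_transform(df[["Parcel", "From", "To"]])
clf = IsolationForest(random_state=42).fit(X)
df["anomaly_score"] = clf.decision_function(X)  # lower = more anomalous
print(df.sort_values("anomaly_score"))
```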
I suspect that this is due to the encoding of my categories, as ordinal relationships are being created between the encoded values. Is there a better algorithm that I should be using, or any pointers that you can give?
| Anomaly Detection: Large number of categories | CC BY-SA 4.0 | null | 2023-05-25T10:57:45.377 | 2023-05-26T08:36:28.460 | 2023-05-26T08:36:28.460 | 83275 | 150196 | [
"machine-learning",
"feature-engineering",
"anomaly-detection",
"anomaly"
] |
121750 | 1 | null | null | 0 | 9 | To be clear, this question is not about how to impute missing data, but how to treat an exchange dataset that will never have data on weekends and occasionally lacks data on market holidays. I'm working with the DeepAREstimator from gluonTS. The estimator needs the dataset to be provided in a uniform manner (i.e. no gaps on nights, weekends, holidays, etc.). I have filled out the dataset using interpolation or forward fill, but I was wondering what other methods, or even other models, can be used in this situation.
| How to deal with systemic gaps in timeseries data | CC BY-SA 4.0 | null | 2023-05-25T11:55:50.783 | 2023-05-25T11:55:50.783 | null | null | 150199 | [
"deep-learning",
"time-series",
"forecasting",
"deepar"
] |
121751 | 1 | null | null | -2 | 41 | To perform 10-fold cross-validation during model training and minimize bias, I have used the attempt below:
```
from sklearn import model_selection
X_train, X_test, y_train, y_test = model_selection.train_test_split(
    x, y, test_size=0.2, random_state=42)
```
It is giving a `NameError`:
```
NameError Traceback
Cell In[25], line 4
      1 from sklearn.model_selection import train_test_split
      3 X_train, X_test, y_train, y_test = model_selection.train_test_split(
----> 4 x, y, test_size=0.2, random_state=42)
NameError: name 'x' is not defined
```
| how do to divide the datasets into 80/20 per cent of training and test sets and 10 foldcross-validation during model training to minimize bias | CC BY-SA 4.0 | null | 2023-05-25T12:38:52.330 | 2023-05-29T22:28:44.167 | 2023-05-29T22:28:44.167 | 150197 | 150197 | [
"machine-learning"
] |
121752 | 1 | null | null | 0 | 15 | I am currently working on a very imbalanced dataset:
- 24 million transactions (rows of data)
- 30,000 fraudulent transactions (0.1% of total transactions)
The dataset is split by year into three sets: training, validation and test. I am using XGBoost as the model to predict whether a transaction is fraudulent or not. After tuning some hyperparameters via Optuna, I have obtained the following results.
Model parameters and loss
```
from sklearn.metrics import accuracy_score, classification_report, precision_score, recall_score, f1_score, roc_auc_score, precision_recall_curve, auc, average_precision_score, ConfusionMatrixDisplay, confusion_matrix
import matplotlib.pyplot as plt
evalset = [(train_X, train_y), (val_X,val_y)]
params = {'lambda': 4.056095667860487, 'alpha': 2.860539790760471, 'colsample_bytree': 0.4, 'subsample': 1, 'learning_rate': 0.03, 'n_estimators': 300, 'max_depth': 44, 'random_state': 42, 'min_child_weight': 27}
model = xgb.XGBClassifier(**params, scale_pos_weight = estimate, tree_method = "gpu_hist")
model.fit(train_X,train_y,verbose = 10, eval_metric='logloss', eval_set=evalset)
```
```
[0] validation_0-logloss:0.66446 validation_1-logloss:0.66450
[10] validation_0-logloss:0.45427 validation_1-logloss:0.45036
[20] validation_0-logloss:0.32225 validation_1-logloss:0.31836
[30] validation_0-logloss:0.23406 validation_1-logloss:0.22862
[40] validation_0-logloss:0.17265 validation_1-logloss:0.16726
[50] validation_0-logloss:0.13003 validation_1-logloss:0.12363
[60] validation_0-logloss:0.09801 validation_1-logloss:0.09230
[70] validation_0-logloss:0.07546 validation_1-logloss:0.06987
[80] validation_0-logloss:0.05857 validation_1-logloss:0.05278
[90] validation_0-logloss:0.04581 validation_1-logloss:0.04001
[100] validation_0-logloss:0.03605 validation_1-logloss:0.03058
[110] validation_0-logloss:0.02911 validation_1-logloss:0.02373
[120] validation_0-logloss:0.02364 validation_1-logloss:0.01859
[130] validation_0-logloss:0.01966 validation_1-logloss:0.01472
[140] validation_0-logloss:0.01624 validation_1-logloss:0.01172
[150] validation_0-logloss:0.01340 validation_1-logloss:0.00927
[160] validation_0-logloss:0.01120 validation_1-logloss:0.00752
[170] validation_0-logloss:0.00959 validation_1-logloss:0.00616
[180] validation_0-logloss:0.00839 validation_1-logloss:0.00515
[190] validation_0-logloss:0.00725 validation_1-logloss:0.00429
[200] validation_0-logloss:0.00647 validation_1-logloss:0.00370
[210] validation_0-logloss:0.00580 validation_1-logloss:0.00324
[220] validation_0-logloss:0.00520 validation_1-logloss:0.00284
[230] validation_0-logloss:0.00468 validation_1-logloss:0.00253
[240] validation_0-logloss:0.00429 validation_1-logloss:0.00226
[250] validation_0-logloss:0.00391 validation_1-logloss:0.00205
[260] validation_0-logloss:0.00362 validation_1-logloss:0.00191
[270] validation_0-logloss:0.00336 validation_1-logloss:0.00180
[280] validation_0-logloss:0.00313 validation_1-logloss:0.00171
[290] validation_0-logloss:0.00291 validation_1-logloss:0.00165
[299] validation_0-logloss:0.00276 validation_1-logloss:0.00161
```
Learning curve
[](https://i.stack.imgur.com/vHfsE.png)
F1 and PR AUC scores
```
F1 Score on Training Data : 0.8489783532267853
F1 Score on Test Data : 0.7865990990990992
PR AUC score on Training Data : 0.9996174980952233
PR AUC score on Test Data : 0.9174896435002448
```
Classification reports of training/testing sets
```
Training report
precision recall f1-score support
0 1.00 1.00 1.00 20579668
1 0.74 1.00 0.85 25179
accuracy 1.00 20604847
macro avg 0.87 1.00 0.92 20604847
weighted avg 1.00 1.00 1.00 20604847
Test report
precision recall f1-score support
0 1.00 1.00 1.00 2058351
1 0.95 0.67 0.79 2087
accuracy 1.00 2060438
macro avg 0.98 0.83 0.89 2060438
weighted avg 1.00 1.00 1.00 2060438
```
Confusion matrices (1st is training set, 2nd is testing set)
[](https://i.stack.imgur.com/Bzthq.png)
[](https://i.stack.imgur.com/nRvhK.png)
I see that the PR AUC on the training dataset is nearly 1 and the recall score is perfect, so I suspect that my model is overfitting. However, when I test on the validation and test sets, the results are not too far off and still achieve what I believe to be decent scores.
I would love to hear your thoughts on this, and thank you all in advance and I would appreciate any response!
| Model returns near perfect PR-AUC score but other metrics seem fine. Is my model overfitting? | CC BY-SA 4.0 | null | 2023-05-25T14:11:39.823 | 2023-05-25T14:11:39.823 | null | null | 147867 | [
"machine-learning",
"classification",
"class-imbalance",
"overfitting"
] |
121753 | 1 | null | null | 0 | 15 | I want to predict whether a client will renew his/her subscription based on grocery consumption patterns. Suppose each order contains only one type of grocery.
I have a DataFrame containing ratios of values for different types of groceries for each client and the total number of orders. Each ratio represents the number of groceries of a specific type divided by the total number of groceries ordered. However, the reliability of these ratios varies based on the total number of orders.
For example, if a client has only placed one order of a particular type, the ratio for that type will be 1.00 (100%). However, if another client has placed 97 orders of the same type out of 100 orders in total, the ratio would be 0.97 (97%).
|Client ID |Total Orders |Type A Ratio |Type B Ratio |Type C Ratio |
|---------|------------|------------|------------|------------|
|0 |1 |1.00 |0.00 |0.00 |
|1 |100 |0.97 |0.01 |0.02 |
|2 |5 |0.60 |0.20 |0.10 |
|3 |10 |0.30 |0.50 |0.20 |
|4 |50 |0.80 |0.20 |0.00 |
I am training a machine learning model using XGBoost, but I am struggling to capture the relationship between the ratios and the total number of orders so that the reliability of the ratios is weighted appropriately. It appears that the model is not effectively learning this relational information: Client 1's ratio for type A groceries is more reliable than Client 0's, but the model appears to only see that the ratio for Client 0 is larger than Client 1's.
I would appreciate any suggestions on how to address this issue. How can I incorporate the varying reliability of the ratios into my machine learning model? Are there any techniques or approaches that can help the model learn the importance of different ratios based on their reliability?
Thank you in advance for your assistance!
| How to represent varying reliability of ratios calculations in a dataset? | CC BY-SA 4.0 | null | 2023-05-25T14:17:55.650 | 2023-05-26T14:22:10.280 | 2023-05-26T14:22:10.280 | 149255 | 149255 | [
"decision-trees",
"methodology"
] |
121754 | 2 | null | 121753 | 0 | null | So, your independent variables are:
- total number of customer orders
- ratio of a specific product to the total amount of orders
And your dependent variable is:
- how reliable is the ratio (in percent?)
If you want to use machine learning to solve this problem, you need a labeled dataset that provides sufficient training data, so the algorithm or neural network can capture the pattern. Labeled means that it must already contain both the independent and the dependent variables.
If you do not have such a dataset, and instead you only have `total number of customer orders` and `ratio of specific product to total amount of orders`, then I do not think that a machine learning model can capture the relation, because, well, it does not exist: the total number of orders does not determine the ratios in any way.
Instead, you can just create a mathematical formula that outputs a higher reliability number when the total amount of orders is higher, and then test whether the results suit your task.
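For example, one simple option (this specific formula is just an illustration, not the only choice) is to shrink each client's ratio toward the overall average, with the amount of shrinkage controlled by how many orders back it up:
```
import pandas as pd

df = pd.DataFrame({
    "total_orders": [1, 100, 5, 10, 50],
    "type_a_ratio": [1.00, 0.97, 0.60, 0.30, 0.80],
})

# Pseudo-count k: how many orders a ratio needs before we start trusting it.
k = 10
global_mean = (df["type_a_ratio"] * df["total_orders"]).sum() / df["total_orders"].sum()

# Ratios backed by few orders get pulled toward the global mean,
# ratios backed by many orders stay close to their raw value.
df["type_a_ratio_shrunk"] = (
    df["type_a_ratio"] * df["total_orders"] + k * global_mean
) / (df["total_orders"] + k)
print(df)
```
The shrunk ratio can then be fed to the model (possibly alongside the order count) instead of the raw ratio.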
| null | CC BY-SA 4.0 | null | 2023-05-25T14:40:39.167 | 2023-05-25T14:40:39.167 | null | null | 150130 | null |
121755 | 1 | null | null | 0 | 13 | I am currently training an XGBoost model for binary classification. I have fitted and predicted with the model but when I try to get the "gain" type feature importances, the results differ based on what method/function I call to get the importances. The following chunks of code illustrate what I am talking about.
Chunk 1:
```
# Access the booster object
booster = model_tuned.get_booster()
#get importances
importance = booster.get_score(importance_type='gain')
print("Feature Importance's(gain)", importance)
Output(truncated for space):
Feature Importance's(gain) {'Value_Engage_Bus_Number_cmd_l_121001000_110_0.0+10': 2493.173143016667, 'Value_ADF_Bearing_121322000_170+8': 1444.65723, 'Value_Speed_Brake_Panel_9_Position_71472000_110': 861.343201, 'Value_Total_Cumulative_Flight_Dur__Seconds_92029000_140+8': 428.6793822, 'Value_Hybrid_EW_Velocity_True_80341000_110+10': 353.6710613333334, 'Value_Hybrid_Wind_Speed_80031000_110+7': 468.978516, 'Value_HMU_Torque_Motor_lane_a_22469000_140+1': 222.413793325, 'Value_Low_Limit_Valve__Dead_Band_Offset_Error_200249000_180+4': 331.243164, 'Value_Yaw_Servo_Torque_71671000_110+10': 323.87207, 'Value_Magnetic_Variation_80121000_110+8': 251.5, 'Value_Avionics_Ch_2_Timeout_Status_fdbck_l_121021000_180+7': 4044.4652635, 'Value_Engage_Bus_Number_cmd_l_121001000_110_1.0+10': 976.6852322999999, 'Value_Total_Cumulative_Flight_Dur__Seconds_22029000_140+4': 907.075684, 'Value_Cabin_Altitude_121711000_110+4': 766.206543, 'Value_Engage_Bus_Number_cmd_l_121001000_110_2.0+8': 663.79834, 'Value_N1_Red_Line_Trimmed_lane_b_91449000_160+10': 443.595215, 'Value_RVDT_Position_40539000_140+6': 405.405273, 'Value_N1_Max_Cruise_Rating_123601000_150+5': 188.24823778, 'Value_Arm_1_72040000_110_3.0+7': 1800.49854, 'Value_Stall_Warning_Speed_Ratio_120411000_140+4': 482.21429450000005,
```
Chunk 2:
```
from xgboost import plot_importance
plot_importance(model_tuned ,max_num_features=30, title="Feature Importance-Gain",importance_type="gain")
```
Output:
[](https://i.stack.imgur.com/HdYtB.png)
Why are the outputted features different when both methods use the same model and importance type?
| Why am I getting differing "gain" feature importances from XGBoost? | CC BY-SA 4.0 | null | 2023-05-25T16:40:22.430 | 2023-05-27T17:19:50.793 | 2023-05-27T17:19:50.793 | 29169 | 140387 | [
"xgboost",
"feature-importances"
] |
121756 | 1 | null | null | 0 | 32 | I'm working on a text generation task, finetuning a pretrained model based on Huggingface Transformers.
To evaluate the quality of generated text I'm currently using automatic metrics like BLEU, METEOR, ROUGE and CIDEr, and also I'm saving a few samples and seeing if they make sense.
However, I'm having doubts about how to monitor training: e.g. in classification, I usually calculate loss and accuracy to see if the training is going well (and otherwise stop it) and to select the best epoch for the model, but in text generation I see these additional issues:
- (Causal) text generation is slow, while training can be more parallelized with teacher forcing. Does it make sense to perform validation and generate text after every training epoch?
- Text generation depends on an additional set of hyperparameters which greatly condition the quality of generated text, such as the decoding technique (greedy vs sampling-based or other techniques), number of beams, temperature, maximum length, etc. None of these parameters are actually used during training. I could also find the best combination after training, but then how can I monitor the training?
- The HuggingFace generation API does not provide the loss during prediction, i.e. I cannot generate text and calculate the cross-entropy loss (at least out-of-the-box) during validation. To calculate the loss I could either:
  - Create a custom generation procedure which includes the loss.
  - Perform two passes on all data during validation (one with model.generate and one with model.forward).
Both these alternatives are suboptimal, and this made me think that it is not common to calculate validation loss in text generation tasks. Is that true?
What is the common way to monitor training of text generation models during finetuning?
| How to perform validation on text generation models? | CC BY-SA 4.0 | null | 2023-05-25T16:46:55.967 | 2023-05-26T08:20:51.420 | 2023-05-26T08:20:51.420 | 144931 | 144931 | [
"deep-learning",
"nlp",
"text-generation",
"huggingface",
"nlg"
] |
121757 | 1 | null | null | 1 | 14 | I have a set of podcast episode transcriptions in Arabic. I wish to convert these to embedding vectors so I can run a similarity comparison of them. Here's the summary statistics on the episodes:
[](https://i.stack.imgur.com/f4MeW.png)
Here's the model I used
[https://huggingface.co/asafaya/bert-base-arabic](https://huggingface.co/asafaya/bert-base-arabic)
So the problem I'm running into is that the initial model I tried only accepts context windows of 512 tokens. This means I can't run the whole sequence through it.
I tried chunking the text and then taking the average of the chunk vectors, but this didn't work. It seemed to create noise as all the vectors appeared similar even though their texts were not.
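Concretely, the chunk-and-average approach I tried looks roughly like the sketch below (the chunking by word count and the mean pooling are illustrative, not my exact code):
```
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-base-arabic")
model = AutoModel.from_pretrained("asafaya/bert-base-arabic")

def embed_long_text(text, words_per_chunk=200):
    words = text.split()
    chunks = [" ".join(words[i:i + words_per_chunk])
              for i in range(0, len(words), words_per_chunk)]
    vecs = []
    with torch.no_grad():
        for chunk in chunks:
            inputs = tokenizer(chunk, return_tensors="pt",
                               truncation=True, max_length=512)
            out = model(**inputs)
            # Mean-pool the token embeddings of this chunk
            vecs.append(out.last_hidden_state.mean(dim=1).squeeze(0))
    # Average the chunk vectors into one document vector
    return torch.stack(vecs).mean(dim=0)
```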
How do people usually handle creating an embedding vector of longer texts?
| How do people usually handle creating an embedding vector of longer texts (32000 characters? | CC BY-SA 4.0 | null | 2023-05-25T17:28:00.233 | 2023-05-25T20:16:49.437 | null | null | 5301 | [
"python",
"nlp",
"text"
] |
121758 | 1 | null | null | 0 | 10 | I started using the MCC (Matthews correlation coefficient) metric, but I am getting unexpected values. When the given target and prediction contain only one class, either all positive or all negative (case - 1), the output of BinaryMatthewsCorrCoef is always 0, even though we can clearly say that the predictions are the same as the targets. And for case - 2, where the preds and targets are completely different, the MCC is also zero. Is this expected behaviour of MCC, or is there some other issue?
```
mcc = BinaryMatthewsCorrCoef()
# preds and targets are equal (case - 1)
print(mcc(torch.ones(10), torch.ones(10)))
print(mcc(torch.zeros(10), torch.zeros(10)))
# preds and targets are completely different (case - 2)
print(mcc(torch.ones(10), torch.zeros(10)))
```
```
tensor(0)
tensor(0)
tensor(0)
```
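For reference, the standard definition of MCC in terms of the confusion matrix is
$$\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}},$$
so in both cases above at least one factor in the denominator is zero, which is presumably why torchmetrics falls back to returning 0.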
In this [blog](https://bmcgenomics.biomedcentral.com/articles/10.1186/s12864-019-6413-7), they handle these cases slightly differently. Below are the MCC values using their [implementation](https://gist.github.com/lokesh1199/3d899b03e37107565cd91d6ad6d8db90). These values represent model performance more accurately than the torchmetrics version.
case - 1 -> MCC = 1.0
case - 2 -> MCC = -1.0
Kindly tell me whether this should be handled differently.
| torchmetrics BinaryMatthewsCorrCoef outputs 0 if target and prediction contains only one case either positive or negative case | CC BY-SA 4.0 | null | 2023-05-25T17:46:50.983 | 2023-05-25T17:46:50.983 | null | null | 150213 | [
"deep-learning",
"pytorch",
"model-evaluations",
"metric"
] |
121759 | 1 | 121763 | null | 0 | 19 | From what I understand, my code is telling me that my base model is performing at 96% on its training data and 55% on its test data.
And my SMOTE model is performing at ~96% on both.
From my understanding, the SMOTE model performing at 96% on its test data implies that it should perform at around 96% on any new data it is given. However, when I introduce a brand new dataset of identically structured data from a different time period, it performs significantly worse.
Is anyone able to tell me if there's something I've missed/overlooked with the code below? If not, I know to look into the new dataset I've added to look for problems.
My only possible lead at the moment is that I've used the sklearn.preprocessing OrdinalEncoder on both the main and the brand new dataset to turn continuous non-integer codes into integers, which I wonder might be causing a mismatch between the datasets.
I've attached the code for the main model below.
```
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = df.filter(["Feature1","Feature2","Feature3","Feature4",
"Feature5","Feature6","Feature7",
"Feature8","TargetClassification"])
y = df["TargetClassification"].values
X = df.drop("TargetClassification",axis=1)
sm = SMOTE(random_state=42)
X_sm, y_sm = sm.fit_resample(X,y)
XB_train, XB_test, yB_train, yB_test = train_test_split(X,y,train_size=0.7)
XS_train, XS_test, yS_train, yS_test = train_test_split(X_sm,y_sm,train_size=0.7)
my_SMOTE_model = RandomForestClassifier(n_estimators=100,criterion="gini",random_state=1,max_features=4)
my_BASE_model = RandomForestClassifier(n_estimators=100,criterion="gini",random_state=1,max_features=4)
my_BASE_model.fit(XB_train,yB_train)
y_pred = my_BASE_model.predict(X)
BASE_train_acc = round(my_BASE_model.score(XB_train, yB_train)*100,2)
print(f"Base model training accuracy: {BASE_train_acc}")
my_SMOTE_model.fit(X_sm,y_sm)
y_sm_pred = my_SMOTE_model.predict(X_sm)
SMOTE_train_acc = round(my_SMOTE_model.score(XS_train,yS_train)*100,2)
print(f"SMOTE model training accuracy: {SMOTE_train_acc}")
# Prints Base as 96.05, SMOTE as 96.38
yB_test_prediction = my_BASE_model.predict(XB_test)
yS_test_prediction = my_SMOTE_model.predict(XS_test)
BASE_test_acc = accuracy_score(yB_test,yB_test_prediction)
SMOTE_test_acc = accuracy_score(yS_test,yS_test_prediction)
print(f"Base model test accuracy: {BASE_test_acc}")
print(f"SMOTE model test accuracy: {SMOTE_test_acc}")
#Prints Base as 54.9%, SMOTE as 96.5%
```
Thank you for any help
| Struggling with understanding RandomForest model with SMOTE | CC BY-SA 4.0 | null | 2023-05-25T18:17:03.183 | 2023-05-27T00:08:56.613 | 2023-05-25T19:46:22.587 | 149919 | 149919 | [
"machine-learning",
"python",
"scikit-learn",
"random-forest",
"smote"
] |