Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
121760 | 1 | null | null | 1 | 24 |
## My Problem
I'm struggling with the different definitions of batch size, sequence, sequence length and batch length of an RNN, and how to use them correctly.
## First things first - let's clarify the definitions
Consider the following data with two features and two labels.
|timestamp |feature 1 |feature 2 |label 1 |label 2 |
|---------|---------|---------|-------|-------|
|t1 |f1.1 |f2.1 |l1.1 |l2.1 |
|t2 |f1.2 |f2.2 |l1.2 |l2.2 |
|t3 |f1.3 |f2.3 |l1.3 |l2.3 |
|t4 |f1.4 |f2.4 |l1.4 |l2.4 |
|t5 |f1.5 |f2.5 |l1.5 |l2.5 |
|t6 |f1.6 |f2.6 |l1.6 |l2.6 |
|... |... |... |... |... |
Let us assume the system I want to train with the RNN always processes the current and the two previous time stamps. The following example definitions refer to this framework.
Definition Training Example:
A training example is a set of training data which is processed by the neural network at once.
Example: `[f1.1, f2.1]`
Definition Sequence:
A sequence is a set of several training examples which are fed to the network in a row.
Example:
```
[[f1.1, f2.1],
[f1.2, f2.2],
[f1.3, f2.3]]
```
Definition Sequence number:
The number of training examples which need to be processed as one sequence by the RNN is called the sequence number.
Example: `3`
Definition Batch Size:
The batch size is the number of sequences which are forwarded to the RNN before the gradients are calculated.
Example:
```
[[[f1.1, f2.1],
[f1.2, f2.2],
[f1.3, f2.3]],
[[f1.2, f2.2],
[f1.3, f2.3],
[f1.4, f2.4]],
[[f1.3, f2.3],
[f1.4, f2.4],
[f1.5, f2.5]]
[[f1.4, f2.4],
[f1.5, f2.5],
[f1.6, f2.6]]
]
```
Definition Batch Length:
The total number of batches is the batch length.
Example: `1` in the previous example.
Definition Data Length:
The total number of training examples is calculated as the batch length times the batch size times the sequence number.
Example: `3 * 4 * 1` from the previous examples.
## Second - implementation with pyTorch
For the implementation of my RNN I use PyTorch, with the following code as an example. However, if my previous definitions are right, I'm unable to transfer them to the code. I always get errors with the tensor dimensions.
```
class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size, sequence_length):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.batch_size = sequence_length
        self.output_size = output_size
        self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size * sequence_length, output_size)

    def forward(self, x):
        hidden_state = torch.zeros(x.size(0), self.num_layers, self.hidden_size).to(device)
        out, _ = self.rnn(x, hidden_state)
        out = out.reshape(out.shape[0], -1)
        out = self.fc(out)
        return out
```
## Questions
- Are the definitions correct?
- How should the hidden_state be initialized correctly, considering the batch size, batch number, sequence, sequence number, hidden size and number of hidden layers?
- What shape should x have in the forward method, assuming x represents all or part of the previous data example?
Please help me solve the puzzle, ideally with an example for x and the hidden_state based on my data.
Many thanks.
| Understanding batch size, sequence, sequence length and batch length of a RNN | CC BY-SA 4.0 | null | 2023-05-25T18:26:04.610 | 2023-06-02T11:13:31.630 | null | null | 150212 | [
"rnn",
"pytorch"
] |
121761 | 2 | null | 121757 | 1 | null | When dealing with longer texts, you can use a technique called "sliding window" to break the text into smaller segments. This involves taking a window of fixed size and sliding it along the text, one segment at a time. You can then concatenate the vectors of the individual segments together to form a single vector for the whole text.
Another approach is to use a hierarchical model that first encodes the text into sentence-level embeddings, and then aggregates those embeddings into a single document-level embedding.
You can also try using a transformer model that is specifically designed to handle longer sequences, such as the Longformer or the BigBird. These models are able to process sequences of up to tens of thousands of tokens, allowing you to encode entire documents in one go.
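As a rough sketch of the sliding-window idea (the model name, window and stride sizes are just example choices, and I mean-pool the segment vectors instead of concatenating them so the document vector keeps a fixed dimensionality):
```
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # example model choice

def embed_long_text(text: str, window: int = 200, stride: int = 100) -> np.ndarray:
    words = text.split()
    # slide a fixed-size window over the words, with overlap between segments
    segments = [" ".join(words[i:i + window])
                for i in range(0, max(len(words) - window, 0) + 1, stride)]
    seg_vectors = model.encode(segments)   # one embedding per segment
    return seg_vectors.mean(axis=0)        # pool into a single document vector

doc_vector = embed_long_text("some very long document " * 500)
print(doc_vector.shape)
```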
| null | CC BY-SA 4.0 | null | 2023-05-25T20:16:49.437 | 2023-05-25T20:16:49.437 | null | null | 149968 | null |
121762 | 1 | null | null | 0 | 4 | I have a custom model class that calls `mlflow.llm.log_predictions` in the `predict` method like so:
```
class Model:
    ...
    def predict(input) -> Output:
        ...
        mlflow.llm.log_predictions(...)
        ...
```
I'm using that `predict` method in two different contexts:
- After training, before serializing the model - in order to do cross-validation.
- After serializing - to serve "production" predictions.
In the first context (cross-val), I wrap all the code in `with mlflow.start_run():` in order to log data into MLFlow.
In the second context (serving), I'd like to suppress all the MLFlow calls and not log anything.
Moving `mlflow.llm.log_predictions` out of `predict` won't work for me, as the LLM I'm using is a submodel in the overall model, so I don't exactly have access to the LLM inputs/outputs outside of `predict`.
| How to prevent MLFlow from automatically creating a run when a logging function is called? | CC BY-SA 4.0 | null | 2023-05-25T20:54:18.450 | 2023-05-25T20:54:18.450 | null | null | 150071 | [
"mlflow"
] |
121763 | 2 | null | 121759 | 1 | null | Using different ordinal encoders [is certainly not good](https://stackoverflow.com/q/48692500/10495893), but you've also made the error of applying SMOTE before the train-test split ([[1]](https://datascience.stackexchange.com/q/15630/55122), [[2]](https://datascience.stackexchange.com/q/104428/55122)), making the test score optimistically biased. Also, [accuracy is not a great metric](https://stats.stackexchange.com/q/312780/232706), especially in imbalanced settings. Finally, "identical data from a different time period" may well display significantly different relationship between the independent and dependent variable, so some degradation is not unexpected.
| null | CC BY-SA 4.0 | null | 2023-05-25T22:05:32.953 | 2023-05-27T00:08:56.613 | 2023-05-27T00:08:56.613 | 55122 | 55122 | null |
121764 | 1 | null | null | 0 | 25 | What statistical test or methodology should I use for comparing two random forest models, where a different set of variables is made available for each model? I need a test where a power analysis can be used to justify a sample size. I'm considering using a paired t-test (see below).
More specific information follows.
I am creating a virtual species distribution models (SDM) from simulated data. The data used is from a stratified random sampling from a 100x100 grid. The same sampled data is used to compute two different random forest models. One random forest model uses optimal variables, as they use the scale used to generate the virtual data. Scale: how many cells around a given cell are averaged to determine the value of an environmental variable, where the averaged value is used in determining whether a grid point is a presence or absence. The other random forest model has versions of the variables available that were measured at additional scales (including the correct/optimal one). Each environmental variable is one layer of the grid, and is generated using functions available in a virtual SDM modeling package. A species is present or absent based on the combined values of the environmental variables at a grid location, based on a defined suitability function.
I'm considering using a paired t-test, and doing only only sampling for each grid, and computing only one pair of random forest models for each sampling. The power test would tell me, I believe, how many grids I need to generate. The R function pwr.t.test computes the power of a paired t-test. You give pwr.t.test all but one (any one) of the following values, and it gives you the one you left out: sample size, effect size, alpha (p-value), and power. I'm considering using AUC-ROC as the metric.
Is there a better comparison methodology? I need to be able to use a power analysis to justify a sample size.
| What statistical test should I use to compare two random forest models, where each model has a different set of variables available? | CC BY-SA 4.0 | null | 2023-05-25T22:18:57.970 | 2023-05-25T22:25:14.440 | 2023-05-25T22:25:14.440 | 146059 | 146059 | [
"machine-learning",
"random-forest",
"hypothesis-testing",
"simulation"
] |
121765 | 1 | null | null | 0 | 28 | How do you feed data from a data warehouse to Python for ad-hoc analysis?
My day-to-day work is to answer ad-hoc questions, and 95% of the data I need is in our data warehouse. I often query data from our warehouse to CSV file(s), then use Python to load these files with other sources to analyze.
I am going to work with a gigantic data warehouse for which the same CSV method may not be feasible.
Our data warehouse is in Redshift.
What is your experience with feeding Python/R for data analysis?
| Data Analysis Process | CC BY-SA 4.0 | null | 2023-05-25T22:28:41.303 | 2023-05-27T04:50:12.927 | 2023-05-25T22:29:30.170 | 150218 | 150218 | [
"python",
"data-mining",
"data-analysis",
"etl",
"relational-dbms"
] |
121766 | 2 | null | 121746 | 2 | null | Modeling the non-linear term $λ(s)$ in your equation using PyTorch can be approached as a parameter estimation or function approximation problem. Since you know the true solution for $λ(s)$, you can use the generated data to train a neural network to approximate the non-linear term based on the given input and output values.
To model $λ(s)$ using PyTorch, you can treat it as a learnable parameter in a neural network and train the network to approximate the true $λ(s) = sin(s) * cos(s)$ based on the given data. Instead of using a single input $s$, you can use a concatenated input of $s$ and the corresponding solution values. This way, the model can learn the relationship between $λ(s)$ and the given solutions.
Here's an example of how you can implement this using PyTorch:
```
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, Dataset

# Define a custom dataset for the data
class CustomDataset(Dataset):
    def __init__(self, data):
        self.data = data

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

# Define a neural network to approximate λ(s)
class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear = nn.Linear(2, 1)  # Input dimension is 2 (s and solution)

    def forward(self, x):
        # Combine the sin(s)*cos(s) structure with a learned linear correction;
        # the squeeze keeps the output shape (batch,) so it matches the targets
        return torch.sin(x[:, 0]) * torch.cos(x[:, 0]) + self.linear(x).squeeze(-1)

# Prepare the data (df is the DataFrame described above; include all columns)
data = torch.tensor(df.values, dtype=torch.float32)
dataset = CustomDataset(data)
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

# Create the model and optimizer
model = Model()
optimizer = optim.Adam(model.parameters(), lr=0.001)
criterion = nn.MSELoss()

# Training loop
num_epochs = 100
for epoch in range(num_epochs):
    for batch in dataloader:
        optimizer.zero_grad()
        inputs = batch[:, :-1]   # Input data (exclude the last column)
        targets = batch[:, -1]   # True λ(s) values (last column)
        outputs = model(inputs)
        loss = criterion(outputs.squeeze(), targets)
        loss.backward()
        optimizer.step()
    print(f"Epoch {epoch+1}/{num_epochs}, Loss: {loss.item()}")

# Test the model
test_input = torch.tensor([[1.0, 0.0]])  # An example input (s=1.0, solution=0.0)
predicted_output = model(test_input)
print(f"Predicted λ(s) for input s=1.0: {predicted_output.item()}")

# Continue with your desired analysis or visualization using the predicted lambda values
```
This code defines a custom dataset class, a neural network model with a linear layer, sets up the optimizer and loss function, and trains the model using the provided data. The model approximates the true $λ(s)$ by combining the learned linear term with the sin(s)cos(s) term. The input dimension of the model is now 2, consisting of s and the corresponding solution value. The model's linear layer is adjusted accordingly to accommodate the concatenated input.
---
Regarding papers or repositories with similar examples, here are a few resources you can explore for inspiration:
- Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations by Raissi et al
This paper presents a physics-informed neural network approach for solving differential equations. It combines physical laws with data to train neural networks to approximate the unknown terms in the equations.
- Neural Ordinary Differential Equations by Chen et al.
This paper introduces Neural Ordinary Differential Equations (NODEs), which use continuous-depth models to approximate the solution of differential equations. It provides insights into how neural networks can be used to represent differential equations.
- Physics-Informed Neural Networks (PINNs): An Introduction and Recent Advances by Zhang et al.
This review paper introduces Physics-Informed Neural Networks (PINNs) and their applications in solving various physical problems. It discusses the integration of physics-based knowledge into neural networks.
You can explore the code and methodologies presented in these papers to gain a deeper understanding of modeling non-linear terms in physics using deep learning approaches.
- DeepXDE: A Deep Learning Library for Solving Differential Equations by Lu, Weinan, and Zhongqiang Zhang.
GitHub repository: deepxde
GitHub repository: PINNs
These references provide examples and implementations of neural network-based approaches for solving differential equations and may serve as a starting point for your work on modelling $λ(s)$ with PyTorch.
Remember that modelling $λ(s)$ using PyTorch is an iterative process that may require fine-tuning the network architecture, hyperparameters, and training procedure to achieve the best results for your specific problem.
| null | CC BY-SA 4.0 | null | 2023-05-26T07:40:05.013 | 2023-05-26T07:40:05.013 | null | null | 83275 | null |
121767 | 2 | null | 121749 | 0 | null | When dealing with anomaly detection in a dataset with a large number of categories, it's important to consider the nature of your data and choose an appropriate algorithm. Isolation Forest is a popular choice for anomaly detection, but its performance can vary depending on the characteristics of your dataset, also traditional algorithms may not perform well due to the ordinal encoding issue you mentioned. In such cases, you can consider using more advanced techniques that can handle categorical variables effectively.
In your case, where you have categorical features such as "Parcel," "From," and "To," encoding them using ordinal relationships may not capture the true nature of the data. Instead, you should consider using one-hot encoding, which creates binary features for each category. This approach allows the algorithm to consider the categorical features individually rather than assuming a specific ordering.
Apart from Isolation Forest & OHE, you might also consider other algorithms that are suitable for anomaly detection in high-dimensional data, such as:
- Entity Embeddings: Instead of one-hot encoding, you can use entity embeddings to represent your categorical variables. Entity embeddings are low-dimensional vector representations that capture the semantic relationships between categories. By training a neural network to learn these embeddings, you can create meaningful representations of your categorical features. You can then feed these embeddings into an anomaly detection algorithm such as an autoencoder or an outlier detection model like the Local Outlier Factor (LOF).
- Supervised Anomaly Detection: If you have labeled anomalies in your dataset, you can use supervised anomaly detection techniques. In this approach, you train a model to classify parcels as normal or anomalous based on the given routes. Techniques like Support Vector Machines (SVMs), Random Forests, or Gradient Boosting models can be trained on labeled data to identify anomalies based on the patterns observed in the features.
- Deep Autoencoders: Deep autoencoders are neural networks that are trained to reconstruct their input. By encoding the input into a lower-dimensional representation and then decoding it back to the original space, the autoencoder learns the underlying patterns and structures in the data. Anomalies can be identified based on the reconstruction error, where higher errors indicate samples that deviate significantly from the learned patterns.
- Gaussian Mixture Models (GMM): GMM assumes that the data is generated from a mixture of Gaussian distributions. Anomalies can be identified based on low likelihood values.
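As a minimal sketch of the one-hot encoding route mentioned above (the toy routing data and the contamination value are just placeholders):
```
import pandas as pd
from sklearn.ensemble import IsolationForest

# hypothetical parcel-routing data with the columns mentioned in the question
df = pd.DataFrame({
    "Parcel": ["P1", "P2", "P3", "P4"],
    "From":   ["Berlin", "Berlin", "Munich", "Berlin"],
    "To":     ["Hamburg", "Hamburg", "Hamburg", "Tokyo"],
})

# one-hot encode the categorical columns instead of imposing an ordinal order
X = pd.get_dummies(df[["From", "To"]])

iso = IsolationForest(contamination=0.25, random_state=0).fit(X)
df["anomaly"] = iso.predict(X)   # -1 = anomaly, 1 = normal
print(df)
```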
| null | CC BY-SA 4.0 | null | 2023-05-26T07:54:49.973 | 2023-05-26T07:54:49.973 | null | null | 83275 | null |
121768 | 1 | null | null | -2 | 9 | Good afternoon, colleagues!
We are now preparing to launch a new training program, which will be designed for managers whose competencies will include managing big data. The problem is that the courses should be without mathematics and programming, because the target group of applicants comes from the humanities.
After some thought, we came to the conclusion that it makes sense for us to develop four subjects: data science, decision support systems, data mining, and big data management infrastructure. We already had work on all of these subjects, but there's a lot of math involved.
Basically, one could focus on what a manager can use (including some useful applied tools), how it all works in general terms, and the methods of rational thinking, critical thinking and decision-making (including those based on big data and the modern philosophy of causality - by Judea Pearl, for example).
What experience do you have in teaching these kinds of programs, what advice do you have for this kind of audience? Perhaps there are some recognized courses already that we don't know about so we don't have to reinvent the wheel?
| Big Data Analytics for Policy Making with No Math inside | CC BY-SA 4.0 | null | 2023-05-26T08:40:54.770 | 2023-05-26T08:40:54.770 | null | null | 63051 | [
"data-mining",
"bigdata",
"management"
] |
121769 | 1 | null | null | 1 | 14 | I am working on an automated approach to object-based data augmentation. The goal of the approach would be to add a selected object to an existing image. To automate this task, information is needed about the size of the object to be added so that it is coherent with the rest of the scene. I've been looking for works that tackled this issue, but couldn't find any.
I would be grateful if you could share references and possible approaches...
Thanks in advance
| Which approach to determine the size of an object to be placed given the sizes of the existing objects in a scene? | CC BY-SA 4.0 | null | 2023-05-26T08:49:37.447 | 2023-05-26T08:49:37.447 | null | null | 150226 | [
"generative-models",
"image",
"object-recognition"
] |
121770 | 1 | null | null | 0 | 23 | I'm trying to create a plot for the ranking of each country from 2002 to 2023. I created this dataset by loading each csv file from the respective years, which contains the Countries and Ranking columns, and combining those individual datasets using the Countries column.
Now, from this combined dataset, I want to plot a particular country, showing its rank for every year from 2002 to 2023, using Python. I also want to remove decimals from every column; the decimals appeared when I used the merge function to combine all datasets.
Thank You
![enter image description here](https://i.stack.imgur.com/Rje3B.jpg)
| How to create a plot of specific row with every column using python and which package to choose matplotlib or seaborn? | CC BY-SA 4.0 | null | 2023-05-26T08:58:21.900 | 2023-05-26T10:07:51.967 | 2023-05-26T10:07:51.967 | 83275 | 150227 | [
"dataset",
"pandas",
"visualization"
] |
121771 | 2 | null | 121770 | 0 | null | Matplotlib vs Seaborn is a question you will probably get mixed opinions on. I personally use matplotlib because I am most familiar and can do more with it. I think - and take this with a grain of salt - that seaborn is a bit easier to use as a beginner but has more limitations than matplotlib.
For your plotting problem, you should first read up on how to access data in a pandas dataframe:
[https://pandas.pydata.org/docs/user_guide/indexing.html](https://pandas.pydata.org/docs/user_guide/indexing.html)
When you have gathered your data, you can start plotting it:
[https://seaborn.pydata.org/tutorial/introduction.html](https://seaborn.pydata.org/tutorial/introduction.html)
[https://matplotlib.org/stable/tutorials/introductory/pyplot.html](https://matplotlib.org/stable/tutorials/introductory/pyplot.html)
If you have any specific questions, feel free to ask. But please try to solve your problem yourself first as everything I just posted is readily available through a simple google-search.
Your decimals probably appeared because when you merged the dataframes pandas automatically converted the numerical data from an `int` type to a `float` type. If you just want to plot the data, that's not an issue for you. If you still want or need to convert them back to `int` or similar, you can find help on that here:
[https://stackoverflow.com/questions/15891038/change-column-type-in-pandas](https://stackoverflow.com/questions/15891038/change-column-type-in-pandas)
| null | CC BY-SA 4.0 | null | 2023-05-26T09:37:29.953 | 2023-05-26T09:37:29.953 | null | null | 141828 | null |
121772 | 2 | null | 121770 | 0 | null | Welcome to datascience.stackexchange Saubhik. First, I'd recommend setting your countries column as the index of the dataframe. Say you start with something like this:
```
import pandas as pd

columns = ["Rank2022", "Rank2023", "Countries"]
data = [[1.0, 2.0, "Iceland"], [2.0, 3.0, "Norway"], [3.0, 5.0, "Finland"], [4.0, 6.0, "Sweden"], [5.0, 2.0, "Denmark"]]
df = pd.DataFrame(data, columns=columns)
```
you can set the index to the country column:
```
df = df.set_index("Countries")
```
Then you can set the type to int:
```
df = df.astype(int)
```
And finally you can plot an individual row with something like:
```
row = df.loc["Norway"]
row.plot()
```
| null | CC BY-SA 4.0 | null | 2023-05-26T09:48:34.153 | 2023-05-26T09:48:34.153 | null | null | 146483 | null |
121773 | 1 | null | null | 0 | 17 | Is there a common name for this window function?
I made it to replace a Hann window used in loading an FFT.
It is basically a wide lobe cosine tapered window, or negative Blackman window. Is there a better more common name?
$y = 0.5 * (1.25 - \cos(\pi * 2 * {x \over N}) - (0.25 * \cos(\pi * 4 * {x \over N})))$
$N$ is the total number of samples.
[](https://i.stack.imgur.com/Xsa0b.png)
| What is the name of this window function? | CC BY-SA 4.0 | null | 2023-05-26T10:52:08.077 | 2023-05-27T17:18:56.893 | 2023-05-27T17:18:56.893 | 29169 | 150236 | [
"mathematics",
"windows",
"functions"
] |
121774 | 2 | null | 121669 | 0 | null | You need to manually annotate a large sample of your input text like this:
```
Irrelevant O
information, O
Adaptable B-Skill
to I-Skill
stuff I-Skill
, O
Leadership B-Skill
skills I-Skill
... O
```
But normally NER is intended for unstructured text. So if you consider that the CSV structure is reliable, then there's no point using NER since you already know which text belongs to which category: everything in the 'skills' columns belongs to SKILLS, everything in 'experience' belongs to EXPERIENCE, etc.
| null | CC BY-SA 4.0 | null | 2023-05-26T11:12:54.497 | 2023-05-26T11:12:54.497 | null | null | 64377 | null |
121775 | 1 | null | null | 1 | 19 | How does one learn a classifier from data that isn't always fully labelled? For example, say one has corrupted data from the CIFAR-10 dataset (which has labels like bird/automobile/ship/truck). Now this corrupted data (X, Y) pairs and preserves X, while "confusing" a large number of Y pairs by replacing each label with a set of labels the sample's true label is from.
So a label "bird" may become "not ship", "automobile" may become "autombile or truck", "ship" may become "ship" (unchanged) etc.
How does one best exploit this information? Is there a loss function that handles these?
| How do I exploit partial labels for classification? | CC BY-SA 4.0 | null | 2023-05-26T11:33:07.940 | 2023-05-26T12:20:34.047 | 2023-05-26T12:20:34.047 | 150238 | 150238 | [
"machine-learning",
"multilabel-classification"
] |
121776 | 1 | null | null | 0 | 15 | I have a Time Series problem, where I am trying to predict a single output at time $t$, $y_t$, given the $2$ previous time steps; $X_{t-2}, X_{t-1}$.
Let's just look at one observation for simplicity.
At a given time step $t$, I have $3$ features and a single output. Let's say $[a_t, b_t, c_t, y_t]$, where $a_t, b_t, c_t$ are my features, and $y_t$ is my output (the value I want to predict).
So, If I want to predict $y_t$ given the previous $2$ timesteps, this would look like
$$[ [a_{t-2}, b_{t-2}, c_{t-2}, y_{t-2}],\\
[a_{t-1}, b_{t-1}, c_{t-1}, y_{t-1}], \\
[a_{t}, b_{t}, c_{t}, ?]]$$
I don't have a value for $y_t$ here, and I need to pass in $4$ features to my $X_t$, so how does this work exactly?
At time $t$, I am again aware of my features $a_t, b_t, c_t$, and I want to predict $y_t$. But if I am only looking at the previous 2 timesteps here, I don't understand how the LSTM knows anything about the features at the current time step?
| Confusion regarding what constitutes a feature in a LSTM? | CC BY-SA 4.0 | null | 2023-05-26T11:47:26.127 | 2023-05-26T11:53:32.053 | 2023-05-26T11:53:32.053 | 150235 | 150235 | [
"machine-learning",
"neural-network",
"lstm",
"rnn"
] |
121777 | 2 | null | 121775 | 0 | null | If the amount of incorrect labels isn't big, in complaisant to correct ones, you can still train your model as usual and then plot a [confusion matrix](https://www.analyticsvidhya.com/blog/2020/04/confusion-matrix-machine-learning/) for different validation sets, which can tell you how many classifications were incorrect and which of them. After that you can decide to either correct the mistakes or dump them from the dataset
Example using fastai and pet breed image classifier (numbers outside of the main diagonal are mistakes):
```
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix(figsize=(12,12), dpi=60)
```
[](https://i.stack.imgur.com/xGJK5.png)
```
# see the top errors
interp.plot_top_losses(5)
```
| null | CC BY-SA 4.0 | null | 2023-05-26T11:47:49.923 | 2023-05-26T11:47:49.923 | null | null | 150130 | null |
121778 | 1 | null | null | 0 | 8 | I have a question: if I want to use SMOTE-Tomek links for my imbalanced dataset, should I now apply the method before or after the split?
| SMOTE-Tomek links with imbalanced dataset | CC BY-SA 4.0 | null | 2023-05-26T12:36:22.453 | 2023-05-26T12:36:22.453 | null | null | 150240 | [
"data-mining",
"machine-learning-model"
] |
121780 | 2 | null | 121765 | 0 | null | ```
import pandas
import redshift_connector

# Connect to the cluster
conn = redshift_connector.connect(
    host='examplecluster.abc123xyz789.us-west-1.redshift.amazonaws.com',
    port=5439,
    database='dev',
    user='awsuser',
    password='my_password'
)

# Create a Cursor object
cursor = conn.cursor()

# Query and receive the result set as a DataFrame
cursor.execute("select * from book")
result: pandas.DataFrame = cursor.fetch_dataframe()
print(result)
```
| null | CC BY-SA 4.0 | null | 2023-05-26T13:46:12.293 | 2023-05-26T13:46:12.293 | null | null | 144743 | null |
121781 | 1 | null | null | 0 | 26 | I am trying to use the sklearn.svm.SVC on a relatively big dataset, 1.5k test/train samples, 512 features each, one sample per class (so, 1.5k classes). I know that SVC doesn't scale well, so at first I tried LinearSVC, but it didn't achieve the required quality. So, I decided to fit an RBF/polynomial kernel, and utilise parallelism as much as possible to reduce computation time. I have attempted the following:
- Use a plain SVC(kernel='rbf'). This approach failed immediately, since calling decision_function somehow required 200Gb of RAM.
- Wrap SVC in a OneVsRest classifier, which significantly reduced the required memory. This works, but it utilizes all 16 CPUs only in the beginning (during fit would be my guess), after which it gets stuck at one worker (according to htop: total workload drops from 1600% to 100%). I believe the culprit is the decision_function.
To achieve method 1 I use
```
model = SVC().fit(X_train, Y_train)
decision_scores = model.decision_function(X_test)
```
For method 2:
```
with joblib.parallel_backend('loky'):
    model = OneVsRestClassifier(SVC(**self.kwargs)).fit(gallery_feats, gallery_ids)
    decision_scores = model.decision_function(probe_feats)
```
So, my question is, how does one parallelize the decision_function call? Or, alternatively, how to reduce the memory footprint of SVC().decision_function? Any help much appreciated!
| Issues with sklearn.svm.SVC | CC BY-SA 4.0 | null | 2023-05-26T14:00:01.123 | 2023-05-26T14:00:01.123 | null | null | 150242 | [
"python",
"scikit-learn",
"svm"
] |
121782 | 1 | 121787 | null | 2 | 195 | I will first tell you about the context then ask my questions.
The model detects hate speech and the training and testing datasets are imbalanced (NLP).
My questions:
- Is this considered a good model?
- Is the False negative really bad and it indicates that my model will predict a lot of ones to be zeros on new data?
- Is it common for AUC to be higher than the recall and precision when the data is imbalanced?
- Is the ROC-AUC misleading in this case because it depends on the True Negative and it is really big? (FPR depends on TN)
- For my use case, what is the best metric to use?
- I passed the probabilities to create ROC, is that the right way?
[](https://i.stack.imgur.com/59Z1Z.png)
Edit:
I did under-sampling and got the following results from the same model parameters:
[](https://i.stack.imgur.com/kebJh.png)
Does this show that the model is good? or can it be misleading too?
| Some simple questions about confusion matrix and metrics in general | CC BY-SA 4.0 | null | 2023-05-26T16:07:54.427 | 2023-05-27T07:08:33.833 | 2023-05-26T19:48:52.990 | 126059 | 126059 | [
"machine-learning",
"nlp",
"class-imbalance",
"metric",
"confusion-matrix"
] |
121783 | 1 | null | null | 0 | 5 | I have daily time-series data, which tells me the rain fall & foot fall at a certain shop on that day. Now, I want to predict the foot fall at time $t$, given the previous $2$ observations.
As I'm dealing with time-series data, I thought I could use a RNN, feeding in the previous $2$ observations.
Now, I want it to learn the dependency between rainfall & footfall (i.e, if it's raining, there will be less footfall), and I want it to be able to look at previous rainfall values in order to gauge the current rainfall.
Let's just consider one observation for the time being.
Let $r_t$ be the rain value at time step $t$ and $y_t$ be the footfall at time $t$, $y_t$ is what I want to predict.
I thought I could construct an input like:
$$ [[r_{t-2}, y_{t-2}],\\
[r_{t-1}, y_{t-1}]]
$$
in order to predict $y_t$. But, given I'm at timestep $t$ and I know the rainfall $r_t$, it seems like the RNN has no way of accessing this information? If I know it is raining at timestep $t$, then how do I feed the model this?
I have had a look at parallel series but these aren't really what I'm looking for, as I'm using the previous $y_t$ as a feature here essentially.
Is there a way of structuring this to give me what I want?
| RNN Time Series Footfall - How do I construct this RNN? | CC BY-SA 4.0 | null | 2023-05-26T16:18:20.230 | 2023-05-26T16:18:20.230 | null | null | 150235 | [
"neural-network",
"time-series",
"lstm",
"rnn"
] |
121784 | 1 | 121788 | null | 1 | 24 | I have ~78k microscopy images of single cells, where the task is to classify for cancer (binary classifier). The images are labeled according to which patient the data came from. I do the train-val split, making sure no patient has images in both train and validation. I noticed that depending on which patients I put in the validation set (one malignant patient, one benign patient, always perserving 20% validation size and about the same class distribution) I get wildly different validation accuracies.
Below is a plot of a test I did, where I tried all permutations of validation set for each patient with cancer. The dashed lines marks where a new patient with cancer is replaced in the validation set. It seems that it is which patient with cancer I put in the validation set that influences the validation accuracy heavily.
[](https://i.stack.imgur.com/hZXrY.png)
My question is, what does this tell me and are there any popular methods for dealing with similar situations? My thinking is that I should train the model using the split in the dashed group number 3 in the plot, since it has the highest validation accuracy without lowering training accuracy, but then again maybe those results are due to some unknown leak.
EDIT:
It should be noted that the images are labeled according to whether they came from a patient with cancer or not, not whether the cell itself is actually cancerous. Below is an example of what the pictures look like; as far as I can see with my eyes, there is very little difference between the images.
[](https://i.stack.imgur.com/bfjvK.jpg)
| Different validation sets give very different results. What can be the reason? | CC BY-SA 4.0 | null | 2023-05-26T20:03:15.270 | 2023-05-27T04:18:21.063 | 2023-05-26T21:05:40.917 | 150250 | 150250 | [
"machine-learning",
"deep-learning",
"image-classification",
"image-preprocessing"
] |
121785 | 1 | null | null | 2 | 12 | I was running a Linear Regression with Wooldridge dataset named GPA2, which is found on Python library named wooldridge.
I tried two linear regressions. The first:
```
results = smf.ols('colgpa ~ hsperc + sat', data=gpa).fit()
```
And the second
```
results = smf.ols('colgpa ~ hsperc + sat - 1', data=gpa).fit()
```
As you can see, there are no major differences between them; I've only removed the intercept from the second equation. However, a couple of things changed: (I) the warning of high multicollinearity disappeared when I removed the intercept; (II) the R-squared and adjusted R-squared both went from 0.273 to 0.954; (III) the F-statistic went from 1.77e-287 to 4.284e+04.
Why would this happen only by removing the intercept? Shouldn't they really be pretty similar?
Also, when running a variance inflation factor, I got a pretty high number for the constant. How's that possible?
Thanks
| Why would the result change so much for a linear regression with or without a constant? | CC BY-SA 4.0 | null | 2023-05-26T23:40:43.547 | 2023-05-26T23:40:43.547 | null | null | 125803 | [
"regression",
"linear-regression",
"r-squared"
] |
121786 | 2 | null | 111726 | 0 | null | If I understand your question correctly, then it is because of the order-preserving property of the function. Permutation is for measuring the importance of each element in the set.
In overly simplest terms, Imagine a sentence of words. Rather than directly tokenizing it, you create permutations of words so that you can measure how each word contributes to the uniqueness of the hash value generated. In other words, how informative and representative each word is of the whole set/sentence.
| null | CC BY-SA 4.0 | null | 2023-05-27T03:27:25.050 | 2023-05-27T03:27:25.050 | null | null | 99434 | null |
121787 | 2 | null | 121782 | 4 | null | The first model where the `f1_score` is around 61% can not be considered as a good model. You can achieve much better results than that. This can be seen in the second case (where you have downsampled the dataset), where the `f1_score` increases substantially.
Since your problem statement is to detect hate speech, you would have to decrease both, the FP and the FN or in other words, increase the `precision` and `recall`.
I would say the metric in this case would be the `f1_score`, which is a combination of `precision` and `recall`.
Also, instead of downsampling, try oversampling. Or better yet, do neither and instead use other techniques to counteract the imbalance (think cross validation, particularly `RepeatedStratifiedKFold`, or maybe get more data for the minority class not by oversampling but from authentic sources).
| null | CC BY-SA 4.0 | null | 2023-05-27T04:11:23.313 | 2023-05-27T04:11:23.313 | null | null | 119921 | null |
121788 | 2 | null | 121784 | 1 | null | Different validation splits will give different results because the data points will vary. How severe can the change of results be depends on how different the data points are.
One way to reduce this impact is to use `CrossValidation` while training your model. Since you have a case of Binary Classification, you should go for `StratifiedCV`. This helps your model to capture the majority of the diverseness of the dataset.
Also since you mention that the majority of the images are similar (as far as you can tell), you should use `image augmentation` techniques. `Keras` has a helpful library which you can use. This will help your model to become more robust to any diverseness it might encounter when deployed.
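Here is a rough sketch of how the two pieces could fit together (the random arrays stand in for the real images and labels, and the augmentation parameters are just example choices):
```
import numpy as np
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# placeholder data standing in for the microscopy images and binary labels
X = np.random.rand(200, 64, 64, 3).astype("float32")
y = np.random.randint(0, 2, size=200)

# stratified folds keep the class distribution similar in every split
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

# simple augmentation pipeline
augmenter = ImageDataGenerator(
    rotation_range=20,
    horizontal_flip=True,
    vertical_flip=True,
    zoom_range=0.1,
)

for fold, (train_idx, val_idx) in enumerate(skf.split(X, y)):
    train_flow = augmenter.flow(X[train_idx], y[train_idx], batch_size=32)
    # model.fit(train_flow, validation_data=(X[val_idx], y[val_idx]), epochs=...)
    print(f"Fold {fold}: {len(train_idx)} train / {len(val_idx)} val samples")
```
Since your splits are per patient, you may also want to pass the patient IDs as groups (e.g. with `StratifiedGroupKFold`) so the folds stay patient-disjoint.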
These 2 methods will definitely solve your issue!
Cheers!
| null | CC BY-SA 4.0 | null | 2023-05-27T04:18:21.063 | 2023-05-27T04:18:21.063 | null | null | 150059 | null |
121789 | 2 | null | 121738 | 1 | null | You could use [sentence transformer](https://www.sbert.net/) library to calculate the similarity between different phrases. It also works for multi worded tokens.
```
from sentence_transformers import SentenceTransformer, util

mpnet_v2 = SentenceTransformer('all-mpnet-base-v2')

sentence1 = "property damage"
sentence2 = "damage done to the property"

# encode sentences to get their embeddings
embeb_r_large1 = mpnet_v2.encode(sentence1, convert_to_tensor=True)
embeb_r_large2 = mpnet_v2.encode(sentence2, convert_to_tensor=True)

# compute similarity score of the two embeddings
mpnetv2_score = util.pytorch_cos_sim(embeb_r_large1, embeb_r_large2)
print(f'similarity score is : {mpnetv2_score}')

# result
# similarity score is : 0.8635872602462769
```
| null | CC BY-SA 4.0 | null | 2023-05-27T04:27:15.097 | 2023-05-27T04:27:15.097 | null | null | 150059 | null |
121790 | 1 | null | null | 0 | 21 | I found that there is no common resource and well defined definition for "Gradient norm", most search results are based on ML experts providing answers which involves gradient norm or papers which reference it and provide a single sentence intro to it.
Is there any well defined resource I can refer to get a concrete understanding of it ? Thank you
| What exactly is Gradient norm? | CC BY-SA 4.0 | null | 2023-05-27T04:35:08.623 | 2023-05-27T09:09:21.717 | null | null | 145273 | [
"machine-learning",
"deep-learning",
"neural-network",
"gradient-descent",
"backpropagation"
] |
121791 | 2 | null | 39261 | 0 | null | You can do this by using OCR engines like `pytesseract`. Once the text has been extracted you can either use custom NLP rules for framing the questions and their answers or use a Question Answering model which will do this for you. There are many such models like `Layoutlm` series. `Huggingface` hosts many such models.
Also do not use `PyPDF2` as it is not that robust when compared to `pytesseract`. I tried it and it only works on certain pdfs.
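A rough sketch of that OCR + question-answering route (the image path, the question and the model name are placeholders, and `pytesseract` needs the Tesseract binary installed):
```
from PIL import Image
import pytesseract
from transformers import pipeline

# extract raw text from a scanned page (placeholder path)
text = pytesseract.image_to_string(Image.open("page.png"))

# a generic extractive question-answering model from the Hugging Face hub
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
answer = qa(question="What is the invoice number?", context=text)
print(answer["answer"], answer["score"])
```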
Cheers!
| null | CC BY-SA 4.0 | null | 2023-05-27T04:40:00.260 | 2023-05-27T04:40:00.260 | null | null | 119921 | null |
121792 | 2 | null | 121765 | 0 | null | Lot's of `BigData` libraries out there. `PySpark` and `Hadoop` are a couple you could use (personally recommend `PySpark` for it's pythonic usage).
If you only want to stick to `pandas` (for some reason), you could sample the data for initial anaylsis. Then increase the sample size gradually in iterations till you memory size is reached. Obviously this technique will require more time and effort!
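If you go the chunked-pandas route, a minimal sketch looks like this (connection details as in the other answer to this question; the table, columns and chunk size are placeholders):
```
import pandas as pd
import redshift_connector

conn = redshift_connector.connect(
    host="examplecluster.abc123xyz789.us-west-1.redshift.amazonaws.com",
    port=5439, database="dev", user="awsuser", password="my_password",
)
cursor = conn.cursor()
cursor.execute("select category, amount from big_table")

partial_sums = []
while True:
    rows = cursor.fetchmany(100_000)   # pull 100k rows at a time
    if not rows:
        break
    chunk = pd.DataFrame(rows, columns=["category", "amount"])
    partial_sums.append(chunk.groupby("category")["amount"].sum())

result = pd.concat(partial_sums).groupby(level=0).sum()
print(result)
```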
I would go for the first technique!
Cheers!
| null | CC BY-SA 4.0 | null | 2023-05-27T04:50:12.927 | 2023-05-27T04:50:12.927 | null | null | 119921 | null |
121793 | 1 | null | null | 4 | 223 | In the picture below there are some regions which are very bright (i.e. more white). Some bright regions are wide and some are narrow or thin. The red box covers one such wide bright spot, and blue box covers one thin bright spot. Thin bright spots are called edges and wide bright spots are called hot-spots.
I want to remove all the hot-spots from the image (i.e. make them black), but no edge should be removed.
My question is how to write Python code using OpenCV to remove all hot-spots but no edge?
[](https://i.stack.imgur.com/GwW3u.png)
| How to remove the hotspots from given image by using Python and opencv? | CC BY-SA 4.0 | null | 2023-05-27T06:02:22.867 | 2023-06-03T14:13:59.957 | 2023-06-02T22:43:04.363 | 150257 | 150257 | [
"python",
"data-science-model",
"image-preprocessing",
"opencv",
"image"
] |
121794 | 2 | null | 121782 | 3 | null | Most of your questions cannot be objectively answered.
Whether or not a model is good depends on what is the use for it.
Seeing how your classes are imbalanced, it definitely affects the metrics you presented. Do you care more about False Positives or False Negatives? What are the consequences of this? How many False Negatives are you willing to allow in order to have less False Positives?
>
Is it common for AUC to be higher than the recall and precision when the data is imbalanced?
This is an example of your model not being "as good" (given the caveats I mentioned). High ROC AUC means that your data can be ranked well while varying the threshold, which is to be expected since most of your data belongs in one class. But when considering precision-recall as individual metrics, at least one of those (precision if you have a lot of FP and recall if you have a lot of FN) will be more sensitive to the type of error you have, thus having lower values.
>
For my use case, what is the best metric to use?
F1 score is a pretty solid option whenever there are imbalanced classes, because as I mentioned it punishes both FP and FN.
But, by its definition, it is an average (harmonic mean) between precision and recall. If you care more about reducing a specific type of error, you can focus more on maximizing the more specific metrics (precision/recall).
>
I passed the probabilities to create ROC, is that the right way?
Yes it is. ROC is dependent on classification threshold, thus it needs to know the probability in order to be able to determine where to classify the sample given the specific threshold it checks each time.
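As a small illustration of how the threshold-dependent metrics relate to the probabilities (the arrays here are made up):
```
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

# hypothetical true labels and predicted probabilities
y_true = [0, 0, 0, 0, 1, 1, 0, 1, 0, 0]
y_prob = [0.1, 0.3, 0.2, 0.8, 0.7, 0.9, 0.4, 0.3, 0.1, 0.2]
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]   # threshold at 0.5

print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
# ROC AUC is computed from the probabilities, not the thresholded predictions
print("roc auc  :", roc_auc_score(y_true, y_prob))
```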
| null | CC BY-SA 4.0 | null | 2023-05-27T07:08:33.833 | 2023-05-27T07:08:33.833 | null | null | 79520 | null |
121795 | 1 | null | null | 0 | 8 | I need to build a XAI model to gain interpretability in a MLP trained with the UNSW-NB15 dataset and I'm a bit lost on where to start my code. I'm reading lots of papers about the topic but it doesn't give me any clear idea of where to begin. If you have any advice or recommendations of any useful resources it would be very well received.
Thanks for your help.
| Need to build a XAI model to gain interpretability in a MLP built with Keras | CC BY-SA 4.0 | null | 2023-05-27T09:04:51.903 | 2023-05-27T09:04:51.903 | null | null | 142495 | [
"dataset",
"mlp",
"explainable-ai"
] |
121796 | 2 | null | 121790 | 0 | null | The L2 gradient norm is simply the sum of the squares of the individual gradients. The L1 norm would be the sum of absolute values of the gradients, though this tends to be less common imo. The basic idea is to scale/clip gradients to prevent vanishing/exploding gradients. This is explained in a number of articles online: [https://machinelearningmastery.com/how-to-avoid-exploding-gradients-in-neural-networks-with-gradient-clipping/](https://machinelearningmastery.com/how-to-avoid-exploding-gradients-in-neural-networks-with-gradient-clipping/)
hth.
| null | CC BY-SA 4.0 | null | 2023-05-27T09:09:21.717 | 2023-05-27T09:09:21.717 | null | null | 146483 | null |
121797 | 1 | 121837 | null | 1 | 110 | Suppose Prof. X goes to a road side tea-coffee shop everyday at 5pm just after his office. After reaching there he tosses a coin, and places his order tea or coffee. The shop owner Y has been observing this for one month. By watching some movies he has learnt a bit of probability. Y wants to predict what the professor will order everyday.
I have 3 questions which I need to solve:
(i) Please build a mathematical model for Y. Precisely describe and justify.
(ii) Derive a solution for that model if required, and then
(iii) write an algorithm how Y can predict what X will order.
| How to predict what someone will order? | CC BY-SA 4.0 | null | 2023-05-27T11:35:39.607 | 2023-05-29T16:49:26.020 | null | null | 150257 | [
"data-mining",
"machine-learning-model",
"prediction",
"algorithms"
] |
121799 | 1 | null | null | 0 | 6 | I tried to implement a logistic regression class using pytorch. The following implementation worked.
```
class LR(torch.nn.Module):
    def __init__(self, input_dim, output_dim):
        """ Initializes internal Module state. """
        super(LR, self).__init__()
        self.input_dim = input_dim
        self.output_dim = output_dim
        self.linear = torch.nn.Linear(input_dim, output_dim)

    def forward(self, x):
        x = x.flatten(start_dim=1)
        x = self.linear(x)
        outputs = x
        return outputs
```
I expected it to work with `x = x.view(-1, self.input_dim)` instead of `x = x.flatten(start_dim=1)`. I thought `torch.nn.Linear(input_dim, output_dim)` only accepts inputs of shape `(any shape, input_dim)`. `x = x.flatten(start_dim=1)` reshapes to (batch_size, flattened_size) with flattened_size being the product of all dimensions of x starting from dimension 1. how would this not raise an error?
| pytorch: implementing logistic regression: input dimension of torch.nn.Linear is input.flatten(start_dim=1) | CC BY-SA 4.0 | null | 2023-05-27T16:48:40.293 | 2023-05-27T16:48:40.293 | null | null | 145386 | [
"machine-learning",
"python",
"logistic-regression",
"pytorch"
] |
121800 | 1 | null | null | 0 | 11 | From [One-hot - Wikipedia](https://en.wikipedia.org/wiki/One-hot#Natural_language_processing):
>
In natural language processing, a one-hot vector is a 1 × N matrix (vector) used to distinguish each word in a vocabulary from every other word in the vocabulary. The vector consists of 0s in all cells with the exception of a single 1 in a cell used uniquely to identify the word.
How is that different from a unit vector? I suppose that when referring to unit vectors you are talking about pure mathematics, while when referring to one-hot vectors you are talking about, well, natural language processing. However, if you view them as unit vectors, does any interesting application follow? For example, what space do they form? I guess it's a Hilbert space on ℝᴺ, with N being the number of unique words. But that's all I know.
| Is there any interesting application when viewing one-hot vector as unit vector? | CC BY-SA 4.0 | null | 2023-05-27T18:33:42.977 | 2023-05-27T18:33:42.977 | null | null | 119882 | [
"one-hot-encoding",
"vector-space-models"
] |
121801 | 1 | null | null | 0 | 14 | Let's say we are given a dataset and want to rank them by similarity of distributions. I don't want to use visualization. Is there any sufficient way that you can share with me?
I have an idea like, we can subtract some number of percentiles from features and find the mean of that.
```
import numpy as np

def percentile(feature1, feature2, num_percentiles):
    percentiles1 = np.percentile(feature1, np.linspace(0, 100, num_percentiles))
    percentiles2 = np.percentile(feature2, np.linspace(0, 100, num_percentiles))
    difference = percentiles1 - percentiles2
    ranking = np.mean(np.abs(difference))
    return ranking
```
Can I use that?
| How to rank relatedness of two feature in dataset by their distribution? | CC BY-SA 4.0 | null | 2023-05-27T18:47:20.483 | 2023-05-28T12:21:32.177 | 2023-05-27T18:50:01.263 | 133753 | 133753 | [
"machine-learning",
"data",
"feature-engineering"
] |
121802 | 1 | null | null | 0 | 7 | Understanding the concept of "Gradient Flow" can be quite difficult as there is a lack of widely recognized and clearly defined resources that provide a comprehensive explanation. Although you can find insights from machine learning experts and references to papers that mention gradient flow, there isn't a single, definitive source that thoroughly covers the topic.
Could you please recommend a resource that offers a detailed understanding of gradient flow? Your assistance is greatly appreciated. Thank you
| Exploring the Concept of Gradient Flow | CC BY-SA 4.0 | null | 2023-05-27T19:39:56.303 | 2023-05-27T19:39:56.303 | null | null | 145273 | [
"machine-learning",
"deep-learning",
"neural-network",
"gradient-descent"
] |
121803 | 1 | null | null | 1 | 12 | I have a number of embeddings (300-dimensional FastText vectors for each instance of each class) that I apply a classifier to (Logistic Regression for now). I want to visualize the embeddings as well as the decision boundary as part of model debugging so I can see which classes are not linearly separable, which instances are misclassified etc.
I'm not sure if using PCA or K-PCA is a good idea here. I'm looking for a procedure that will maintain the same structure (if two instances are close in the higher dimension they should still be so in the 2-D one) while making sure that the decision boundary is still correct.
How should I go about this? Thanks.
| How to properly visualize high-dimensional embeddings along with the decision boundary in 2-D? | CC BY-SA 4.0 | null | 2023-05-27T20:07:59.573 | 2023-05-27T20:07:59.573 | null | null | 140766 | [
"machine-learning",
"classification",
"dimensionality-reduction"
] |
121804 | 1 | null | null | 0 | 7 | I do not Unterstand the concept of multiple units in lstm.
If i have an lstm layer with 64 cells, how would be the cells applied to each time step by unrolling.
My understanding is that each time step would be applied by unrolling to all cells.
So If unrolling equals 5, all five time steps would be applied in total to 5*64..
Is this correct?
| How can i understand multiple lstm cells by unrolling? | CC BY-SA 4.0 | null | 2023-05-27T21:30:47.970 | 2023-05-27T21:30:47.970 | null | null | 149613 | [
"machine-learning",
"deep-learning",
"neural-network",
"time-series",
"lstm"
] |
121805 | 1 | null | null | 0 | 11 | Are there any potential issues on performing sentiment analysis using the first 100 words of a very large essay that is of 500 to 700 words. I am having to do this because since most transformer models have a upper limit of 500 words.
| Sentiment Analysis on the first 100 words of a very large essay of 500/700 words | CC BY-SA 4.0 | null | 2023-05-27T21:44:24.207 | 2023-05-28T04:53:15.057 | null | null | 134588 | [
"deep-learning",
"transformer",
"sentiment-analysis"
] |
121806 | 2 | null | 121805 | 1 | null | >
Are there any potential issues on performing sentiment analysis using the first 100 words?
No. You could do it and probably get away with it in certain instances. But if proper/accurate results are your aim, this might give you False results.
Why do you want to select only the first 100 words? As you mention, the transformer token limit is around 500 words (512 tokens for most models). So why limit yourself to 100?
Also, you can fit more than 500 words by applying some basic NLP preprocessing techniques: just remove the stop words, punctuation marks, numbers, URLs and extra whitespace.
All the above steps have a two-fold advantage: they will increase your model's accuracy and also decrease the total word length of your text.
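A minimal sketch of that kind of preprocessing (the example sentence is made up, and the NLTK stopword list has to be downloaded once):
```
import re
import string
from nltk.corpus import stopwords  # run nltk.download("stopwords") once beforehand

STOP = set(stopwords.words("english"))

def shrink(text: str) -> str:
    text = re.sub(r"https?://\S+", " ", text)                          # drop URLs
    text = re.sub(r"\d+", " ", text)                                   # drop numbers
    text = text.translate(str.maketrans("", "", string.punctuation))   # drop punctuation
    words = [w for w in text.split() if w.lower() not in STOP]         # drop stop words
    return " ".join(words)                                             # collapses extra whitespace

essay = "In 2023 I visited https://example.com and it was, honestly, the best day!"
print(shrink(essay))
```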
| null | CC BY-SA 4.0 | null | 2023-05-28T04:53:15.057 | 2023-05-28T04:53:15.057 | null | null | 119921 | null |
121807 | 1 | null | null | 0 | 10 | I want to get synonyms for a word based on it's use in sentence. for example in the sentence `I will book the hotel`, `book` is synonymous with `reserve` but this is not the case in the sentence `i was reading a book` one way is to check POS of the word in a given sentence but this is not always useful as multiple meanings of a word can share the same POS.
The problem of returning both antonyms and synonyms in word embedding would be solved by checking against a list of verified synonyms for the word.
| Getting synonyms for a word based on context | CC BY-SA 4.0 | null | 2023-05-28T06:35:28.787 | 2023-05-28T06:35:28.787 | null | null | 150279 | [
"word-embeddings"
] |
121808 | 1 | null | null | 0 | 10 | Is there any reference about backpropagation of the Transformer's multi-head layer or multi-head attention (MHA)? I have searched various journals but have not found one yet.
| Is there any reference about backpropagation of the Transformer's multi-head layer? | CC BY-SA 4.0 | null | 2023-05-28T06:41:42.243 | 2023-05-28T06:41:42.243 | null | null | 149431 | [
"transformer",
"backpropagation"
] |
121809 | 1 | null | null | 0 | 23 | Basically I want to know what trends there are on Reddit today. Here are what I've looked at and why they don't work:
- r/popular or sites like Reddit Keyword Research Tool or Subreddit Stats only return individual hot/trendy/popular submissions, while a trend may be and usually be mentioned in many submissions, even many subreddits
- Searching trend analyse or reddit trend in r/redditdev, r/TheoryOfReddit and r/dataisbeautiful doesn't yield satisfied results. Most posts have outdated links or be on very niche topic. The closest thing I get is the post I made a front page word analyzer - see which words are trending on Reddit. : TheoryOfReddit, but it was 10 years ago and the link is dead.
- Reddit Insight, Reddit Unlocked have bugs to get started. Pushshift is dead.
- Reddit Metis is only for user profiles analysis, not trends
Is there any updated, insightful sources on this that you can recommend?
| Is there any pre-built reddit trend analysis tool? | CC BY-SA 4.0 | null | 2023-05-28T07:19:45.873 | 2023-05-31T08:18:42.937 | 2023-05-31T08:18:42.937 | 119882 | 119882 | [
"social-network-analysis"
] |
121810 | 1 | null | null | 0 | 19 | Why we need a solver like bfgs in LogisticRegression unlike LinearRegression? Don't we have a close form like LinearRegression?
| Why we need solver in LogisticRegression? | CC BY-SA 4.0 | null | 2023-05-28T07:30:33.403 | 2023-05-28T12:04:29.540 | null | null | 108053 | [
"linear-regression",
"logistic-regression"
] |
121811 | 2 | null | 121801 | 0 | null | A common solution is the cosine similarity:
[https://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.cosine_similarity.html](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.cosine_similarity.html)
But, if your data is linear, you may prefer to use a correlation heatmap, based on the Pearson algorithm:
[https://towardsdatascience.com/seaborn-heatmap-for-visualising-data-correlations-66cbef09c1fe](https://towardsdatascience.com/seaborn-heatmap-for-visualising-data-correlations-66cbef09c1fe)
In both cases, you will want to check if the similarity is correct by taking some samples and verifying their consistency manually.
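A short sketch of both options on two made-up feature columns (the data and sizes are arbitrary):
```
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# hypothetical example: two feature columns from the dataset
rng = np.random.default_rng(0)
feature1 = rng.normal(0, 1, 1000)
feature2 = 0.8 * feature1 + rng.normal(0, 0.5, 1000)

# cosine similarity between the two feature vectors
cos = cosine_similarity(feature1.reshape(1, -1), feature2.reshape(1, -1))[0, 0]

# Pearson correlation (what a heatmap of df.corr() is built from)
pearson = np.corrcoef(feature1, feature2)[0, 1]

print(f"cosine similarity: {cos:.3f}, Pearson correlation: {pearson:.3f}")
```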
| null | CC BY-SA 4.0 | null | 2023-05-28T09:14:14.220 | 2023-05-28T09:14:14.220 | null | null | 119140 | null |
121812 | 1 | null | null | 0 | 18 | I have a text classification dataset. The aim is to predict the category of an article based on its title. I have about 100 categories and 10 thousands instances. I've tried models like RNN, LSTM. I've tried pre-trained word embedding models like GloVE. LSTM gave better results than RNN. I also got an improvement with Glove.
As for BERT, I expected to have a better performance, but it was the opposite, as I had the worst performance with this model. I've never implemented BERT before. I've only followed tutorials and books I've read (so I don't know if it's because BERT doesn't perform well in this case or if it's because I haven't implemented it properly). So my problem isn't necessarily a bug in my code. But improving the performance of my model. Is this platform designed to help improve a model's performance (The reason I ask is that it's a bit like asking for a free lunch, asking people to improve their own model.) ? If so, I'd like to share the link to my dataset with the code I've written, including how I implemented the architecture of my BERT model. Thank you for your understanding
| What type of help can we get on Stack Exchange: Data Science? | CC BY-SA 4.0 | null | 2023-05-28T11:12:36.127 | 2023-05-28T11:12:36.127 | null | null | 110309 | [
"nlp",
"bert"
] |
121813 | 2 | null | 121810 | 1 | null | In Linear Regression, you're trying to predict a continuous value, like predicting the price of a house based on its size and location. In this case, we can find a simple equation to calculate the best line that fits the data, and we can solve it mathematically to find the exact answer.
But in Logistic Regression, you're trying to predict a probability, like the chance of someone having a disease based on their age, height, and weight. Instead of a straight line, we use a special function called the logistic function (or sigmoid function) to map the features to probabilities between 0 and 1.
The problem is that, unlike linear regression, there is no closed-form equation for the best parameters (weights) of this logistic function. We need to use an iterative optimization algorithm, like BFGS, to find them.
The BFGS algorithm is like a smart detective that searches for the best parameters by trying different values and improving them step by step. It starts with some initial guesses and checks how well they fit the data. Then, it adjusts the parameters in a way that makes the predictions better. It keeps doing this until it finds the best set of parameters that give the most accurate predictions.
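As a small illustration (not part of the original question), the solver is simply an argument of scikit-learn's LogisticRegression; the data below is synthetic and only there to make the snippet runnable:
```
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# synthetic binary classification data, for illustration only
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# 'lbfgs' (a limited-memory BFGS variant) iteratively searches for the best weights
clf = LogisticRegression(solver="lbfgs", max_iter=1000)
clf.fit(X, y)
print(clf.coef_, clf.intercept_)
```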
| null | CC BY-SA 4.0 | null | 2023-05-28T12:04:29.540 | 2023-05-28T12:04:29.540 | null | null | 88800 | null |
121814 | 2 | null | 121809 | 1 | null | Consider using third-party social media monitoring tools that offer Reddit tracking capabilities. Tools like Brandwatch, Mention, or Hootsuite may have options to monitor Reddit and identify popular discussions and trends. These tools often provide sentiment analysis, engagement metrics, and other useful features for monitoring social media platforms.
| null | CC BY-SA 4.0 | null | 2023-05-28T12:13:49.170 | 2023-05-28T16:19:25.693 | 2023-05-28T16:19:25.693 | 88800 | 88800 | null |
121815 | 2 | null | 69530 | 0 | null | Regarding your consideration of Long Short-Term Memory (LSTM), it's important to note that LSTM is primarily used for sequential data prediction tasks rather than clustering. LSTM is suitable when you want to predict future trajectories or classify sequences based on their temporal dependencies. You could try some of these too (a short clustering sketch follows the list):
- K-Means Clustering: K-Means is a popular clustering algorithm that groups data points into a specified number of clusters. It calculates the distance between data points and cluster centroids to assign them to the nearest cluster.
- DBSCAN: Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is another clustering algorithm that groups data points based on their density. It identifies dense regions separated by sparser areas. DBSCAN can be useful if your data has varying densities or if you want to identify outliers or noise in the trajectories.
- Gaussian Mixture Models (GMM): GMM is a probabilistic model that represents the data distribution as a combination of Gaussian distributions. It can capture complex patterns in your trajectory data and assign data points to different clusters based on the probabilities of belonging to each Gaussian component.
- Hierarchical Clustering: Hierarchical clustering builds a tree-like structure of clusters, also known as a dendrogram. It enables you to identify both individual clusters and nested subclusters within your trajectory data. Agglomerative clustering is a common approach in hierarchical clustering, where each data point starts as its own cluster and is iteratively merged based on distance or similarity measures.
- Self-Organizing Maps (SOM): SOM is an unsupervised learning algorithm that uses a neural network to create a low-dimensional representation of your trajectory data. It can help visualize and cluster the trajectories based on their similarities.
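Here is that sketch, assuming scikit-learn and assuming each trajectory has already been converted into a fixed-length feature vector (e.g. resampled coordinates); the data below is random and only for illustration:
```
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

rng = np.random.default_rng(0)
trajectories = rng.random((100, 20))  # 100 trajectories, each described by 20 features (made up)

kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(trajectories)
dbscan_labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(trajectories)

print(kmeans_labels[:10])
print(dbscan_labels[:10])  # -1 marks points DBSCAN treats as noise
```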
| null | CC BY-SA 4.0 | null | 2023-05-28T12:18:56.563 | 2023-05-28T12:18:56.563 | null | null | 88800 | null |
121816 | 2 | null | 121801 | 0 | null | The approach you proposed using percentiles and calculating the mean difference can be a simple way to rank datasets based on the similarity of their distributions. This method focuses on comparing the distribution characteristics of two features. However, it's important to note that this approach may not capture all aspects of the distributions and may overlook certain nuances.
There are more sophisticated statistical measures that can be used to assess the similarity of distributions. Here are a few alternatives you might consider (a short SciPy sketch follows the list):
- Kolmogorov-Smirnov Test: The Kolmogorov-Smirnov test is a statistical test that compares the cumulative distribution functions (CDFs) of two datasets. It quantifies the maximum difference between the CDFs, providing a measure of similarity between the distributions.
- Jensen-Shannon Divergence: Jensen-Shannon Divergence is a symmetric measure of the similarity between two probability distributions. It calculates the average of the Kullback-Leibler divergences between the two distributions and their average distribution.
- Earth Mover's Distance (EMD): The Earth Mover's Distance, also known as Wasserstein distance, measures the minimum amount of "work" required to transform one distribution into another. It considers the spatial relationship between data points and can capture both shape and location differences between distributions.
- Bhattacharyya Distance: Bhattacharyya Distance is a statistical measure that quantifies the similarity between two probability distributions. It takes into account both the means and variances of the distributions, providing a metric that reflects their overlap.
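Here is that sketch of how some of these measures can be computed with SciPy; the two samples below are synthetic and only for illustration:
```
import numpy as np
from scipy.stats import ks_2samp, wasserstein_distance
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1000)
y = rng.normal(0.3, 1.2, 1000)

print(ks_2samp(x, y).statistic)    # Kolmogorov-Smirnov: max gap between empirical CDFs
print(wasserstein_distance(x, y))  # Earth Mover's / Wasserstein distance

# Jensen-Shannon distance computed on binned (histogram) estimates of the two densities
bins = np.histogram_bin_edges(np.concatenate([x, y]), bins=30)
p, _ = np.histogram(x, bins=bins, density=True)
q, _ = np.histogram(y, bins=bins, density=True)
print(jensenshannon(p, q))
```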
| null | CC BY-SA 4.0 | null | 2023-05-28T12:21:32.177 | 2023-05-28T12:21:32.177 | null | null | 88800 | null |
121817 | 1 | null | null | 0 | 29 | I would like to understand the PyTorch RNN module in detail, so I created a very simple and basic example:
```
import torch
import torch.nn as nn
# example input data
i_data = torch.arange(1,10).reshape((9,1))
rnn = nn.RNN(1, 1, 1, batch_first=False) # input_size, hidden_size, num_layers
# Predefined weight and bias values to recalculate the RNN's result.
state_dict = rnn.state_dict()
state_dict["weight_ih_l0"] = torch.tensor([[2.0]])
state_dict["weight_hh_l0"] = torch.tensor([[3.0]])
state_dict["bias_ih_l0"] = torch.tensor( [4.0] )
state_dict["bias_hh_l0"] = torch.tensor( [5.0] )
rnn.load_state_dict(state_dict )
```
Now comes the part that I find incomprehensible...
If I initialize the hidden_state with the following values
```
hidden_state = torch.zeros(1,1, dtype=torch.float32)
```
and feed the input data to the model
```
y, hidden_state = rnn(i_data, hidden_state)
```
I got the error:
assert (input.dim() in (2, 3)), f"RNN: Expected input to be 2-D or 3-D but received {input.dim()}-D tensor"
AssertionError: RNN: Expected input to be 2-D or 3-D but received 1-D tensor
I also tried many different combinations, like
- hidden_state = torch.zeros(1,9, dtype=torch.float32) -> RuntimeError: Expected hidden size (1, 1, 1), got [1, 1, 9]
- hidden_state = torch.zeros(1,1,9, dtype=torch.float32) -> RuntimeError: For unbatched 2-D input, hx should also be 2-D but got 3-D tensor
I'm totally confused. Can someone please help me. Many thanks.
| RNN with PyTorch - I don't understand the initial parameters | CC BY-SA 4.0 | null | 2023-05-28T15:04:30.640 | 2023-06-02T11:06:46.070 | null | null | 150212 | [
"rnn",
"pytorch"
] |
121818 | 1 | null | null | 0 | 14 | In the decoder of the transformer, suppose I have predicted 3 words so far, including the start token. The last decoder layer will then produce 3 vectors of size d_model, and only the last vector is passed through the output embedding layer to form the logits. Am I getting this right? It is nowhere mentioned in the original paper and I'm having a hard time understanding it.
What about the information that gets lost by discarding the two vectors before the last one? We could try to linearly project all the vectors into a single d-dimensional vector, but then the number of vectors would keep increasing every time we predict a new word and we would need a new projection matrix every time.
This detail seems implicit and isn't mentioned anywhere. Can someone tell me what is actually done and the reason behind it, or is this just a heuristic that happens to work (i.e. simply take the final hidden state produced by the decoder)?
| About the last decoder layer in transformer architecture | CC BY-SA 4.0 | null | 2023-05-28T16:47:14.607 | 2023-05-29T06:33:01.137 | null | null | 150283 | [
"deep-learning",
"neural-network",
"nlp",
"transformer",
"linear-algebra"
] |
121819 | 1 | null | null | 0 | 11 | I'm solving the digit-recognition problem from Kaggle and see the following picture: there is a difference of about 20% between the most and least represented classes. So, do we need to do oversampling/undersampling here? I am a beginner and do not understand whether it is worth using ENN, NM1, OSS, SMOTE or another popular technique in such a situation.
Thanks for any help
[https://i.stack.imgur.com/Ndb5q.png](https://i.stack.imgur.com/Ndb5q.png)
| Do we need to make oversampling/undersampling here? | CC BY-SA 4.0 | null | 2023-05-28T17:11:37.360 | 2023-05-28T17:11:37.360 | null | null | 150282 | [
"kaggle",
"mnist"
] |
121820 | 1 | null | null | 0 | 22 | Suppose I have a regression model of the form
` y = Beta0 + Beta1*x1 + Beta2*log(x2) + Beta3*sin(Beta4*x4)`.
This model is nonlinear in terms of both the parameters (Beta0, Beta1, Beta2, Beta3, Beta4) and the variables (x1, x2, x4).
My question is: can I call the coefficients Beta1, Beta2, Beta3 slope coefficients for x1, log(x2) and sin(Beta4*x4)?
Or how can I interpret them?
Also, should I use a nonparametric model to fit this type of relationship, or a polynomial one?
Your kind suggestion is appreciated.
| What is nonlinear regression in terms of both parameter and variable? | CC BY-SA 4.0 | null | 2023-05-29T03:33:00.243 | 2023-05-29T03:33:00.243 | null | null | 150294 | [
"predictive-modeling",
"non-parametric"
] |
121821 | 1 | null | null | 0 | 20 | Suppose I have a TFT5ForConditionalGeneration model instantiated with TensorFlow.
When you call model.fit, the model passes the inputs in a feed-forward way, similar to a call method. It then generates the outputs, which include the logits, past key values, and so on, and compares the logits to the labels. My question is: how does the model know to compare the logits against the labels?
When I normally create a model with the functional API, the last layer is a softmax (or its equivalent), which is what I compare my labels against. But here, when I pass the inputs, I get multiple things as outputs, like logits and past key values. How does the model correctly understand that my labels are to be matched against the logits?
I thought I would need to extend the model using the functional API and make the final layer output model.logits.
How does model.fit work without doing any of this? Kindly clarify this silly question!
| Extremely silly transformers fine-tuning doubt | CC BY-SA 4.0 | null | 2023-05-29T05:57:14.227 | 2023-05-29T05:57:14.227 | null | null | 148562 | [
"nlp",
"tensorflow",
"transformer",
"machine-translation",
"huggingface"
] |
121822 | 2 | null | 121692 | 0 | null | It means that your training/validation/test data is not representative of the real world.
You should get fresh new data from the real world, and use it instead.
Of course I trust that your training/validation/test directories do not contain duplicates.
| null | CC BY-SA 4.0 | null | 2023-05-29T06:19:01.117 | 2023-05-29T06:19:01.117 | null | null | 18790 | null |
121823 | 2 | null | 121818 | 0 | null | I understand that we are talking about inference time (i.e. decoding), not training.
At each decoding step, all the predicted tokens are passed as input to the decoder, not only the last one. There is no information lost. The hidden states of the tokens that had already been decoded in the previous decoding steps are recomputed; however, non-naive implementations usually cache those hidden states to avoid recomputing them over and over.
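For illustration, here is a rough sketch (using GPT-2 from Hugging Face merely as a stand-in decoder-only model; this is not from the original question) showing that re-feeding the whole prefix and reusing cached key/values give the same next-token logits:
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("The transformer decoder", return_tensors="pt").input_ids
with torch.no_grad():
    # naive decoding step: the full prefix (all tokens predicted so far) is passed again
    logits_full = model(ids).logits[:, -1]

    # cached decoding step: earlier hidden states are reused via past_key_values
    prefix_out = model(ids[:, :-1], use_cache=True)
    logits_cached = model(ids[:, -1:], past_key_values=prefix_out.past_key_values).logits[:, -1]

print(torch.allclose(logits_full, logits_cached, atol=1e-4))  # True, up to numerical noise
```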
| null | CC BY-SA 4.0 | null | 2023-05-29T06:33:01.137 | 2023-05-29T06:33:01.137 | null | null | 14675 | null |
121824 | 1 | null | null | 0 | 14 | I'm following Stanford's Natural Language Processing course on Coursera. I'm learning about the "Continuous bag of words" model, where a neural network with one ReLU (first layer) and one softmax (second layer) is involved. The gradient of J with respect to W1 is given as:
$$\frac{\partial J}{\partial W_1} = \frac{1}{m}\,\mathrm{ReLU}\!\left(W_2^T(\hat{Y} - Y)\right)X^T.$$
But according to the general formula of backpropagation, isn't that supposed to be:
$$\frac{\partial J}{\partial W_1} = \frac{1}{m}\left(W_2^T(\hat{Y} - Y) \odot [Z_1 > 0]\right)X^T,$$
where $[Z_1 > 0]$ is just the derivative of the first layer's ReLU activation.
My question: how are these two expressions equivalent? I have tried many ways to connect them!
| How does relu appears in first layer gradient of backpropagation? | CC BY-SA 4.0 | null | 2023-05-29T07:07:54.590 | 2023-05-29T07:07:54.590 | null | null | 149261 | [
"nlp",
"backpropagation",
"bag-of-words"
] |
121825 | 2 | null | 72163 | 0 | null | You can create a light model (like a Random Forest) and inspect it after printing it. Or you can use [EBM](https://interpret.ml/docs/ebm.html) to understand which features affect your result and in which proportion.
| null | CC BY-SA 4.0 | null | 2023-05-29T08:32:06.127 | 2023-05-29T08:32:06.127 | null | null | 150300 | null |
121826 | 2 | null | 121692 | 0 | null | First of all, is your dataset balanced? If not, accuracy is a bad metric to use because it will be strongly influenced by the most represented class. You can use the F1-score instead, which will be better.
Are you sure that your test set is different from your training set?
Otherwise, as @nicolas-raoul suggests, make sure that your training and test sets are properly separated and representative.
Good luck with your investigation !
| null | CC BY-SA 4.0 | null | 2023-05-29T08:38:28.600 | 2023-05-29T08:39:12.570 | 2023-05-29T08:39:12.570 | 150300 | 150300 | null |
121827 | 2 | null | 106511 | 0 | null | Have a look at KeyBERT ([https://github.com/MaartenGr/KeyBERT](https://github.com/MaartenGr/KeyBERT)), which extracts keywords.
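A minimal usage sketch (assuming the keybert package is installed; the document text below is made up):
```
from keybert import KeyBERT

doc = "Supervised learning is the machine learning task of learning a function that maps an input to an output."

kw_model = KeyBERT()  # downloads a default sentence-transformers model on first use
keywords = kw_model.extract_keywords(doc, keyphrase_ngram_range=(1, 2), top_n=5)
print(keywords)  # list of (phrase, relevance score) tuples
```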
| null | CC BY-SA 4.0 | null | 2023-05-29T09:04:49.757 | 2023-05-29T09:04:49.757 | null | null | 119653 | null |
121828 | 1 | null | null | 0 | 36 | I have force plate data and smart insole data. I want to build a regression model that predicts the force plate data from the smart insole data. I want to add variations to the input and output data and find the relationship between the two using data augmentation. My smart insole has 89 sensor points. The data I have are force plate (15000, 1) and smart insole (15000, 89). What kind of data augmentation is suitable for my case? I want each of the 89 insole channels to keep its correlation with the force plate data.
my smartinsole data:
[](https://i.stack.imgur.com/IsXPv.png)
my force plate data:
[](https://i.stack.imgur.com/zPaiZ.png)
Dataset:
if you want to help me you can use the following dataset. when you load the force plate data, just select the data in the fx column only
[https://drive.google.com/file/d/1YCH_IQCeHeCPdjsImtFJF8ohLmqqbTIU/view?usp=sharing](https://drive.google.com/file/d/1YCH_IQCeHeCPdjsImtFJF8ohLmqqbTIU/view?usp=sharing)
| what the best data augmentation for the time series data | CC BY-SA 4.0 | null | 2023-05-29T10:06:19.583 | 2023-05-29T10:06:19.583 | null | null | 143059 | [
"data-augmentation"
] |
121829 | 1 | null | null | 0 | 4 | I was wondering if anyone has an opinion on this question.
[https://stats.stackexchange.com/q/617065/388946](https://stats.stackexchange.com/q/617065/388946)
| Use of t_family distribution in glmmTMB | CC BY-SA 4.0 | null | 2023-05-29T10:07:19.413 | 2023-05-29T10:07:19.413 | null | null | 150305 | [
"distribution"
] |
121830 | 2 | null | 30097 | 0 | null | Here is some advice:
1. Use dropout.
2. Try regularization, such as weight decay: `optimizer = optim.SGD(model.parameters(), lr=learning_rate, weight_decay=0.1)`
3. Try changing the model.
4. Increase the amount of data.
Sometimes small changes like these can noticeably improve the results you get with ML and DL models.
| null | CC BY-SA 4.0 | null | 2023-05-29T11:08:35.507 | 2023-05-29T11:08:35.507 | null | null | 150307 | null |
121831 | 1 | null | null | 0 | 11 | I want to correlate 3 NDVI time series coming from different satellites (Sentinel 2, Landsat 7 and Landsat 8). The date range covers months 4 through 7 from 2017 to now. There are two regions of interest, KM and LM. Each region has at least 100 fields and each field has its own NDVI time series.
These time series are separated into 6 CSV tables, one for each combination of the 3 satellites and the 2 regions. The table names should be straightforward:
- korrKomo_l7
- korrKomo_l8
- korrKomo_s2
- korrLanmo_l7
- korrLanmo_l8
- kottLanmo_s2
Fields are "date", "NDVI" and "field_id". This is an overly simplified example of one of my tables for the KM (Komo) region.
```
date NDVI field_id
08/04/2017 0.33 KM1
01/03/2018 NA KM1
27/07/2020 0.60 KM1
08/04/2017 0.4 KM34
01/03/2018 NA KM34
21/07/2020 0.56 KM34
27/07/2020 0.58 KM34
```
Here are my problems:
- Dates for each field_id are almost always repeated (only showing NA when a field was covered by clouds upon image acquisition). However, values for each field in the same date represent more samples and should be kept.
- Although there is data for two different regions, they merely represent extra samples and should be grouped by satellite for analysis.
- Satellites to be correlated acquire images on different dates. A precise correlation by date is not possible.
- Landsat 7 and 8 acquire images in much bigger intervals than Sentinel 2 (ca. every 15 days as opposed to ca. every 5 days).
Here is my endgoal:
- A correlation analysis of the NDVI time-series values of 3 satellites using the combined data of several fields, which often have overlapping acquisition dates and belong to two different regions.
Data analysis is not my field and I don't know how to prepare my data for this correlation. What is holding me back the most are the duplicated dates. I don't know how to make my correlation deal with it.
I appreciate advice on what a processing workflow could look like. I work with R.
| Correlating 3 time-series w/ repeated dates due to subgroups | CC BY-SA 4.0 | null | 2023-05-29T11:30:25.457 | 2023-05-29T11:30:25.457 | null | null | 150308 | [
"regression",
"r",
"dataset",
"data-cleaning",
"correlation"
] |
121832 | 1 | null | null | 0 | 40 | I have a set of data on individuals' performance in 1960,1970,1980 and 1990, e.g. chess rating in those years for a bunch of players with 40-year careers. I've been asked to build a model to predict 1990 performance based on history. So I built a model using all the data except 1990 as input and 1990 data as desired output. Is this leakage given that I'm using 1990 data to predict 1990 data? If I split the data to train/test (with different individuals in train and test), will the test predictions be valid?
| Is this a case of leakage or not? | CC BY-SA 4.0 | null | 2023-05-29T11:46:41.393 | 2023-05-31T06:06:02.173 | 2023-05-29T12:07:12.117 | 54188 | 54188 | [
"data-leakage"
] |
121835 | 2 | null | 36450 | 0 | null | In the "classical" [gradient method](https://databasecamp.de/en/ml/gradient-descent), the gradient is calculated after each batch, which is why it is also called (mini-)batch gradient descent. A batch is a part of the training data, which in many cases has 32 or 64 training instances. First, the predictions for all instances in the batch are calculated, and then the weights are changed by [backpropagation](https://databasecamp.de/en/ml/backpropagation-basics). This can require a lot of computing power, especially for complex models, for example in image or speech processing. In these applications, the information is additionally relatively sparse, which means that although the data has many attributes, these often have the value 0.
[Stochastic Gradient Descent](https://databasecamp.de/en/ml/stochastic-gradient-descent-en) therefore takes the approach of calculating the gradient not from a batch, but from a single data point: in each iteration only one data point is used. This reduces the computational and memory load enormously, since the rest of the batch does not have to be kept in working memory. It is called [Stochastic Gradient Descent](https://databasecamp.de/en/ml/stochastic-gradient-descent-en) because in each training step the gradient is only an approximation of the actual gradient.
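To make this concrete, here is a tiny NumPy sketch (synthetic linear data, not from the original answer) in which each weight update uses a single randomly chosen data point:
```
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.standard_normal(200)

w = np.zeros(3)
lr = 0.1
for epoch in range(50):
    for i in rng.permutation(len(X)):       # stochastic: one sample per update
        grad = (X[i] @ w - y[i]) * X[i]     # gradient of the squared error for that one sample
        w -= lr * grad

print(w)  # should end up close to [1.5, -2.0, 0.5]
```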
| null | CC BY-SA 4.0 | null | 2023-05-29T16:04:04.917 | 2023-05-29T16:04:04.917 | null | null | 130460 | null |
121836 | 2 | null | 23384 | 0 | null | In the context of [stochastic gradient descent (SGD)](https://databasecamp.de/en/ml/stochastic-gradient-descent-en) in neural networks, the term "stochastic" refers to the randomness introduced during the weight updates. Unlike traditional gradient descent, which computes the gradient using the entire dataset, SGD updates the weights based on a random subset of the training data, known as a mini-batch. This introduces stochasticity or randomness into the optimization process.
The randomness arises from the random selection of mini-batches from the training data. Each mini-batch represents a random sample of the dataset, and the gradients computed on these mini-batches are used to update the weights. By randomly sampling mini-batches, SGD introduces variations in the estimated gradients, which can help the optimization process escape local minima and explore different regions of the weight space.
| null | CC BY-SA 4.0 | null | 2023-05-29T16:07:43.763 | 2023-05-29T16:07:43.763 | null | null | 130460 | null |
121837 | 2 | null | 121797 | 2 | null | I think this is a problem that may be solved using distribution functions.
I. The mathematical model for `Y` is a Bernoulli distribution. The Bernoulli distribution is a probability distribution that describes the outcome of a single trial of an experiment with two possible outcomes, such as a coin flip. In this case, the two possible outcomes are that the professor will order tea or coffee. The probability of the professor ordering tea is denoted by `p`, and the probability of the professor ordering coffee is denoted by `1-p`.
The shop owner `Y` has been observing the professor for one month, and he has observed that the professor orders tea 60% of the time and coffee 40% of the time. This means that `p = 0.6` and `1-p = 0.4`.
II. The following formula gives the solution for the Bernoulli distribution:
```
P(X = x) = p^x (1-p)^(1-x)
```
Where `x` is 1 if the professor orders tea and 0 if they order coffee (a single trial).
We want to find the probability that the professor orders tea on any given day, i.e. `x = 1`. Plugging this into the formula, we get:
```
P(X = 1) = 0.6^1 (1-0.6)^(1-1) = 0.6
```
This means that the probability that the professor orders tea on any given day is simply `p = 0.6`.
III. The algorithm for `Y` to predict what the professor will order is as may be:
- Generate a random number between 0 and 1.
If the random number is less than p, then predict that the professor will order tea.
Otherwise, predict that the professor will order coffee.
For example, if the random number is 0.5, then Y would predict that the professor will order tea, because 0.5 is less than p = 0.6.
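A direct sketch of this algorithm in Python, with p = 0.6 as estimated from the month of observations:
```
import random

p = 0.6  # estimated probability that the professor orders tea

def predict_order():
    """Randomly predict 'tea' with probability p, otherwise 'coffee'."""
    return "tea" if random.random() < p else "coffee"

print([predict_order() for _ in range(10)])
```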
The accuracy of this algorithm will depend on the value of p. If p is close to 0 or 1, then the algorithm will be very accurate. However, if p is close to 0.5, then the algorithm will not be as accurate.
In this case, p = 0.6. A randomized prediction rule like this is expected to be right about p^2 + (1-p)^2 = 52% of the time; simply always predicting the more frequent drink (tea) would be right about 60% of the time, which is the best the shop owner can do with this information alone.
Hope it helps!
| null | CC BY-SA 4.0 | null | 2023-05-29T16:49:26.020 | 2023-05-29T16:49:26.020 | null | null | 92050 | null |
121838 | 1 | null | null | 0 | 8 | I'm working on a problem for which I want to do some dimensionality reduction using 3 different PCAs of 2 variables each. Basically, I want to perform a PCA and keep the first component for each of the pairs M.max-max.Z, M.min-min.Z, and P-N.
For this I'm executing the following code:
```
pca.min<-M.scaled%>%
select(M.min, min.Z)%>%
princomp()
pca.max<-M.scaled%>%
select(M.max, max.Z)%>%
princomp()
pca.NP<-M.scaled%>%
select(N, P)%>%
princomp()
```
My problem is that even though these variables are different, have different values, and are not highly correlated (except the min.Z-M.min pair, which has a 0.74 correlation), when I ask for the PCA loadings I get:
```
> pca.min$loadings
Loadings:
Comp.1 Comp.2
M.min 0.707 0.707
min.Z -0.707 0.707
Comp.1 Comp.2
SS loadings 1.0 1.0
Proportion Var 0.5 0.5
Cumulative Var 0.5 1.0
> pca.max$loadings
Loadings:
Comp.1 Comp.2
M.max 0.707 0.707
max.Z 0.707 -0.707
Comp.1 Comp.2
SS loadings 1.0 1.0
Proportion Var 0.5 0.5
Cumulative Var 0.5 1.0
> pca.NP$loadings
Loadings:
Comp.1 Comp.2
N 0.707 0.707
P 0.707 -0.707
Comp.1 Comp.2
SS loadings 1.0 1.0
Proportion Var 0.5 0.5
Cumulative Var 0.5 1.0
```
So they all have the same loading for each variable (0.707). When I do the biplots, the proportion of variance explained by each component is different (not 0.5 as $loadings shows), so my problem is not so much the explained variance of each component but rather the loading of each variable on the components being the same. I don't understand why this is happening. I am definitely not a PCA expert, so this might be a rather obvious question to some, but it's baffling me. I have tried using different PCA functions and the result is the same.
PS. All data is previously centered and standardized.
Here are the scree plots for the 3 PCAs
[](https://i.stack.imgur.com/815To.png)
[](https://i.stack.imgur.com/S8Ks7.png)
[](https://i.stack.imgur.com/m0muW.png)
| Problem with 2 variable PCA loadings- Loadings are the same for all variables | CC BY-SA 4.0 | null | 2023-05-29T18:11:48.823 | 2023-05-29T18:16:59.733 | 2023-05-29T18:16:59.733 | 150315 | 150315 | [
"r",
"pca",
"dimensionality-reduction"
] |
121839 | 1 | null | null | 0 | 11 | I'm looking for a dataset related to environmental monitoring, made up of values obtained from various types of sensors (such as temperature, pressure, CO2...etc) for the purpose of a classification task. However, I'm not sure this is possible with any dataset, so I wanted to ask you if you had any suggestions for a dataset I could use.
Multimodal data (images and sensors) are also accepted and could be very useful.
Please let me know if you have any recommendations or potential sources. I've already looked through Google Datasets and Kaggle.
| I'm looking for a dataset on environmental monitoring | CC BY-SA 4.0 | null | 2023-05-29T18:14:57.430 | 2023-05-29T18:14:57.430 | null | null | 150316 | [
"deep-learning",
"classification",
"dataset",
"data",
"data-analysis"
] |
121840 | 2 | null | 121832 | 0 | null | If I understood correctly, your problem is formulated as follows:
You have 3 independent variables (the 1960, 1970 and 1980 values) and you want to use those to predict the value of the 1990 variable.
By training your model on all the players, the model essentially has already seen all possible inputs. It would be trivial for a good model to achieve an accuracy close to 100%.
In order to check your model fairly, you would need to keep some of the rows (players) "secret", and then use those as test samples to test your model with unknown data.
If you do this, you will have a much better picture of the performance of your model (with the trade-off of having less training data). But doing it is essentially mandatory; otherwise the results of testing on data the model has already seen are pretty much meaningless.
Update:
In case you are already doing train/test split, there still is a way to have leakage. If you make your predictions on the test set, and then use the insights you learned from this to change your model, this is indeed leakage.
There are two ways to combat this.
One way to fix this is to split the test set further into validation and test sets, make all of your procedures and fixes to the model using its validation accuracy as a measure of success, and then (once you say "my model is ready") do the final prediction on the test set to judge the performance on unknown data.
If you do not have that much data and it is not practical to split it further, you can also try doing cross-validation on the training set instead.
| null | CC BY-SA 4.0 | null | 2023-05-29T18:51:12.223 | 2023-05-31T06:06:02.173 | 2023-05-31T06:06:02.173 | 79520 | 79520 | null |
121841 | 1 | null | null | 1 | 94 | The [Cubes data](https://www.kaggle.com/code/venkateshkulkarni11/classification-using-concept-vectors) is a well-known dataset for extreme classification. Each picture comes with a set of descriptors; the whole dataset has 312 descriptors. You will find the list of descriptors in [this file](https://docs.google.com/document/d/1ZkBV0hfnnxqjymmKiet-Cdh_qJ-9CQ7ruVgUopiFCs8/edit?usp=sharing).
My question is how to find a vector representation for each descriptor so that similarity between vectors reflects the semantic similarity of the descriptors.
| How to find a vector representation for each descriptor? | CC BY-SA 4.0 | null | 2023-05-29T21:49:17.030 | 2023-06-01T03:43:53.987 | 2023-05-29T21:54:26.227 | 149761 | 149761 | [
"classification",
"descriptive-statistics",
"semantic-similarity",
"vector-space-models",
"context-vector"
] |
121843 | 2 | null | 121841 | 0 | null | A simple way to get started would be to simply call Word2Vec or Glove to get embeddings for the descriptors, and then refine from there. [https://www.kaggle.com/code/pierremegret/gensim-word2vec-tutorial](https://www.kaggle.com/code/pierremegret/gensim-word2vec-tutorial) is a good starting point that covers training. You can just start with pretrained vectors, something like
```
import gensim.downloader

glove_vectors = gensim.downloader.load('glove-wiki-gigaword-100')
```
Then you'll need to split your descriptors into individual words, getting rid of underscores, "-", etc. Once you have a list of words per descriptor, you can get the average embeddings with code like
```
words = ['upper', 'tail', 'color', 'orange']
mean_vec = glove_vectors.get_mean_vector(words)
```
And then use the mean_vec to determine similarity (e.g. cosine distance) from other embeddings.
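For example, a self-contained sketch (the second descriptor's word list is made up for illustration) that compares two descriptors by the cosine similarity of their mean vectors:
```
import numpy as np
import gensim.downloader

glove_vectors = gensim.downloader.load('glove-wiki-gigaword-100')

def descriptor_vector(words):
    # average GloVe embedding of the words making up one descriptor
    return glove_vectors.get_mean_vector(words)

a = descriptor_vector(['upper', 'tail', 'color', 'orange'])
b = descriptor_vector(['belly', 'color', 'yellow'])

cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(cosine)
```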
| null | CC BY-SA 4.0 | null | 2023-05-30T06:20:58.343 | 2023-05-30T13:58:00.327 | 2023-05-30T13:58:00.327 | 146483 | 146483 | null |
121844 | 1 | null | null | 0 | 36 | I have a list of about 10 dates in ascending order. These are the dates a building was open. I need to predict the next date using this. I tried scikit-learn like below:
```
import datetime
from sklearn.linear_model import LinearRegression
dates = available_dates
# Convert dates to datetime.datetime objects
datetime_dates = [datetime.datetime.combine(date, datetime.datetime.min.time()) for date in dates]
# Create X and y for regression
X = [[date.timestamp()] for date in datetime_dates]
y = list(range(1, len(dates) + 1))
# Create a linear regression model
regression_model = LinearRegression()
# Fit the model to the data
regression_model.fit(X, y)
# Predict the next date when the house will become available
next_date_timestamp = regression_model.predict([[datetime_dates[-1].timestamp() + 24 * 60 * 60]])
next_date = datetime.datetime.fromtimestamp(next_date_timestamp[0])
print("Next available date:", next_date.date())
```
The next date always comes out as 1969-12-31.
Is this because the dataset is too small?
Should I use something else, or can I fix this somehow?
I am a complete noob to ML and I have an urgent deadline :(. Please help me with what tech to use or what to google. Any help is appreciated.
| How to predict the next date with Python ML | CC BY-SA 4.0 | null | 2023-05-30T07:44:45.063 | 2023-05-30T14:39:24.650 | null | null | 150326 | [
"scikit-learn",
"time-series",
"prediction"
] |
121845 | 2 | null | 120461 | 0 | null | I think while stacking and then calibrating on top may be a methodologically sound approach, the added computational and architectural complexities would be huge.
Plus, to reduce the risk of data leakage and overfitting, you have to ensure that the data used to train the base models are not the same data used to train the meta-learner, which is in turn not the same data used for calibration. Partitioning the data this many times may be problematic depending on your problem and data sample. In practice, we may get away with being less rigorous here but the results will come back eventually if the model is going to be deployed commercially.
I might suggest going with base learners and calibration and only going with stacking if absolutely necessary. In my experience, end users might sometimes care more about interpretability (which calibration helps provide in terms of reliable estimates of the probabilities) than pure prediction accuracy.
In terms of computation, I have had some success scaling by using Rapids' cuml base estimators with Sklearn's calibration utilities. You may be interested in checking them out [here](https://docs.rapids.ai/api/cuml/stable/).
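As a hedged sketch of that combination (here with a plain scikit-learn random forest standing in for a cuML estimator, and synthetic data): fit the base model on one split, then learn the calibration mapping on a held-out split.
```
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_fit, X_cal, y_fit, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)

base = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_fit, y_fit)

# 'prefit' tells the calibrator not to refit the base model, only to learn the mapping
# from its raw probabilities to calibrated ones on the held-out calibration split
calibrated = CalibratedClassifierCV(base, method="isotonic", cv="prefit").fit(X_cal, y_cal)
print(calibrated.predict_proba(X_cal[:5]))
```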
| null | CC BY-SA 4.0 | null | 2023-05-30T09:39:45.950 | 2023-05-30T09:41:57.870 | 2023-05-30T09:41:57.870 | 144052 | 144052 | null |
121846 | 1 | null | null | 0 | 21 | I want to reproduce the results in ["Online Neural Networks for Change-Point Detection" Hushchyn et al.](https://arxiv.org/pdf/2010.01388.pdf), but I'm having trouble implementing their loss function with Keras. The algorithm works on sequential data and computes the cross entropy between two segments of a time series separated by $l$ time steps, $X(t)$ and $X(t-l)$, according to eq. 10 in their paper:
$L(X(t-l), X(t)) = -\log(1-f(X(t-l),\theta)) - \log(f(X(t), \theta))$
where $f(X, \theta)$ is the neural network we want to train and $\theta$ are its parameters. I don't know how to implement this loss function in Keras, as the loss functions I'm used to working with compute the loss separately on each instance in the same way (i.e. MSE). Any suggestions?
| How to implement a custom loss function acting differently on multiple instances with keras? | CC BY-SA 4.0 | null | 2023-05-30T10:12:23.517 | 2023-05-30T10:12:23.517 | null | null | 150328 | [
"keras",
"time-series",
"loss-function"
] |
121847 | 2 | null | 115711 | 2 | null | For my projects I use [NLP Lab](https://www.johnsnowlabs.com/nlp-lab/), by John Snow Labs. It provides automated annotation and model training, saves time compared to other tools, and is completely free of charge. Another impressive thing about it (which I found) is that you don't need to have a prior experience in coding, because there is no coding involved. You can even invite your team members and collaborate with them in your project.
The documentation can be a long read, I have provided a direct link here: [docs](https://nlp.johnsnowlabs.com/docs/en/alab/quickstart)
| null | CC BY-SA 4.0 | null | 2023-05-30T11:30:40.430 | 2023-05-30T11:30:40.430 | null | null | 150329 | null |
121848 | 1 | null | null | 0 | 22 | I am a PhD student in data science, basically I design a model for a Vision / Language task. The dataset and the state of the arts models are public. It has been 2 years that I trained myself to use Docker for my dev environments / experiments. I am wondering if it's not overkill? Most of my colleagues use Conda and they seem to be fine with it.
TL;DR: What are potential issues with using Conda to set up my development environment as an academic data scientist?
## Context
Suppose you're designing or fine-tuning deep learning models. Essentially, you're a data scientist. In this scenario, you're operating in an academic environment, which means there's no need to send your models into production for a client or a demo. Your goal is to explore, train, and test your model on a given public database for your research paper. After conducting your experiments, you'll probably maintain a public repository to share your code and facilitate publication. Additionally, you have access to a remote cluster for computations, as you may not have 4 A100s in your local machine.
Note: The research process isn't quite as straightforward as this, as it's iterative and can be chaotic. I've just listed the main steps for simplicity.
## Question
What could be the potential issues with using Conda to set up a deep learning environment?
## My Attempted Answer
Here's my perspective, though I'm unsure of its accuracy. I don't use Conda myself; instead, I use Docker, and I'm wondering if that might be overkill.
Conda can construct a comprehensive development environment, encapsulated in a configuration file `environment.yml`. This file can store your Python packages, helper programs, and libraries with their versions. However, with Conda, you don't explicitly specify your system's requirements; instead, you download Conda packages. Thus, for tools like Git, the CUDA Toolkit, and others, you would need to find corresponding Conda packages.
Now let's turn our attention to the specifics of a deep learning environment, particularly CUDA libraries. Thanks to these libraries, we can leverage our GPU for model computations. According to the [CUDA documentation](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html), you'll notice extensive tables outlining the dependencies required based on your host OS. Therefore, with Conda, you cannot have a single configuration file that is host-independent, as the Conda packages you'll need to install will differ based on the host's OS.
A possible solution could be using Docker or any other container solution. A container shares the kernel with the host machine and can encompass any distribution of an OS (in this case, a Linux OS). This means you could use the same container across different machines, as the container includes an OS layer that Conda lacks.
I would welcome any corrections or additions to this post. I'm relatively inexperienced with these matters, so I may have overlooked something. Ultimately, I'm seeking a definitive answer to my question. I'm not certain if my attempted answer provides a part of the solution, or if it's entirely incorrect. Any insights would be greatly appreciated.
Note: I read this [exhaustive](https://datascience.stackexchange.com/a/57737/150330) answer. The problem is that it does not fit my case. I do not have many different clusters, production pipelines or any other complicated requirements. I "just" need to develop, train remotely, experiment and share a reproducible model.
| Analyzing the Suitability of Conda for Academic Deep Learning Projects | CC BY-SA 4.0 | null | 2023-05-30T13:00:13.113 | 2023-05-30T13:00:13.113 | null | null | 150330 | [
"anaconda"
] |
121849 | 2 | null | 121844 | 0 | null | You need to assign the value of 1 to the times when the house is available and the value of -1 to the times when it is not available. This would be your "y". Your X should be the timestamps, converted e.g. with pandas to_datetime (or something similar, not sure off the top of my head). Now you can give new (future) timestamps to predict whether the house is available (1) or not (-1).
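A rough sketch of that idea (the dates and the choice of classifier are made up; this only illustrates the mechanics, and whether a model can learn anything useful from a single timestamp feature depends entirely on the data):
```
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

available = pd.to_datetime(["2023-01-02", "2023-01-09", "2023-01-16", "2023-01-23"])
unavailable = pd.to_datetime(["2023-01-04", "2023-01-11", "2023-01-18", "2023-01-25"])

# features: days since epoch; labels: 1 = available, -1 = not available
X = np.concatenate([available.values, unavailable.values]).astype("datetime64[D]").astype(int).reshape(-1, 1)
y = np.array([1] * len(available) + [-1] * len(unavailable))

clf = RandomForestClassifier(random_state=0).fit(X, y)

future = pd.to_datetime(["2023-01-30"]).values.astype("datetime64[D]").astype(int).reshape(-1, 1)
print(clf.predict(future))
```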
| null | CC BY-SA 4.0 | null | 2023-05-30T13:21:29.810 | 2023-05-30T13:21:29.810 | null | null | 119938 | null |
121850 | 1 | null | null | 0 | 31 | I applied the best subset selection regression model from the `leaps` package in R to my `dat` dataframe. `x` and `y` are factors with 3 and 4 levels, respectively. The result of the summary of the model is shown in the image below.
```
library(leaps)
set.seed(123)
dat <- data.frame(
x = as.factor(sample(c(1,2,3), size = 10, TRUE)),
y = as.factor(sample(c(4,5,6,7), size = 10, TRUE)),
z = rnorm(10),
res = rnorm(10)
)
model.full <- regsubsets(res ~ ., data = dat, nvmax = 3)
summary(model.full)
```
summary result:
[](https://i.stack.imgur.com/3uXwH.jpg)
I would like to have a clear picture of what is going on in the results. Could anyone help me interpret them? It is clear that levels 2 and 3 of the `x` variable, while not selected for the best 1-variable and 2-variable models, have been chosen for the best 3-variable model. Also, from the model coefficients I can say:
[](https://i.stack.imgur.com/9oatU.jpg)
- Intercept 2.32 is the average value of the res for level 1 of x variable.
- X2 coefficient -2.41 can be interpreted as the difference in the average res between x level 2 and x level 1.
- X3 coefficient -2.92 can be interpreted as the difference in the average res between x level 3 and x level 1.
Is this interpretation correct?
This is just a hypothetical situation. My actual problem is that I have many factor variables with multiple levels, and I would like to apply the best subset selection method. I just wanted to make sure I interpret the results correctly.
| Interpretation of best subset selection regression model for factor variables with more than 2 levels | CC BY-SA 4.0 | null | 2023-05-30T13:53:22.697 | 2023-06-01T22:51:44.973 | null | null | 127461 | [
"machine-learning",
"regression",
"r",
"categorical-data"
] |
121851 | 1 | null | null | 0 | 14 | I want to detect attributes of objects in an image, like the color of a patch on a person's shirt, how many patches there are, the type of the objects, the exact dimensions of the objects, etc.
I've heard of [Blip2](https://huggingface.co/docs/transformers/main/model_doc/blip-2) but I'm not sure if it will do what I need above. Can someone suggest whether Blip2 is the right model or whether some other model is better for such metadata detection?
| Blip2 for image metada | CC BY-SA 4.0 | null | 2023-05-30T14:00:46.903 | 2023-06-01T12:21:08.323 | 2023-06-01T12:21:08.323 | 135707 | 60807 | [
"computer-vision"
] |
121852 | 2 | null | 121844 | 1 | null | So LinearRegression is essentially just trying to predict a line, y = mx + b. In your case, you’re feeding it y = [0,1,2,3,4,5,6,7,8,9], so once you’ve fit your model, when you call predict it’s going to return a very small number somewhere between 0 and 9 most likely. Then you feed this into timestamp and add 24 * 60 * 60, and interpret this as a date. Python (and other languages) interpret timestamp 0 as January 1st, 1970, so the timestamp you get back from LinearRegression is going to be somewhere in this neighborhood.
It's entirely possible to use LinearRegression if you can formulate your problem as a line, i.e. y=mx+b, but I'd suggest an easier approach to get started. Instead of using LinearRegression, you can take the difference between your last two times, add that to your last time, and use that as a first-order prediction. A second-order prediction would use the average of your last two differences, and so on.
This would look something like the following:
```
import datetime

dates = []
dates.append(datetime.datetime(2004, 2, 4))
dates.append(datetime.datetime(2005, 3, 5))
dates.append(datetime.datetime(2006, 4, 6))
diff = dates[2].timestamp()- dates[1].timestamp()
dates[2].timestamp() + diff
datetime.datetime.fromtimestamp(dates[2].timestamp() + diff, tz=None)
```
will return datetime.datetime(2007, 5, 7, 23, 0).
Note that this is a very simplistic approach, there are a ton of more sophisticated ways to approach time series data (google SARIMAX for example), but should give you enough to get started. hth.
| null | CC BY-SA 4.0 | null | 2023-05-30T14:39:24.650 | 2023-05-30T14:39:24.650 | null | null | 146483 | null |
121853 | 2 | null | 121817 | 0 | null | Running your code as is, I get an error about i_data being Long instead of Float. RuntimeError: expected scalar type Float but found Long. However, changing just the one line to
```
i_data = torch.arange(1,10).float().reshape((9,1))
```
and I get outputs y and hidden_state returned correctly.
```
hidden_state = torch.zeros(1,1, dtype=torch.float32)
y, hidden_state = rnn(i_data, hidden_state)
```
The parameters to the constructor for RNN are input_size, hidden_size, num_layers, and the arguments you pass to rnn are input, hidden_0. Everything seems correct at first glance to me. Do you maybe have variables still initialized from previous runs or is this working with the one-line change for you?
| null | CC BY-SA 4.0 | null | 2023-05-30T15:13:24.537 | 2023-05-30T15:13:24.537 | null | null | 146483 | null |
121854 | 1 | 121855 | null | 0 | 22 | I am new to ML and trying to solve a text segmentation problem.
I have a transcript of a news show and I want to split this transcript into parts by topic. I tried googling and asked ChatGPT and found a lot of info, but I don't understand how to properly approach this task.
It looks like a classic problem, but I can't find the proper name for it.
I am looking for help finding the proper name for this problem and for how to approach it with existing tools.
My initial thought was to use word embeddings -> sentence vectors with a rolling average to detect changes in topic, but this approach does not work. What are other ways to solve this problem?
| Text segmentation problem | CC BY-SA 4.0 | null | 2023-05-30T16:17:28.100 | 2023-05-30T17:18:06.247 | null | null | 150337 | [
"nlp",
"scikit-learn",
"word-embeddings",
"text",
"gensim"
] |
121855 | 2 | null | 121854 | 0 | null | The problem you are describing is not a classic NLP problem.
There is a similar classic NLP problem called "topic modelling", which consists of discovering topics in a collection of text documents. Topics are defined by a list of words relevant to the topic itself. The most paradigmatic approach to this problem may be Latent Dirichlet Allocation (LDA). It is an unsupervised learning approach.
Your problem, nevertheless, has somewhat also been approached from a machine learning perspective, at least partially. I can refer you to the article [Unsupervised Topic Segmentation of Meetings with BERT Embeddings](https://arxiv.org/pdf/2106.12978.pdf) by Meta. This is its abstract:
>
Topic segmentation of meetings is the task of dividing multi-person meeting transcripts into topic blocks. Supervised approaches to the problem have proven intractable due to the difficulties in collecting and accurately annotating large datasets. In this paper we show how previous unsupervised topic segmentation methods can be improved using pre-trained neural architectures. We introduce an unsupervised approach based on BERT embeddings that achieves a 15.5% reduction in error rate over existing unsupervised approaches applied to two popular datasets for meeting transcripts.
The authors released their source code at [github](https://github.com/gdamaskinos/unsupervised_topic_segmentation).
To understand its contents, you will need to have some background on [BERT](https://huggingface.co/blog/bert-101), an NLP neural network based on the [Transformer](https://arxiv.org/abs/1706.03762) architecture's encoder part. On [https://datascience.stackexchange.com/](https://datascience.stackexchange.com/) you can find plenty of specific questions and answers about it (and you can ask more if you don't find your specific doubts).
| null | CC BY-SA 4.0 | null | 2023-05-30T17:18:06.247 | 2023-05-30T17:18:06.247 | null | null | 14675 | null |
121856 | 1 | null | null | -2 | 41 | How do I solve this error? ValueError: Invalid classes inferred from unique values of `y`. Expected: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13], got ['Afghanistan' 'Australia' 'Bangladesh' 'England' 'India' 'Ireland'
'New Zealand' 'Pakistan' 'South Africa' 'Sri Lanka' 'West Indies'
'Zimbabwe' 'no result' 'tied']
I got the above error while using the code below:
```
final = pd.get_dummies(df_teams_2015, prefix=['Team_1', 'Team_2'], columns=['Team_1', 'Team_2'])

# Separate X and y sets
X = final.drop(['Winner'], axis=1)
y = final["Winner"]

# Separate train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)

rf = XGBRFClassifier(n_estimators=100, subsample=0.9, colsample_bynode=0.2)
rf.fit(X_train, y_train)

# report performance
print('Mean Accuracy: %.3f (%.3f)' % (mean(n_scores), std(n_scores)))
```
I used other models such as decision trees and random forests; they work perfectly, but with XGBoost I'm facing this problem.
| ValueError: in python | CC BY-SA 4.0 | null | 2023-05-30T18:07:24.847 | 2023-05-31T14:34:58.147 | 2023-05-31T09:25:48.320 | 146499 | 146499 | [
"python"
] |
121857 | 1 | null | null | 0 | 19 | As an exercise, I'm building a network for binary classification of sequences (whether a sequence belongs to type A or type B). The network consists of an RNN with one LSTM layer, and on top of it an MLP that outputs the classification. I input batches of sequences with different lengths into the network, which means I need to pad the sequences to make them equal in length, and to mask the outputs of the network to make them the same length as the original sequences.
What is the correct way to implement padding/masking in PyTorch? I have read about functions like `pad_sequence()`, `pack_sequence()`, `pack_padded_sequence()`, etc., but I have already become confused with all these functions... Or is there any other "secret" way that I don't know of?
| How to Implement padding and masking sequences for RNN | CC BY-SA 4.0 | null | 2023-05-30T18:28:25.463 | 2023-05-31T16:01:14.573 | 2023-05-31T13:23:29.680 | 117780 | 117780 | [
"python",
"rnn",
"pytorch"
] |
121858 | 2 | null | 80663 | 0 | null | You mentioned that X_train's shape is (1400, 64, 35), so we can create an LSTM model whose input shape will be (64, 35). You can choose the number of LSTM units as you like.
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, GlobalAveragePooling1D, Dense
# Define the model
model = Sequential()
# Add the LSTM layer
model.add(LSTM(units=64, input_shape=(64, 35), return_sequences=True))
# Add the GlobalAveragePooling layer
model.add(GlobalAveragePooling1D())
# Add the Dense layer
model.add(Dense(units=10, activation='softmax'))
# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# Print the model summary
model.summary()
```
| null | CC BY-SA 4.0 | null | 2023-05-30T19:59:14.590 | 2023-05-30T19:59:14.590 | null | null | 150344 | null |
121859 | 1 | null | null | 0 | 27 | I'm training a neural network on the results of a CFD simulation (or rather, 300-ish simulations with different initial conditions). The dataset contains the values for temperature, density, velocity, etc. at equidistant points on a known line segment (along a single spatial dimension) at each tick of the simulation. I also have some "metadata" i.e. some simulation parameters in each instance of the simulation. (like the initial conditions, the environmental conditions, etc.).
From what I understand, this is similar to image data (except it's 1D and we have physical quantities instead of RGB values); perhaps, I should melt the dataset and even use time as a second "dimension" rather than a column/feature. And from what I understand, a convolutional architecture might be the best option for such spatial/spatiotemporal data. I'll provide a simplified example of my dataframe at the bottom of this post.
In either case, my dataframe will have some non-spatial columns which are not to be used in the convolution (I'd probably use some fully connected Dense neurons for them), and others which are to be used (I'd probably use some Conv1D or Conv2D neurons for them). How do I build this kind of architecture on TensorFlow? I don't think the Sequential API can have parallel branches with different behaviours, right?
A simplified example:
```
--------------------------------------------------------------------------
| Time | Initial conditions | T1 | T2 | T3 | V1 | V2 | V3 | D1 | D2 | D3 |
|------|--------------------|----|----|----|----|----|----|----|----|----|
| 1 | 100 | 20 | 21 | 22 | 3 | 5 | 6 | 10 | 10 | 11 |
| 2 | 100 | 19 | 21 | 21 | 4 | 6 | 7 | 10 | 9 | 10 |
| ... | 100 | ... |
```
`x1` is the value of the quantity `x` at a known point `1`. Imagine hundreds of such CSV files, each having different initial conditions (these are also numerical, not categorical). I can load them, join them, and melt columns into rows or vice versa; no problem, I know the syntax for that. But TL;DR, my 2 questions are:
- Should I melt the time column into separate columns for each timestep (assuming I have sufficient computational power to handle such a wide dataset)?
- How do I convolve the temperature columns, and the velocity columns, and the density columns (and perhaps the timestep columns), while leaving the initial conditions as independent inputs?
Sorry. Kinda new to neural network coding, especially convolutional ones. If there are any helpful resources for a newbie, please send them my way! I've only worked with Dense layers and Sequential architectures before. And no heterogeneous layers.
| How do I implement convolution partially on my dataset? | CC BY-SA 4.0 | null | 2023-05-30T23:52:46.013 | 2023-06-03T10:12:12.507 | null | null | 136334 | [
"python",
"keras",
"tensorflow",
"convolutional-neural-network",
"simulation"
] |
121861 | 1 | null | null | 0 | 23 | First of all, my knowledge of this subject is very limited, so this may be a silly question.
I have 3 vectors. Two of them are distance vectors in the range 0 to 1, and the other contains Pearson correlation coefficients between -1 and 1. I want to look at the correlation between them in MATLAB, but I'm confused: is it necessary to normalize the correlation coefficients?
If so, I have read that correlation coefficients should not be averaged directly. Does the same apply when normalizing?
Thanks for the help.
| How can I correlate different types of variables? | CC BY-SA 4.0 | null | 2023-05-31T09:19:48.033 | 2023-05-31T09:19:48.033 | null | null | 150355 | [
"correlation"
] |
121862 | 1 | null | null | 0 | 16 | I want to compare the performance of two different ML models, M1 and M2. I have a very large data set and two different downsampled versions of it, call them S1 and S2. Can I compare the performance of M1 on S1 with the performance of M2 on S2? (Suppose that M1 and M2 are large enough.)
| Comparing Models based on two different sample sets of a single data set | CC BY-SA 4.0 | null | 2023-05-31T09:29:26.930 | 2023-05-31T09:29:26.930 | null | null | 91998 | [
"model-evaluations",
"sampling"
] |
121863 | 1 | null | null | 0 | 20 | I have trained a gradient boosting model on historical data to predict whether a person registers a business or not (binary classification problem). Right now the model is at the online A/B-test stage. During this test we have collected some real examples where the model hits the right person or misses. There is an incentive to add these examples to the training set and retrain the model to improve its performance (as in RL). The following difficulty arises:
- it will be great to calculate classification quality metrics (AUC-ROC, AUC-PR, etc.) for refined model before the new online test
- data is split time-wise to prevent data leak from future periods
- there is a small number of new positive examples from test (about 200) and hundreds of thousands of negative examples
- classification metrics calculated on small number of positive examples tend to be unstable, so it will be required to add all new examples to testing set for proper calculation of offline classification metrics as online test examples are most actual ones
- thus, it will be impossible to test this new approach with model refinement on hits and misses offline as the training set won't include the new information. I will only be able to check the performance of the model during new online test where the model will be trained on all available data.
My question is the following: is the described procedure the only possible option in this situation? Does it make sense to calculate classification metrics on testing sample with small number of positive examples (i.e. take 100 positive examples from ongoing A/B test to check the model performance and add other 100 to extended training set)? Is there any other way to check the proposed approach or I should try some other modeling technique?
| Retraining gradient boosting classifier on its hits and misses | CC BY-SA 4.0 | null | 2023-05-31T09:59:49.837 | 2023-05-31T09:59:49.837 | null | null | 150356 | [
"classification",
"class-imbalance",
"boosting"
] |
121865 | 1 | null | null | 0 | 10 | I have an RL problem where the number of actions depends on the state. Furthermore, each action-value computation requires, in addition to the state, action information in the form of a high-dimensional, continuous vector. It is not feasible to input all of these contextual vectors into the Q-network at once (i.e. embed them as part of the state) and emit q-values for the maximum possible number of actions, mainly due to the strongly fluctuating number of available actions per state and the dimensionality of the contextual vectors.
For regular DQN, I have solved this by feeding each contextual vector, together with the state, into the Q-network one by one. The Q-network then emits just a single value, the q-value. This works fine and performs well. However, I am stuck on using the same approach for Dueling DQN. I have managed to implement a working solution, but it performs much worse than DQN.
My dueling architecture emits the state value $v$ and the advantage $a$, given the state and a contextual vector as input. I then use the target network (without gradient calculation) to do the same for all other actions/contextual vectors. From the obtained state values and advantage values I compute the mean of each and subtract both means from the sum $v + a$, so the final q-value is $q = v + a - a_{mean} - v_{mean}$. Clearly, this differs from the vanilla dueling architecture, because I have no way of computing a pure state value, since I must always input a contextual vector as well.
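For reference, below is a stripped-down PyTorch sketch of the head I am describing (the dimensions are made up, the target-network handling and the training loop are omitted, and the aggregation is exactly the $q = v + a - a_{mean} - v_{mean}$ variant from above):

```
import torch
import torch.nn as nn

class ContextualDuelingHead(nn.Module):
    """Scores one state against a variable-sized set of action context vectors."""
    def __init__(self, state_dim, context_dim, hidden_dim=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim + context_dim, hidden_dim), nn.ReLU())
        self.value_head = nn.Linear(hidden_dim, 1)      # "state" value given the context
        self.advantage_head = nn.Linear(hidden_dim, 1)  # advantage given the context

    def forward(self, state, contexts):
        # state: (state_dim,), contexts: (n_actions, context_dim); n_actions varies per state
        n_actions = contexts.shape[0]
        x = torch.cat([state.expand(n_actions, -1), contexts], dim=-1)
        h = self.body(x)
        v = self.value_head(h).squeeze(-1)      # (n_actions,)
        a = self.advantage_head(h).squeeze(-1)  # (n_actions,)
        # aggregation described above; in my actual setup, v and a for the
        # non-chosen actions come from the target network without gradients
        return v + a - a.mean() - v.mean()

head = ContextualDuelingHead(state_dim=8, context_dim=32)
q_values = head(torch.randn(8), torch.randn(5, 32))  # 5 available actions in this state
print(q_values)
```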
Does anyone have experience with such a scenario? I have yet to find any literature or information on this topic.
| Dueling DQN with varying number of actions | CC BY-SA 4.0 | null | 2023-05-31T12:32:24.350 | 2023-05-31T12:32:24.350 | null | null | 148495 | [
"machine-learning",
"deep-learning",
"reinforcement-learning",
"q-learning",
"dqn"
] |
121866 | 1 | null | null | 0 | 35 |
### Objective
My goal is to fine-tune a pre-trained LLM on a dataset about Manchester United's (MU's) 2021/22 season (they had a poor season). I want to be able to prompt the fine-tuned model with questions such as "How can MU improve?" or "What are MU's biggest weaknesses?". The ideal responses would be insightful/logical and 100+ words long.
### Data
- I will simply use text from the relevant wiki page as my data: https://en.wikipedia.org/wiki/2021%E2%80%9322_Manchester_United_F.C._season
- How should I structure my data? Should it be a list of dictionaries where the keys are the questions and the values are the answers (i.e. a list of question-answer pairs), a long string containing all the text data (for context), or a combination of both? (A minimal sketch using the question-answer-pair format is at the end of this post.)
### Notes
- I have mainly been experimenting with variations of Google's T5 (e.g.: https://huggingface.co/t5-base) which I have imported from the Hugging Face Transformers library
- So far I have only fine-tuned the model on a list of 30 dictionaries (question-answer pairs), e.g.: {"question": "How could Manchester United improve their consistency in the Premier League next season?", "answer": " To improve consistency, Manchester United could focus on strengthening their squad depth to cope with injuries and fatigue throughout the season. Tactical adjustments could also be explored to deal with teams of different strengths and styles."}
- Use of this small dataset (list of 30 dictionaries) has given poor results
### Further Questions and Notes
- Other than increasing the size of my dataset, is my approach sound?
- What would you recommend as a minimum number of dictionaries to train/fine-tune the model on?
- I am also aware that I can tune the hyperparameters to improve performance, but for now I am more concerned about my general approach being logical
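For completeness, a condensed sketch of the fine-tuning setup I mean, using the Hugging Face classes mentioned above (only two question-answer pairs are shown, and the second one is an invented placeholder just to show the format; there is no batching, label padding or validation, and the hyperparameters are arbitrary):

```
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Example of the question-answer-pair data format (second pair is a placeholder).
qa_pairs = [
    {"question": "How could Manchester United improve their consistency in the Premier League next season?",
     "answer": "To improve consistency, Manchester United could focus on strengthening their squad depth..."},
    {"question": "What are Manchester United's biggest weaknesses?",
     "answer": "Placeholder answer illustrating the expected response style..."},
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model.train()
for epoch in range(3):
    for pair in qa_pairs:
        enc = tokenizer("question: " + pair["question"], return_tensors="pt", truncation=True)
        labels = tokenizer(pair["answer"], return_tensors="pt", truncation=True).input_ids
        loss = model(input_ids=enc.input_ids, attention_mask=enc.attention_mask, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Prompting the fine-tuned model afterwards:
model.eval()
prompt = tokenizer("question: How can MU improve?", return_tensors="pt")
out = model.generate(**prompt, max_new_tokens=120)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```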
| Fine-tuning a pre-trained LLM for question-answering | CC BY-SA 4.0 | null | 2023-05-31T12:56:54.683 | 2023-05-31T12:56:54.683 | null | null | 139067 | [
"transformer",
"language-model",
"huggingface",
"text-generation",
"finetuning"
] |