ananthakrishnan committed on
Commit 02b8f3f
1 Parent(s): e1a89b3

tech: build LSTM model

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
.gitignore CHANGED
@@ -1 +1,4 @@
- transactify_venv
+ transactify_venv
+ tokenizer.joblib
+ label_encoder.joblib
+ transactify.h5
About.md ADDED
@@ -0,0 +1,64 @@
+ Abstract for Transactify
+
+ Transactify is an LSTM-based model designed to predict the category of an online payment transaction from its description.
+ By analyzing textual inputs such as "Live concert stream on YouTube" or "Coffee at Starbucks," it classifies transactions into categories such as "Movies & Entertainment" or "Food & Dining."
+ This helps users track and organize their spending across various sectors, providing better financial insight and budgeting.
+ Transactify is designed to be trained on real-world transaction data for improved accuracy and generalization.
+
+ Table of Contents
+
+ 1. Data Collection:
+ The dataset consists of 5,000 transaction records generated using ChatGPT, each containing a transaction description and its corresponding category.
+ Example entries include "Live concert stream on YouTube" (Movies & Entertainment) and "Coffee at Starbucks" (Food & Dining).
+ The records cover various spending categories such as Lifestyle, Movies & Entertainment, Food & Dining, and others.
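+
+ For a quick look at the data, a minimal sketch with pandas (the relative path and the column names below follow the CSV and preprocessing code in this repository):
+
+ ```python
+ import pandas as pd
+
+ # Load the generated dataset: one description and one category per row.
+ df = pd.read_csv("Dataset/transaction_data.csv")
+ print(df.head())
+ print(df["Category"].value_counts())
+ ```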
+
+ 2. Data Preprocessing:
+ The preprocessing step applies several natural language processing (NLP) tasks to clean and prepare the text data for model training:
+ - Lowercasing all text.
+ - Removing digits and punctuation using regular expressions (regex).
+ - Tokenizing the cleaned text to convert it into a sequence of tokens.
+ - Applying texts_to_sequences to transform the tokenized words into numerical sequences.
+ - Using pad_sequences to ensure all sequences have the same length for input into the LSTM model.
+ - Label encoding the target categories to convert them into numerical labels.
+ After preprocessing, the data is split into training and testing sets to build and validate the model, as sketched below.
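+
+ A condensed sketch of this pipeline (the full version lives in data_preprocessing.py; `df` is the DataFrame loaded above, and the sequence length and vocabulary size match the defaults used there):
+
+ ```python
+ import re
+ from sklearn.model_selection import train_test_split
+ from sklearn.preprocessing import LabelEncoder
+ from tensorflow.keras.preprocessing.text import Tokenizer
+ from tensorflow.keras.preprocessing.sequence import pad_sequences
+
+ def clean_text(text):
+     text = text.lower()                    # lowercase
+     text = re.sub(r"\d+", " ", text)       # remove digits
+     text = re.sub(r"[^\w\s]", " ", text)   # remove punctuation
+     return text.strip()
+
+ descriptions = df["Transaction Description"].apply(clean_text)
+
+ # Tokenize and pad every description to a fixed length of 10 tokens.
+ tokenizer = Tokenizer(num_words=250, oov_token="<OOV>")
+ tokenizer.fit_on_texts(descriptions)
+ sequences = pad_sequences(tokenizer.texts_to_sequences(descriptions),
+                           maxlen=10, padding="post", truncating="post")
+
+ # Encode the categories as integer labels.
+ label_encoder = LabelEncoder()
+ labels = label_encoder.fit_transform(df["Category"])
+
+ X_train, X_test, y_train, y_test = train_test_split(
+     sequences, labels, test_size=0.2, random_state=42)
+ ```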
+
+ 3. Model Building:
+ Embedding Layer: Converts tokenized transaction descriptions into dense vectors, capturing word semantics and relationships.
+
+ LSTM Layer: Learns sequential patterns from the embedded text, helping the model understand the context and relationships between words over time.
+
+ Dropout Layer: Adds regularization by randomly switching off neurons during training, reducing overfitting and improving the model's generalization.
+
+ Dense Layer with Softmax Activation: Outputs a probability distribution across categories, allowing the model to predict the correct category for each transaction description.
+
+ Model Compilation: Compiled with the Adam optimizer for efficient learning, sparse categorical cross-entropy loss for multi-class classification, and accuracy as the evaluation metric.
+
+ Model Training: The model is trained for 50 epochs with a batch size of 8, using a validation set to monitor performance during training.
+
+ Saving the Model and Preprocessing Objects:
+ The trained model is saved as transactify.h5 for future use.
+ The tokenizer and label encoder used during preprocessing are saved with joblib as tokenizer.joblib and label_encoder.joblib, respectively,
+ ensuring they can be reused for consistent tokenization and label encoding when making predictions on new data.
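+
+ In Keras terms, the architecture described above corresponds to the following sketch (hyperparameters are the defaults used in LSTM_model.py; `tokenizer`, `label_encoder`, and the train/test splits come from the preprocessing step above):
+
+ ```python
+ import joblib
+ from tensorflow.keras.models import Sequential
+ from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout
+
+ vocab_size = tokenizer.num_words + 1  # +1 for the padding index
+
+ model = Sequential([
+     Embedding(input_dim=vocab_size, output_dim=64, input_length=10),  # dense word vectors
+     LSTM(128),                                                        # sequential pattern learning
+     Dropout(0.2),                                                     # regularization
+     Dense(len(label_encoder.classes_), activation="softmax"),         # one probability per category
+ ])
+ model.compile(optimizer="adam",
+               loss="sparse_categorical_crossentropy",
+               metrics=["accuracy"])
+ model.fit(X_train, y_train, epochs=50, batch_size=8,
+           validation_data=(X_test, y_test))
+
+ # Persist the model and the preprocessing objects for later inference.
+ model.save("transactify.h5")
+ joblib.dump(tokenizer, "tokenizer.joblib")
+ joblib.dump(label_encoder, "label_encoder.joblib")
+ ```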
+
+ 4. Prediction:
+ Once trained, the model is used to predict the category of new transaction descriptions.
+ The output is a category label, enabling users to classify their spending based on transaction descriptions.
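+
+ A minimal inference sketch (the interactive version is prediction.py); the same cleaning function, tokenizer, and sequence length used in training must be reused:
+
+ ```python
+ import joblib
+ import numpy as np
+ from tensorflow.keras.models import load_model
+ from tensorflow.keras.preprocessing.sequence import pad_sequences
+
+ model = load_model("transactify.h5")
+ tokenizer = joblib.load("tokenizer.joblib")
+ label_encoder = joblib.load("label_encoder.joblib")
+
+ text = clean_text("Coffee at Starbucks")
+ seq = pad_sequences(tokenizer.texts_to_sequences([text]),
+                     maxlen=10, padding="post", truncating="post")
+ predicted = label_encoder.inverse_transform([np.argmax(model.predict(seq))])
+ print(predicted[0])  # expected: "Food & Dining", per the example above
+ ```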
+
+ 5. Conclusion:
+ The Transactify model effectively categorizes transaction descriptions using LSTM networks.
+ However, to improve the accuracy and reliability of predictions, a larger and more diverse dataset is necessary.
+ Expanding the dataset will help the model generalize better across various spending behaviors and conditions,
+ leading to more precise predictions and deeper insight into spending patterns.
+ Future work should focus on collecting additional data to refine the model's performance and applicability in real-world scenarios.
+
+ ![Expected Output](result.gif)
Dataset/transaction_data.csv CHANGED
@@ -4998,4 +4998,4 @@ Google Play Music,Online Payment
  Yoga class at HealthFit Studio,Lifestyle
  Doctor's appointment payment,Health & Wellness
  New sneakers from Nike,Lifestyle
- Breakfast at Denny's,Food & Dining
+ Breakfast at Denny's,Food & Dining
LSTM_model.py ADDED
@@ -0,0 +1,62 @@
+ # LSTM_model.py
+ import numpy as np
+ from tensorflow.keras.models import Sequential
+ from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout
+ from data_preprocessing import preprocess_data, split_data
+ import joblib  # To save the tokenizer and label encoder
+
+ # Define the LSTM model
+ def build_lstm_model(vocab_size, embedding_dim=64, max_len=10, lstm_units=128, dropout_rate=0.2, output_units=6):
+     model = Sequential()
+     model.add(Embedding(input_dim=vocab_size, output_dim=embedding_dim, input_length=max_len))
+     model.add(LSTM(units=lstm_units, return_sequences=False))
+     model.add(Dropout(dropout_rate))
+     model.add(Dense(units=output_units, activation='softmax'))
+
+     model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
+
+     return model
+
+ # Main function to execute the training process
+ def main():
+     # Path to your data file
+     data_path = r"E:\transactify\transactify\Dataset\transaction_data.csv"
+
+     # Preprocess the data
+     sequences, labels, tokenizer, label_encoder = preprocess_data(data_path)
+
+     # Check if preprocessing succeeded
+     if sequences is not None:
+         print("Data preprocessing successful!")
+
+         # Split the data into training and testing sets
+         X_train, X_test, y_train, y_test = split_data(sequences, labels)
+         print(f"Training data shape: {X_train.shape}, Training labels shape: {y_train.shape}")
+         print(f"Testing data shape: {X_test.shape}, Testing labels shape: {y_test.shape}")
+
+         # Build the LSTM model
+         vocab_size = tokenizer.num_words + 1  # +1 for padding token
+         model = build_lstm_model(vocab_size, max_len=10, output_units=len(label_encoder.classes_))
+
+         # Train the model
+         model.fit(X_train, y_train, epochs=50, batch_size=8, validation_data=(X_test, y_test))
+
+         # Evaluate the model
+         loss, accuracy = model.evaluate(X_test, y_test)
+         print(f"Test Loss: {loss:.4f}, Test Accuracy: {accuracy:.4f}")
+
+         # Save the model
+         model.save('transactify.h5')
+         print("Model saved as 'transactify.h5'")
+
+         # Save the tokenizer and label encoder
+         joblib.dump(tokenizer, 'tokenizer.joblib')
+         joblib.dump(label_encoder, 'label_encoder.joblib')
+         print("Tokenizer and LabelEncoder saved as 'tokenizer.joblib' and 'label_encoder.joblib'")
+
+     else:
+         print("Data preprocessing failed.")
+
+ # Execute the main function
+ if __name__ == "__main__":
+     main()
bert_model.py DELETED
@@ -1,134 +0,0 @@
- # Import Required Libraries
- import torch
- import torch.nn as nn
- from torch.utils.data import DataLoader, TensorDataset
- from transformers import BertModel, AdamW
- from sklearn.metrics import accuracy_score
- import numpy as np
-
- # Import functions from the preprocessing module
- from transactify.data_preprocessing import preprocessing_data, split_data, read_data
-
- # Define a BERT-based classification model
- class BertClassifier(nn.Module):
-     def __init__(self, num_labels, dropout_rate=0.3):
-         super(BertClassifier, self).__init__()
-         self.bert = BertModel.from_pretrained("bert-base-uncased")
-         self.dropout = nn.Dropout(dropout_rate)
-         self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)
-
-     def forward(self, input_ids, attention_mask):
-         outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
-         pooled_output = outputs[1]  # Pooler output (CLS token)
-         output = self.dropout(pooled_output)
-         logits = self.classifier(output)
-         return logits
-
- # Training the model
- # Training the model
- def train_model(model, train_dataloader, val_dataloader, device, epochs=3, lr=2e-5):
-     optimizer = AdamW(model.parameters(), lr=lr)
-     loss_fn = nn.CrossEntropyLoss()
-
-     for epoch in range(epochs):
-         model.train()
-         total_train_loss = 0
-         for step, batch in enumerate(train_dataloader):
-             b_input_ids, b_input_mask, b_labels = batch
-
-             b_input_ids = b_input_ids.to(device)
-             b_input_mask = b_input_mask.to(device)
-             b_labels = b_labels.to(device).long()  # Ensure labels are LongTensor
-
-             model.zero_grad()
-             outputs = model(b_input_ids, b_input_mask)
-
-             loss = loss_fn(outputs, b_labels)
-             total_train_loss += loss.item()
-             loss.backward()
-             optimizer.step()
-
-         avg_train_loss = total_train_loss / len(train_dataloader)
-         print(f"Epoch {epoch+1}, Training Loss: {avg_train_loss}")
-
-         model.eval()
-         total_val_accuracy = 0
-         total_val_loss = 0
-
-         with torch.no_grad():
-             for batch in val_dataloader:
-                 b_input_ids, b_input_mask, b_labels = batch
-                 b_input_ids = b_input_ids.to(device)
-                 b_input_mask = b_input_mask.to(device)
-                 b_labels = b_labels.to(device)
-
-                 outputs = model(b_input_ids, b_input_mask)
-                 loss = loss_fn(outputs, b_labels)
-                 total_val_loss += loss.item()
-
-                 preds = torch.argmax(outputs, dim=1)
-                 total_val_accuracy += (preds == b_labels).sum().item()
-
-         avg_val_accuracy = total_val_accuracy / len(val_dataloader.dataset)
-         avg_val_loss = total_val_loss / len(val_dataloader)
-         print(f"Validation Loss: {avg_val_loss}, Validation Accuracy: {avg_val_accuracy}")
-
- # Testing the model
- def test_model(model, test_dataloader, device):
-     model.eval()
-     all_preds = []
-     all_labels = []
-     with torch.no_grad():
-         for batch in test_dataloader:
-             b_input_ids, b_input_mask, b_labels = batch
-             b_input_ids = b_input_ids.to(device)
-             b_input_mask = b_input_mask.to(device)
-             b_labels = b_labels.to(device)
-
-             outputs = model(b_input_ids, b_input_mask)
-             preds = torch.argmax(outputs, dim=1)
-
-             all_preds.append(preds.cpu().numpy())
-             all_labels.append(b_labels.cpu().numpy())
-
-     all_preds = np.concatenate(all_preds)
-     all_labels = np.concatenate(all_labels)
-     accuracy = accuracy_score(all_labels, all_preds)
-     print(f"Test Accuracy: {accuracy}")
-
- # Main function to train, validate, and test the model
- def main(data_path, epochs=3, batch_size=16):
-     # Read and preprocess data
-     data = read_data(data_path)
-     if data is None:
-         return
-
-     input_ids, attention_masks, labels, labelencoder = preprocessing_data(data)
-     X_train_ids, X_test_ids, X_train_masks, X_test_masks, y_train, y_test = split_data(input_ids, attention_masks, labels)
-
-     # Determine the number of labels
-     num_labels = len(labelencoder.classes_)
-
-     # Create the model
-     model = BertClassifier(num_labels)
-
-     device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-     model.to(device)
-
-     # Create dataloaders
-     train_dataset = TensorDataset(X_train_ids, X_train_masks, y_train)
-     train_dataloader = DataLoader(train_dataset, batch_size=batch_size)
-
-     val_dataset = TensorDataset(X_test_ids, X_test_masks, y_test)
-     val_dataloader = DataLoader(val_dataset, batch_size=batch_size)
-
-     # Train the model
-     train_model(model, train_dataloader, val_dataloader, device, epochs=epochs)
-
-     # Test the model
-     test_dataloader = DataLoader(val_dataset, batch_size=batch_size)
-     test_model(model, test_dataloader, device)
-
- if __name__ == "__main__":
-     data_path = r"E:\transactify\transactify\Dataset\transaction_data.csv"
-     main(data_path)
data_preprocessing.py CHANGED
@@ -1,12 +1,11 @@
- # Import Required Libraries:
+ # data_preprocessing.py
  import numpy as np
  import pandas as pd
-
- import torch
- from transformers import BertTokenizer
+ import re
  from sklearn.preprocessing import LabelEncoder
  from sklearn.model_selection import train_test_split
- import re
+ from tensorflow.keras.preprocessing.text import Tokenizer
+ from tensorflow.keras.preprocessing.sequence import pad_sequences

  # Read the data
  def read_data(path):
@@ -23,95 +22,62 @@ def read_data(path):
          print(f"An error occurred: {e}")
          return None

- # Path to your data file
- data_path = r"E:\transactify\transactify\Dataset\transaction_data.csv"
-
- # Read the data and check if it was loaded successfully
- data = read_data(data_path)
- if data is not None:
-     print("Data loaded successfully:")
-     print(data.head(15))
- else:
-     print("Data loading failed. Exiting...")
-     exit()
-
  # Cleaning the text
  def clean_text(text):
-     text = text.lower()  # Converting uppercase to lowercase
-     text = re.sub(r"\d+", " ", text)  # Removing digits in the text
-     text = re.sub(r"[^\w\s]", " ", text)  # Removing punctuations
+     text = text.lower()  # Convert uppercase to lowercase
+     text = re.sub(r"\d+", " ", text)  # Remove digits
+     text = re.sub(r"[^\w\s]", " ", text)  # Remove punctuations
      text = text.strip()  # Remove extra spaces
      return text

- # Preprocessing the data
- def preprocessing_data(df, max_length=20):
-     tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
-
-     input_ids = []
-     attention_masks = []
-
-     # Ensure the dataframe has the required columns
-     if "Transaction Description" not in df.columns or "Category" not in df.columns:
-         raise ValueError("The required columns 'Transaction Description' and 'Category' are missing from the dataset.")
-
-     for description in df["Transaction Description"]:
-         cleaned_text = clean_text(description)
-
-         # Debugging print statements
-         # print(f"Original Description: {description}")
-         # print(f"Cleaned Text: {cleaned_text}")
-
-         # Only tokenize if the cleaned text is not empty
-         if cleaned_text:
-             encoded_dict = tokenizer.encode_plus(
-                 cleaned_text,
-                 add_special_tokens=True,  # Add special tokens for BERT
-                 max_length=max_length,
-                 pad_to_max_length=True,
-                 return_attention_mask=True,
-                 return_tensors="pt",
-                 truncation=True
-             )
-
-             input_ids.append(encoded_dict['input_ids'])  # Append input IDs
-             attention_masks.append(encoded_dict['attention_mask'])  # Append attention masks
-         else:
-             print("Cleaned text is empty, skipping...")
-
-     # Debugging output to check sizes
-     print(f"Total input_ids collected: {len(input_ids)}")
-     print(f"Total attention_masks collected: {len(attention_masks)}")
-
-     if not input_ids:
-         raise ValueError("No input_ids were collected. Check the cleaning process.")
-
-     # Concatenating the list of tensors to form a single tensor
-     input_ids = torch.cat(input_ids, dim=0)
-     attention_masks = torch.cat(attention_masks, dim=0)
-
-     # Encoding the labels
-     labelencoder = LabelEncoder()
-     labels = labelencoder.fit_transform(df["Category"])
-     labels = torch.tensor(labels, dtype=torch.long)  # Convert labels to LongTensor
-
-     return input_ids, attention_masks, labels, labelencoder
-
- # Split the data into train and test sets
- def split_data(input_ids, attention_masks, labels, test_size=0.2, random_state=42):
-     X_train_ids, X_test_ids, y_train, y_test = train_test_split(
-         input_ids, labels, test_size=test_size, random_state=random_state
-     )
-
-     X_train_masks, X_test_masks = train_test_split(
-         attention_masks, test_size=test_size, random_state=random_state
-     )
-
-     return X_train_ids, X_test_ids, X_train_masks, X_test_masks, y_train, y_test
-
- # Preprocess the data and split into train and test sets
- input_ids, attention_masks, labels, labelencoder = preprocessing_data(data)
- X_train_ids, X_test_ids, X_train_masks, X_test_masks, y_train, y_test = split_data(input_ids, attention_masks, labels)
-
- # Output the sizes of the splits for confirmation
- print(f"Training set size: {X_train_ids.shape[0]}")
- print(f"Test set size: {X_test_ids.shape[0]}")
+ # Main preprocessing function
+ def preprocess_data(file_path, max_len=10, vocab_size=250):
+     # Read the data
+     df = read_data(file_path)
+     if df is None:
+         print("Data loading failed.")
+         return None, None, None, None
+
+     # Clean the text
+     df['Transaction Description'] = df['Transaction Description'].apply(clean_text)
+
+     # Initialize the tokenizer
+     tokenizer = Tokenizer(num_words=vocab_size, oov_token="<OOV>")
+     tokenizer.fit_on_texts(df['Transaction Description'])
+
+     # Convert texts to sequences and pad them
+     sequences = tokenizer.texts_to_sequences(df['Transaction Description'])
+     padded_sequences = pad_sequences(sequences, maxlen=max_len, padding='post', truncating='post')
+
+     # Initialize LabelEncoder and encode the labels
+     label_encoder = LabelEncoder()
+     labels = label_encoder.fit_transform(df['Category'])
+
+     return padded_sequences, labels, tokenizer, label_encoder
+
+ # Train-test split function
+ def split_data(sequences, labels, test_size=0.2, random_state=42):
+     X_train, X_test, y_train, y_test = train_test_split(sequences, labels, test_size=test_size, random_state=random_state)
+     return X_train, X_test, y_train, y_test
+
+ # Main function to execute preprocessing
+ def main():
+     # Path to your data file
+     data_path = r"E:\transactify\transactify\Dataset\transaction_data.csv"
+
+     # Preprocess the data
+     sequences, labels, tokenizer, label_encoder = preprocess_data(data_path)
+
+     # Check if preprocessing succeeded
+     if sequences is not None:
+         print("Data preprocessing successful!")
+         # Split the data into training and testing sets
+         X_train, X_test, y_train, y_test = split_data(sequences, labels)
+         print(f"Training data shape: {X_train.shape}, Training labels shape: {y_train.shape}")
+         print(f"Testing data shape: {X_test.shape}, Testing labels shape: {y_test.shape}")
+     else:
+         print("Data preprocessing failed.")
+
+ # Execute the main function
+ if __name__ == "__main__":
+     main()
prediction.py ADDED
@@ -0,0 +1,57 @@
+ # prediction.py
+ import numpy as np
+ import pandas as pd
+ from tensorflow.keras.models import load_model
+ from tensorflow.keras.preprocessing.text import Tokenizer
+ from tensorflow.keras.preprocessing.sequence import pad_sequences
+ import joblib
+ import re
+
+ # Function to clean the input text
+ def clean_text(text):
+     text = text.lower()
+     text = re.sub(r"\d+", " ", text)
+     text = re.sub(r"[^\w\s]", " ", text)
+     text = text.strip()
+     return text
+
+ # Load the model, tokenizer, and label encoder
+ def load_resources(model_path, tokenizer_path, label_encoder_path):
+     model = load_model(model_path)
+     tokenizer = joblib.load(tokenizer_path)
+     label_encoder = joblib.load(label_encoder_path)
+     return model, tokenizer, label_encoder
+
+ # Function to make predictions
+ def predict(model, tokenizer, label_encoder, input_text, max_len=10):  # max_len must match the sequence length used in training
+     cleaned_text = clean_text(input_text)
+     sequence = tokenizer.texts_to_sequences([cleaned_text])
+     padded_sequence = pad_sequences(sequence, maxlen=max_len, padding='post', truncating='post')
+
+     # Make prediction
+     prediction = model.predict(padded_sequence)
+     predicted_class = np.argmax(prediction, axis=1)
+
+     # Decode the label
+     predicted_label = label_encoder.inverse_transform(predicted_class)
+
+     return predicted_label[0]
+
+ # Main function for running predictions
+ def main():
+     # Paths to your resources
+     model_path = 'transactify.h5'  # Update with the correct path if needed
+     tokenizer_path = 'tokenizer.joblib'  # Update with the correct path if needed
+     label_encoder_path = 'label_encoder.joblib'  # Update with the correct path if needed
+
+     # Load resources
+     model, tokenizer, label_encoder = load_resources(model_path, tokenizer_path, label_encoder_path)
+
+     # Input for prediction
+     input_text = input("Enter a transaction description for prediction: ")
+     predicted_category = predict(model, tokenizer, label_encoder, input_text)
+     print(f"The predicted category is: {predicted_category}")
+
+ # Execute the main function
+ if __name__ == "__main__":
+     main()
requirements.txt CHANGED
@@ -1,8 +1,4 @@
1
  numpy
2
  pandas
3
  tensorflow
4
- transformers
5
  scikit-learn
6
- torch
7
- torchvision
8
- torchaudio
 
1
  numpy
2
  pandas
3
  tensorflow
 
4
  scikit-learn
 
 
 
setup.md CHANGED
@@ -1,59 +1,53 @@
- ## Install Git LFS
- ```
- brew install git-lfs
- ```
- or download from https://git-lfs.github.com/
-
- ## Update global git config
- ```
- $ git lfs install
- ```
-
- ## Update system git config
- ```
- $ git lfs install --system
- ```
-
- ## Clone the Repo
-
- ### Entire Clone
- ```
- git clone https://huggingface.co/webslate/transactify
- ```
-
- ### Light Clone
- ```
- GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/webslate/transactify
- ```
-
- ## For Pushing the Code
-
- > Refer to https://huggingface.co/blog/password-git-deprecation
-
- ### Set the Remote URL
- ```
- $: git remote set-url origin https://<user_name>:<token>@huggingface.co/<repo_path>
- ```
- ### Token Creation
-
- > Go to Settings > Access Tokens > Create new token >
- Choose Write Tab (3rd one) / go here https://huggingface.co/settings/tokens/new?tokenType=write
-
- ## Create Virtual Environment
-
- ```
- create a Virtual Environment for Transactify project...
- python -m venv transactify_venv
-
- To activate environment..
- go to cmd ..
- type >> cd transactify_venv
- >> cd scripts
- >> activate
- ```
- ## Installing Required Libaries.
-
- to install required libaries...
- go to cmd..
- type >>pip install -r requirements.txt
+ # Steps to Run the Model
+
+ 1. **Clone the Repository**:
+    Open your command line interface (CLI) and clone the repository:
+    ```bash
+    git clone https://huggingface.co/webslate/transactify
+    ```
+
+ 2. **Create the Virtual Environment**:
+    Navigate to the project directory and create a virtual environment:
+    ```bash
+    python -m venv transactify_venv
+    ```
+
+ 3. **Activate the Virtual Environment**:
+    To activate the virtual environment, follow these steps:
+    - Open your command line interface (CLI).
+    - Type the following commands:
+    ```bash
+    cd transactify_venv
+    cd Scripts
+    activate
+    ```
+
+ 4. **Install Required Libraries**:
+    After activating the virtual environment, install the necessary libraries:
+    ```bash
+    pip install -r requirements.txt
+    ```
+
+ 5. **Run the Data Preprocessing Code**:
+    Execute the data preprocessing script:
+    ```bash
+    python data_preprocessing.py
+    ```
+
+ 6. **Run the LSTM Model Code**:
+    Train the LSTM model by executing:
+    ```bash
+    python LSTM_model.py
+    ```
+
+ 7. **Generate the H5 File**:
+    After training completes, the script saves the trained model as `transactify.h5`, along with `tokenizer.joblib` and `label_encoder.joblib`.
+
+ 8. **Run the Prediction Code**:
+    To make predictions using the trained model, run:
+    ```bash
+    python prediction.py
+    ```
+
+ Following these steps sets up and runs the Transactify model for predicting transaction categories from descriptions.