tech: transactify base changes (#1)
opened by ai-venkat-r

Files changed:
- .gitattributes +0 -1
- .gitignore +0 -6
- About.md +0 -64
- LSTM_model.py +0 -62
- README.md +3 -63
- __pycache__/data_preprocessing.cpython-312.pyc +0 -0
- __pycache__/datapreprocessing.cpython-312.pyc +0 -0
- __pycache__/inference.cpython-312.pyc +0 -0
- config.json +0 -35
- data_preprocessing.py +0 -83
- data_set/transaction_data.csv +0 -0
- main.py +0 -57
- model.py +0 -25
- requirements.txt +0 -4
- setup.md +40 -53
.gitattributes CHANGED
@@ -33,4 +33,3 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
-*.mp4 filter=lfs diff=lfs merge=lfs -text
.gitignore DELETED
@@ -1,6 +0,0 @@
-transactify_venv
-tokenizer.joblib
-label_encoder.joblib
-transactify.h5
-venv
-.venv
About.md DELETED
@@ -1,64 +0,0 @@
-Abstract for Transactify......
-
-Transactify is an LSTM-based model designed to predict the category of online payment transactions from their descriptions.
-By analyzing textual inputs like "Live concert stream on YouTube" or "Coffee at Starbucks," it classifies transactions into categories such as "Movies & Entertainment" or "Food & Dining."
-This model helps users track and organize their spending across various sectors, providing better financial insights and budgeting.
-Transactify is trained on real-world transaction data for improved accuracy and generalization.
-
-Table of contents....
-
-1.Data Collection:
-The dataset consists of 5,000 transaction records generated using ChatGPT, each containing a transaction description and its corresponding category.
-Example entries include descriptions like "Live concert stream on YouTube" (Movies & Entertainment) and "Coffee at Starbucks" (Food & Dining).
-These records cover various spending categories such as Lifestyle, Movies & Entertainment, Food & Dining, and others.
-
-
-2.Data Preprocessing:
-The preprocessing step involves several natural language processing (NLP) tasks to clean and prepare the text data for model training.
-These include:
-Lowercasing all text.
-Removing digits and punctuation using regular expressions (regex).
-Tokenizing the cleaned text to convert it into a sequence of tokens.
-Applying text_to_sequences to transform the tokenized words into numerical sequences.
-Using pad_sequences to ensure all sequences have the same length for input into the LSTM model.
-Label encoding the target categories to convert them into numerical labels.
-After preprocessing, the data is split into training and testing sets to build and validate the model.
-
-
-
-3.Model Building:
-Embedding Layer: Converts tokenized transaction descriptions into dense vectors, capturing word semantics and relationships.
-
-LSTM Layer: Learns sequential patterns from the embedded text, helping the model understand the context and relationships between words over time.
-
-Dropout Layer: Introduces regularization by randomly turning off neurons during training, reducing overfitting and improving the model's generalization.
-
-Dense Layer with Softmax Activation: Outputs a probability distribution across categories, allowing the model to predict the correct category for each transaction description.
-
-Model Compilation: Compiled with the Adam optimizer for efficient learning, sparse categorical cross-entropy loss for multi-class classification, and accuracy as the evaluation metric.
-
-Model Training: The model is trained for 50 epochs with a batch size of 8, using a validation set to monitor performance and adjust during training.
-
-Saving the Model and Preprocessing Objects:
-
-The trained model is saved as transactify.h5 for future use.
-The tokenizer and label encoder used during preprocessing are saved using joblib as tokenizer.joblib and label_encoder.joblib, respectively,
-ensuring they can be reused for consistent tokenization and label encoding when making predictions on new data.
-
-
-
-4.Prediction:
-Once trained, the model is used to predict the category of new transaction descriptions.
-The output provides the category label, enabling users to classify their spending based on transaction descriptions.
-
-
-
-5.Conclusion:
-The Transactify model effectively categorizes transaction descriptions using LSTM networks.
-However, to improve the accuracy and reliability of predictions, a larger and more diverse dataset is necessary.
-Expanding the dataset will help the model generalize better across various spending behaviors and conditions.
-This enhancement will lead to more precise predictions, enabling users to gain deeper insights into their spending patterns.
-Future work should focus on collecting additional data to refine the model's performance and applicability in real-world scenarios.
-
-
-![Excepted Output:](result.gif)
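The preprocessing that About.md describes corresponds to standard Keras text utilities. As a minimal sketch (not part of the PR), assuming a vocabulary of 250 words and the sequence length of 10 used elsewhere in the repo, the steps on a single description might look like this:

```python
import re
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Clean: lowercase, drop digits and punctuation, trim whitespace (as described above).
text = "Live concert stream on YouTube #42!"
cleaned = re.sub(r"[^\w\s]", " ", re.sub(r"\d+", " ", text.lower())).strip()

# Tokenize and convert to a fixed-length integer sequence for the LSTM.
tokenizer = Tokenizer(num_words=250, oov_token="<OOV>")
tokenizer.fit_on_texts([cleaned])  # in the repo this is fitted on the full corpus
sequence = tokenizer.texts_to_sequences([cleaned])
padded = pad_sequences(sequence, maxlen=10, padding="post", truncating="post")
print(padded.shape)  # (1, 10)
```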
LSTM_model.py DELETED
@@ -1,62 +0,0 @@
-# LSTM_model.py
-import numpy as np
-from tensorflow.keras.models import Sequential
-from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout
-from data_preprocessing import preprocess_data, split_data
-import joblib  # To save the tokenizer and label encoder
-
-# Define the LSTM model
-def build_lstm_model(vocab_size, embedding_dim=64, max_len=10, lstm_units=128, dropout_rate=0.2, output_units=6):
-    model = Sequential()
-    model.add(Embedding(input_dim=vocab_size, output_dim=embedding_dim, input_length=max_len))
-    model.add(LSTM(units=lstm_units, return_sequences=False))
-    model.add(Dropout(dropout_rate))
-    model.add(Dense(units=output_units, activation='softmax'))
-
-    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
-
-    return model
-
-# Main function to execute the training process
-def main():
-    # Path to your data file
-    data_path = r"E:\transactify\transactify\transactify\transactify\transactify\data_set\transaction_data.csv"
-
-    # Preprocess the data
-    sequences, labels, tokenizer, label_encoder = preprocess_data(data_path)
-
-    # Check if preprocessing succeeded
-    if sequences is not None:
-        print("Data preprocessing successful!")
-
-        # Split the data into training and testing sets
-        X_train, X_test, y_train, y_test = split_data(sequences, labels)
-        print(f"Training data shape: {X_train.shape}, Training labels shape: {y_train.shape}")
-        print(f"Testing data shape: {X_test.shape}, Testing labels shape: {y_test.shape}")
-
-        # Build the LSTM model
-        vocab_size = tokenizer.num_words + 1  # +1 for padding token
-        model = build_lstm_model(vocab_size, max_len=10, output_units=len(label_encoder.classes_))
-
-        # Train the model
-        model.fit(X_train, y_train, epochs=50, batch_size=8, validation_data=(X_test, y_test))
-
-        # Evaluate the model
-        loss, accuracy = model.evaluate(X_test, y_test)
-        print(f"Test Loss: {loss:.4f}, Test Accuracy: {accuracy:.4f}")
-
-        # Save the model
-        model.save('transactify.h5')
-        print("Model saved as 'transactify.h5'")
-
-        # Save the tokenizer and label encoder
-        joblib.dump(tokenizer, 'tokenizer.joblib')
-        joblib.dump(label_encoder, 'label_encoder.joblib')
-        print("Tokenizer and LabelEncoder saved as 'tokenizer.joblib' and 'label_encoder.joblib'")
-
-    else:
-        print("Data preprocessing failed.")
-
-# Execute the main function
-if __name__ == "__main__":
-    main()
README.md CHANGED
@@ -1,63 +1,3 @@
----
-license: mit
-
-- en
----
-
-## What is Transactify?
-Transactify is an LSTM-based model designed to predict the category of online payment transactions from their descriptions.
-By analyzing textual inputs like "Live concert stream on YouTube" or "Coffee at Starbucks," it classifies transactions into categories such as "Movies & Entertainment" or "Food & Dining."
-This model helps users track and organize their spending across various sectors, providing better financial insights and budgeting.
-Transactify is trained on real-world transaction data for improved accuracy and generalization.
-
-## Table of contents
-## 1. Data Collection
-The dataset consists of **5,000 transaction records** generated using ChatGPT, each containing a transaction description and its corresponding category.
-
-Example entries include:
-- "Live concert stream on YouTube" (Movies & Entertainment)
-- "Coffee at Starbucks" (Food & Dining)
-
-These records cover various spending categories such as **Lifestyle**, **Movies & Entertainment**, **Food & Dining**, and others.
-
----
-
-## 2. Data Preprocessing
-The preprocessing step involves several natural language processing (NLP) tasks to clean and prepare the text data for model training. These include:
-
-- Lowercasing all text.
-- Removing digits and punctuation using regular expressions (regex).
-- Tokenizing the cleaned text to convert it into a sequence of tokens.
-- Applying `text_to_sequences` to transform the tokenized words into numerical sequences.
-- Using `pad_sequences` to ensure all sequences have the same length for input into the LSTM model.
-- Label encoding the target categories to convert them into numerical labels.
-
-After preprocessing, the data is split into training and testing sets to build and validate the model.
-
----
-
-## 3. Model Building
-- **Embedding Layer**: Converts tokenized transaction descriptions into dense vectors, capturing word semantics and relationships.
-
-- **LSTM Layer**: Learns sequential patterns from the embedded text, helping the model understand the context and relationships between words over time.
-
-- **Dropout Layer**: Introduces regularization by randomly turning off neurons during training, reducing overfitting and improving the model's generalization.
-
-- **Dense Layer with Softmax Activation**: Outputs a probability distribution across categories, allowing the model to predict the correct category for each transaction description.
-
-### Model Compilation
-- Compiled with the Adam optimizer for efficient learning.
-- Sparse categorical cross-entropy loss for multi-class classification.
-- Accuracy as the evaluation metric.
-
-### Model Training
-The model is trained for **50 epochs** with a batch size of **8**, using a validation set to monitor performance and adjust during training.
-
-### Saving the Model and Preprocessing Objects
-- The trained model is saved as `transactify.h5` for future use.
-- The tokenizer and label encoder used during preprocessing are saved using joblib as `tokenizer.joblib` and `label_encoder.joblib`, respectively, ensuring they can be reused for consistent tokenization and label encoding when making predictions on new data.
-
----
-
-## 4. Prediction
-Once trained
+---
+license: mit
+---
__pycache__/data_preprocessing.cpython-312.pyc DELETED (binary file, 3.55 kB)

__pycache__/datapreprocessing.cpython-312.pyc DELETED (binary file, 4.3 kB)

__pycache__/inference.cpython-312.pyc DELETED (binary file, 2.21 kB)
config.json DELETED
@@ -1,35 +0,0 @@
-{
-  "model_type": "custom",
-  "architectures": ["LSTM"],
-  "library_name": "tensorflow",
-  "task_specific_params": {
-    "text-classification": {
-      "vocab_size": 500,
-      "embedding_dim": 64,
-      "hidden_size": 64,
-      "num_layers": 2,
-      "dropout_rate": 0.2,
-      "max_sequence_length": 10
-    }
-  },
-  "training_params": {
-    "batch_size": 8,
-    "epochs": 50,
-    "loss_function": "sparse_categorical_crossentropy",
-    "optimizer": "adam",
-    "metrics": ["accuracy"]
-  },
-  "train_data_size": 5000,
-  "id2label": {
-    "0": "Lifestyle",
-    "1": "Movies & Entertainment",
-    "2": "Food & Dining",
-    "3": "Others"
-  },
-  "label2id": {
-    "Lifestyle": 0,
-    "Movies & Entertainment": 1,
-    "Food & Dining": 2,
-    "Others": 3
-  }
-}
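For context, a minimal sketch (not from the PR) of how the `id2label` map in the removed config.json could be used to decode a classifier output; the probability vector below is a made-up example:

```python
import json
import numpy as np

# Load the config and pull out the id -> label map.
with open("config.json") as f:
    config = json.load(f)
id2label = config["id2label"]

# Decode a softmax output from the classifier (hypothetical scores).
probs = np.array([0.10, 0.70, 0.15, 0.05])
predicted_id = int(np.argmax(probs))
print(id2label[str(predicted_id)])  # keys are strings in the JSON, e.g. "Movies & Entertainment"
```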
data_preprocessing.py DELETED
@@ -1,83 +0,0 @@
-# data_preprocessing.py
-import numpy as np
-import pandas as pd
-import re
-from sklearn.preprocessing import LabelEncoder
-from sklearn.model_selection import train_test_split
-from tensorflow.keras.preprocessing.text import Tokenizer
-from tensorflow.keras.preprocessing.sequence import pad_sequences
-
-# Read the data
-def read_data(path):
-    try:
-        df = pd.read_csv(path)
-        if df.empty:
-            print("The file is empty.")
-            return None
-        return df
-    except FileNotFoundError:
-        print(f"File not found at: {path}")
-        return None
-    except Exception as e:
-        print(f"An error occurred: {e}")
-        return None
-
-# Cleaning the text
-def clean_text(text):
-    text = text.lower()                   # Convert uppercase to lowercase
-    text = re.sub(r"\d+", " ", text)      # Remove digits
-    text = re.sub(r"[^\w\s]", " ", text)  # Remove punctuations
-    text = text.strip()                   # Remove extra spaces
-    return text
-
-# Main preprocessing function
-def preprocess_data(file_path, max_len=10, vocab_size=250):
-    # Read the data
-    df = read_data(file_path)
-    if df is None:
-        print("Data loading failed.")
-        return None, None, None, None
-
-    # Clean the text
-    df['Transaction Description'] = df['Transaction Description'].apply(clean_text)
-
-    # Initialize the tokenizer
-    tokenizer = Tokenizer(num_words=vocab_size, oov_token="<OOV>")
-    tokenizer.fit_on_texts(df['Transaction Description'])
-
-    # Convert texts to sequences and pad them
-    sequences = tokenizer.texts_to_sequences(df['Transaction Description'])
-    padded_sequences = pad_sequences(sequences, maxlen=max_len, padding='post', truncating='post')
-
-    # Initialize LabelEncoder and encode the labels
-    label_encoder = LabelEncoder()
-    labels = label_encoder.fit_transform(df['Category'])
-
-    return padded_sequences, labels, tokenizer, label_encoder
-
-# Train-test split function
-def split_data(sequences, labels, test_size=0.2, random_state=42):
-    X_train, X_test, y_train, y_test = train_test_split(sequences, labels, test_size=test_size, random_state=random_state)
-    return X_train, X_test, y_train, y_test
-
-# Main function to execute preprocessing
-def main():
-    # Path to your data file
-    data_path = r"E:\transactify\transactify\Dataset\transaction_data.csv"
-
-    # Preprocess the data
-    sequences, labels, tokenizer, label_encoder = preprocess_data(data_path)
-
-    # Check if preprocessing succeeded
-    if sequences is not None:
-        print("Data preprocessing successful!")
-        # Split the data into training and testing sets
-        X_train, X_test, y_train, y_test = split_data(sequences, labels)
-        print(f"Training data shape: {X_train.shape}, Training labels shape: {y_train.shape}")
-        print(f"Testing data shape: {X_test.shape}, Testing labels shape: {y_test.shape}")
-    else:
-        print("Data preprocessing failed.")
-
-# Execute the main function
-if __name__ == "__main__":
-    main()
data_set/transaction_data.csv DELETED (diff too large to render; see raw diff)
main.py DELETED
@@ -1,57 +0,0 @@
-# main.py
-import numpy as np
-import pandas as pd
-from tensorflow.keras.models import load_model
-from tensorflow.keras.preprocessing.text import Tokenizer
-from tensorflow.keras.preprocessing.sequence import pad_sequences
-import joblib
-import re
-
-# Function to clean the input text
-def clean_text(text):
-    text = text.lower()
-    text = re.sub(r"\d+", " ", text)
-    text = re.sub(r"[^\w\s]", " ", text)
-    text = text.strip()
-    return text
-
-# Load the model, tokenizer, and label encoder
-def load_resources(model_path, tokenizer_path, label_encoder_path):
-    model = load_model(model_path)
-    tokenizer = joblib.load(tokenizer_path)
-    label_encoder = joblib.load(label_encoder_path)
-    return model, tokenizer, label_encoder
-
-# Function to make predictions
-def predict(model, tokenizer, label_encoder, input_text, max_len=50):
-    cleaned_text = clean_text(input_text)
-    sequence = tokenizer.texts_to_sequences([cleaned_text])
-    padded_sequence = pad_sequences(sequence, maxlen=max_len, padding='post', truncating='post')
-
-    # Make prediction
-    prediction = model.predict(padded_sequence)
-    predicted_class = np.argmax(prediction, axis=1)
-
-    # Decode the label
-    predicted_label = label_encoder.inverse_transform(predicted_class)
-
-    return predicted_label[0]
-
-# Main function for running predictions
-def main():
-    # Paths to your resources
-    model_path = 'transactify.h5'  # Update with the correct path if needed
-    tokenizer_path = 'tokenizer.joblib'  # Update with the correct path if needed
-    label_encoder_path = 'label_encoder.joblib'  # Update with the correct path if needed
-
-    # Load resources
-    model, tokenizer, label_encoder = load_resources(model_path, tokenizer_path, label_encoder_path)
-
-    # Input for prediction
-    input_text = input("Enter a transaction description for prediction: ")
-    predicted_category = predict(model, tokenizer, label_encoder, input_text)
-    print(f"The predicted category is: {predicted_category}")
-
-# Execute the main function
-if __name__ == "__main__":
-    main()
model.py DELETED
@@ -1,25 +0,0 @@
-from tensorflow.keras.models import load_model
-import joblib
-from tensorflow.keras.preprocessing.sequence import pad_sequences
-import numpy as np
-import re
-
-# Load the model, tokenizer, and label encoder
-model = load_model("transactify.h5")
-tokenizer = joblib.load("tokenizer.joblib")
-label_encoder = joblib.load("label_encoder.joblib")
-
-def clean_text(text):
-    text = text.lower()
-    text = re.sub(r"\d+", "", text)
-    text = re.sub(r"[^\w\s]", "", text)
-    return text.strip()
-
-def predict(text):
-    cleaned_text = clean_text(text)
-    sequence = tokenizer.texts_to_sequences([cleaned_text])
-    padded_sequence = pad_sequences(sequence, maxlen=100)
-    prediction = model.predict(padded_sequence)
-    predicted_label = np.argmax(prediction, axis=1)
-    category = label_encoder.inverse_transform(predicted_label)
-    return {"category": category[0]}
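A hypothetical usage sketch of the removed `model.py` helper, assuming that file and the saved artifacts (`transactify.h5`, `tokenizer.joblib`, `label_encoder.joblib`) are present in the working directory; the printed category is only an illustrative value:

```python
# Importing model.py loads the saved model, tokenizer, and label encoder at import time.
from model import predict

result = predict("Coffee at Starbucks")
print(result)  # e.g. {"category": "Food & Dining"}
```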
requirements.txt DELETED
@@ -1,4 +0,0 @@
-numpy
-pandas
-tensorflow
-scikit-learn
setup.md CHANGED
@@ -1,53 +1,40 @@
-(old lines 1-40 are not rendered in the diff view)
-python LSTM_model.py
-```
-
-7. **Generate the H5 File**:
-   After training, you can generate the model file (`transactify.h5`).
-
-8. **Run the Prediction Code**:
-   To make predictions using the trained model, type:
-   ```bash
-   python main.py
-   ```
-
-Following these steps will set up and run the Transactify model for predicting transaction categories based on descriptions.
+## Install Git LFS
+```
+brew install git-lfs
+```
+or download from https://git-lfs.github.com/
+
+## Update global git config
+```
+$ git lfs install
+```
+
+## Update system git config
+```
+$ git lfs install --system
+```
+
+## Clone the Repo
+
+### Entire Clone
+```
+git clone https://huggingface.co/webslate/transactify
+```
+
+### Light Clone
+```
+GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/webslate/transactify
+```
+
+## For Pushing the Code
+
+> Refer to https://huggingface.co/blog/password-git-deprecation
+
+### Set the Remote URL
+```
+$: git remote set-url origin https://<user_name>:<token>@huggingface.co/<repo_path>
+```
+
+### Token Creation
+
+> Go to Settings > Access Tokens > Create new token >
+Choose Write Tab (3rd one) / go here https://huggingface.co/settings/tokens/new?tokenType=write