SeoyeonPark1223 committed on
Commit 4fac4a3 · verified · 1 Parent(s): 5934cfc

Upload 4 files

README.md CHANGED
@@ -1,25 +1,126 @@
  ---
- license: apache-2.0
  language:
- - ko
- metrics:
- - accuracy
- - recall
  tags:
- - multimodal
- - audio
- - video
- - homecam
- - transformer
- datasets:
- - SilverAvocado/SilverDataset
  ---
- ## Load Model For Inference
  ```python
  # Download the model from the Hugging Face Hub
  MODEL_PATH="silver_assistant_transformer.keras"
- model_path = hf_hub_download(repo_id="SilverAvocado/silverAssistant", filename=MODEL_PATH)

  # Load the model with its custom classes
  model = load_model(
@@ -32,4 +133,14 @@ model = load_model(

  y_pred = np.argmax(model.predict([X_test1, X_test2, X_test3, X_test4]), axis=1)
  accuracy = accuracy_score(y_test, y_pred)
- print(f"Test Accuracy: {accuracy:.4f}")
  ---
  language:
+ - ko
+ - en
  tags:
+ - transformer
+ - video
+ - audio
+ - homecam
+ - multimodal
+ - senior
+ - yolo
+ - mediapipe
  ---
 
+ # Model Card for `Silver-Multimodal`
+ 
+ <!-- Provide a quick summary of what the model is/does. -->
+ 
+ ## Model Details
+ 
+ - The Silver-Multimodal model integrates audio and video modalities for real-time situation classification.
+ - This architecture allows it to process diverse inputs simultaneously and identify scenarios such as daily activities, violence, and fall events with high precision.
+ - The model leverages a Transformer-based architecture to combine features extracted from audio (MFCC) and video (MediaPipe keypoints), enabling robust multimodal learning.
+ 
+ - Key Highlights:
+   - Multimodal Integration: Combines YOLO, MediaPipe, and MFCC features for comprehensive situation understanding.
+   - Middle Fusion: The extracted features are fused and passed through the Transformer model for context-aware classification.
+   - Output Classes:
+     - 0 Daily Activities: Normal indoor movements like walking or sitting.
+     - 1 Violence: Aggressive behaviors or physical conflicts.
+     - 2 Fall Down: Sudden fall or collapse.
+ 
+ ![Multimodal Model](./pics/multimodal-overview.png)
+ 
+ ### Model Description
+ 
+ <!-- Provide a longer summary of what this model is. -->
+ 
+ - **Developed during:** NIPA-Google (2024.10.23-2024.11.08), Kosa Hackathon (2024.12.9)
+ - **Model type:** Multimodal Transformer Model
+ - **API used:** Keras
+ - **Dataset:** [HuggingFace Silver-Multimodal-Dataset](https://huggingface.co/datasets/SilverAvocado/Silver-Multimodal-Dataset)
+ - **Code:** [GitHub Silver Model Code](https://github.com/silverAvocado/silver-model-code)
+ - **Language(s) (NLP):** Korean, English
+ 
+ ## Training Details
+ ### Dataset Preparation
+ 
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+ 
+ - **HuggingFace:** [HuggingFace Silver-Multimodal-Dataset](https://huggingface.co/datasets/SilverAvocado/Silver-Multimodal-Dataset)
+ - **Description:**
+   - The dataset is designed to support the development of machine learning models that detect daily activities, violence, and fall-down scenarios from combined audio and video sources.
+   - The preprocessing pipeline uses audio feature extraction, human keypoint detection, and relative positional encoding to generate a unified representation for training and inference (a minimal sketch follows this list).
+   - Classes:
+     - 0: Daily - Normal indoor activities
+     - 1: Violence - Aggressive behaviors
+     - 2: Fall Down - Sudden falls or collapses
+ 
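The exact preprocessing code lives in the linked GitHub repository; the snippet below is only a minimal sketch of the two feature extractors described above, assuming `librosa` for MFCCs and MediaPipe Pose for keypoints. MediaPipe Pose returns 33 landmarks while the card keeps 30 keypoints per person; that subset selection and the relative-coordinate step are not reproduced here.

```python
# Minimal sketch of the per-clip feature extractors (not the released pipeline).
import numpy as np
import librosa          # audio feature extraction (assumed dependency)
import mediapipe as mp  # pose keypoint detection (assumed dependency)

def extract_mfcc(wav_path: str, n_mfcc: int = 13) -> np.ndarray:
    """Return an (n_mfcc, frames) MFCC matrix for one audio clip."""
    y, sr = librosa.load(wav_path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

def extract_keypoints(frame_rgb: np.ndarray) -> np.ndarray:
    """Return a flat (33 * 3,) array of pose landmark (x, y, z) values for one RGB frame."""
    with mp.solutions.pose.Pose(static_image_mode=True) as pose:
        result = pose.process(frame_rgb)
    if result.pose_landmarks is None:  # no person detected in this frame
        return np.zeros(33 * 3, dtype=np.float32)
    return np.array(
        [[lm.x, lm.y, lm.z] for lm in result.pose_landmarks.landmark],
        dtype=np.float32,
    ).ravel()
```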
+ ### Model Architecture
+ 
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+ 
+ - **Model Structure:**
+   ![Multimodal Model Structure](./pics/model-structure.png)
+ 
+ - Input Shape and Division (see the shape sketch after this list)
+   1. Input Shape:
+      - The input shape for each branch is (N, 100, 750), where:
+        - N: Batch size (number of sequences in a batch).
+        - 100: Temporal dimension (time steps).
+        - 750: Feature dimension, representing the extracted features for each input modality.
+   2. Why Four Inputs?
+      - The model processes four distinct inputs, each corresponding to a specific set of features derived from video keypoints. They are divided as follows:
+      - Input 1, Input 2, Input 3:
+        - For each detected individual (up to 3 people), the model extracts 30 keypoints using MediaPipe.
+        - Each keypoint contains 3 features (x, y, z), resulting in 30 × 3 = 90 features per frame.
+      - Input 4:
+        - Represents relative positional coordinates calculated from the 10 most important key joints (e.g., shoulders, elbows, knees) for all 3 individuals.
+        - These relative coordinates capture spatial relationships among individuals, which is crucial for contextual understanding.
+ 
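To make those shapes concrete, here is a minimal sketch of how the four test inputs used in the inference section line up with this description. The batch size and the random data are placeholders, and how the per-frame keypoint values are packed into the 750-wide feature axis is not spelled out in this card.

```python
import numpy as np

N, T, F = 8, 100, 750  # batch size (placeholder), time steps, feature width from the card

# One (N, 100, 750) array per branch: three person-specific keypoint streams plus
# one stream of relative joint coordinates. Random values stand in for real features.
X_test1 = np.random.rand(N, T, F).astype("float32")  # person 1 keypoints
X_test2 = np.random.rand(N, T, F).astype("float32")  # person 2 keypoints
X_test3 = np.random.rand(N, T, F).astype("float32")  # person 3 keypoints
X_test4 = np.random.rand(N, T, F).astype("float32")  # relative joint coordinates

# With a loaded model (see "Load Model For Inference" below), predictions are class ids 0/1/2:
# y_pred = np.argmax(model.predict([X_test1, X_test2, X_test3, X_test4]), axis=1)
```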
+ - Detailed Explanation of Architecture (a minimal Keras sketch follows this list)
+   1. Positional Encoding:
+      - Adds temporal position information to the input embeddings, allowing the Transformer to consider the sequence order.
+   2. Multi-Head Attention:
+      - Captures interdependencies and relationships across the temporal dimension within each input.
+      - Ensures the model focuses on the most relevant frames or segments of the sequence.
+   3. Dropout:
+      - Applies dropout regularization to prevent overfitting and improve generalization.
+   4. LayerNormalization:
+      - Normalizes the output of each layer to stabilize training and accelerate convergence.
+   5. Dense Layers:
+      - Extract higher-level features after the attention mechanism.
+      - The first dense layer processes the attended features, followed by another dropout and dense layer to refine them further.
+   6. AttentionPooling1D:
+      - Combines the outputs from all four inputs into a unified representation.
+      - Aggregates temporal features using an attention mechanism, emphasizing the most important segments across modalities.
+   7. Final Dense Layers:
+      - The combined representation is passed through dense layers and a softmax activation for final classification into the target classes:
+        - 0: Daily Activities
+        - 1: Violence
+        - 2: Fall Down
+ 
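The released architecture, including the custom `AttentionPooling1D` layer, is defined in the linked GitHub repository; the sketch below only mirrors the seven steps above in plain Keras so the data flow is easier to follow. The hyperparameters are illustrative guesses rather than the released values, and global average pooling stands in for the custom attention pooling.

```python
# Illustrative Keras sketch of the four-branch Transformer, not the released model.
import tensorflow as tf
from tensorflow.keras import layers, Model

T, F, NUM_CLASSES = 100, 750, 3            # time steps, feature width, output classes (from the card)
D_MODEL, NUM_HEADS, DROPOUT = 128, 4, 0.2  # assumed hyperparameters

class PositionalEmbedding(layers.Layer):
    """Projects the input features and adds a learned positional embedding (step 1)."""
    def __init__(self, seq_len, d_model, **kwargs):
        super().__init__(**kwargs)
        self.proj = layers.Dense(d_model)
        self.pos_emb = layers.Embedding(input_dim=seq_len, output_dim=d_model)
        self.seq_len = seq_len

    def call(self, x):
        positions = tf.range(start=0, limit=self.seq_len, delta=1)
        return self.proj(x) + self.pos_emb(positions)

def transformer_branch(x):
    """One per-input branch: attention over time (steps 2-4) plus dense refinement (step 5)."""
    h = PositionalEmbedding(T, D_MODEL)(x)
    attn = layers.MultiHeadAttention(num_heads=NUM_HEADS, key_dim=D_MODEL)(h, h)
    h = layers.LayerNormalization()(h + layers.Dropout(DROPOUT)(attn))
    h = layers.Dense(D_MODEL, activation="relu")(h)
    h = layers.Dropout(DROPOUT)(h)
    return layers.Dense(D_MODEL, activation="relu")(h)

inputs = [layers.Input(shape=(T, F)) for _ in range(4)]  # three keypoint streams + relative coordinates
fused = layers.Concatenate(axis=-1)([transformer_branch(x) for x in inputs])  # middle fusion
# Step 6 uses a custom AttentionPooling1D layer in the released model; plain global
# average pooling is used here only as a simple temporal-aggregation stand-in.
pooled = layers.GlobalAveragePooling1D()(fused)
hidden = layers.Dense(64, activation="relu")(pooled)     # step 7: final dense layers
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(hidden)
model = Model(inputs, outputs)
model.summary()
```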
+ - **Model Performance:**
+   ![Confusion Matrix](./pics/confusion-matrix.png)
+ 
+   - Confusion Matrix Insights (see the metrics sketch after this list):
+     - Class 0 (Daily): 100% accuracy with no misclassifications.
+     - Class 1 (Violence): 96.96% accuracy with minimal false positives or false negatives.
+     - Class 2 (Fall Down): 98.67% accuracy, highlighting the model's robustness in detecting falls.
+     - The overall accuracy is 98.37%, indicating the model's reliability for real-time applications.
+ 
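For reference, the per-class figures above correspond to the diagonal of a row-normalized confusion matrix. A minimal way to recompute them from the predictions produced in the inference section below:

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# y_test and y_pred as produced by the snippet in "Load Model For Inference".
cm = confusion_matrix(y_test, y_pred, labels=[0, 1, 2])
per_class = cm.diagonal() / cm.sum(axis=1)  # fraction of each true class predicted correctly
for name, score in zip(["Daily", "Violence", "Fall Down"], per_class):
    print(f"{name}: {score:.2%}")
print(f"Overall accuracy: {accuracy_score(y_test, y_pred):.2%}")
```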
+ ## Model Usage
+ - `Silver Assistant` Project
+   - [GitHub SilverAvocado](https://github.com/silverAvocado)
+ 
+ ## Load Model For Inference
  ```python
  # Download the model from the Hugging Face Hub
  MODEL_PATH="silver_assistant_transformer.keras"
+ model_path = hf_hub_download(repo_id="SilverAvocado/Silver-Multimodal", filename=MODEL_PATH)

  # Load the model with its custom classes
  model = load_model(

  y_pred = np.argmax(model.predict([X_test1, X_test2, X_test3, X_test4]), axis=1)
  accuracy = accuracy_score(y_test, y_pred)
+ print(f"Test Accuracy: {accuracy:.4f}")
+ ```
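The diff above elides the `custom_objects` block passed to `load_model` as well as the imports. As a convenience, a self-contained version might look like the sketch below; the custom layer classes live in the linked GitHub repository and are not reproduced here, and `X_test1`-`X_test4` / `y_test` are assumed to be prepared as described in the training details.

```python
# Self-contained version of the snippet above (custom layers omitted, see the GitHub repo).
import numpy as np
from huggingface_hub import hf_hub_download
from tensorflow.keras.models import load_model
from sklearn.metrics import accuracy_score

MODEL_PATH = "silver_assistant_transformer.keras"
model_path = hf_hub_download(repo_id="SilverAvocado/Silver-Multimodal", filename=MODEL_PATH)

# Pass the repository's custom layer classes via custom_objects when loading.
model = load_model(model_path)  # add custom_objects={...} with the repo's custom layers

y_pred = np.argmax(model.predict([X_test1, X_test2, X_test3, X_test4]), axis=1)
accuracy = accuracy_score(y_test, y_pred)
print(f"Test Accuracy: {accuracy:.4f}")
```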
+ 
+ ## Conclusion
+ - The Silver-Multimodal model demonstrates strong capabilities in multimodal learning for situation classification.
+ - Its effective integration of audio and video modalities provides:
+   1. High Accuracy: Consistent performance across all classes.
+   2. Real-World Applicability: Suitable for applications like healthcare monitoring, safety systems, and smart homes.
+   3. Scalable Architecture: The Transformer-based design allows future enhancements and additional modality integration.
+ 
+ - These results make the model a solid foundation for safety-critical projects like `Silver Assistant` that require reliable situation awareness.
pics/confusion-matrix.png ADDED
pics/model-structure.png ADDED
pics/multimodal-overview.png ADDED