MatteoFasulo committed
Commit b9adb22 · 1 Parent(s): b7ab8d4

New [DEV] version
.gitignore CHANGED
@@ -162,6 +162,9 @@ cython_debug/
 # media folder
 media/*
 
+# background folder
+background/*
+
 # output folder
 output/*
 
.streamlit/config.toml CHANGED
@@ -3,4 +3,7 @@ primaryColor="#BD93F9"
 backgroundColor="#282A36"
 secondaryBackgroundColor="#44475A"
 textColor="#F8F8F2"
-font="sans serif"
+font="sans serif"
+
+[client]
+showSidebarNavigation = false
CONTRIBUTING.md CHANGED
@@ -37,4 +37,4 @@ Our project has a code of conduct to ensure that all contributors feel welcome a
 
 ## Conclusion
 
-We appreciate your interest in contributing to our project and look forward to your contributions. If you have any questions or need any help, please don't hesitate to reach out to us through the issue tracker or by email.
+We appreciate your interest in contributing to our project and look forward to your contributions. If you have any questions or need any help, please don't hesitate to reach out to us through the issue tracker or by email.
LICENSE ADDED
@@ -0,0 +1,201 @@
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!) The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [2023] [Matteo Fasulo]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
README.md CHANGED
@@ -1,15 +1,3 @@
----
-title: Whisper TikTok Demo
-emoji: 📚
-colorFrom: yellow
-colorTo: purple
-sdk: streamlit
-sdk_version: 1.36.0
-app_file: Home.py
-pinned: false
-license: apache-2.0
----
-
 # Introducing Whisper-TikTok 🤖🎥
 
 ## Star History
@@ -48,17 +36,13 @@ Discover Whisper-TikTok, an innovative AI-powered tool that leverages the prowess
 
 ## How it Works
 
-Employing Whisper-TikTok is a breeze: simply modify the [video.json](video.json). The JSON file contains the following fields:
+Employing Whisper-TikTok is a breeze: simply modify [clips.csv](clips.csv). The CSV file contains the following attributes:
 
 - `series`: The name of the series.
 - `part`: The part number of the video.
 - `text`: The text to be spoken in the video.
-- `outro`: The outro text to be spoken in the video.
 - `tags`: The tags to be used for the video.
-
-Summarizing the program's functionality:
-
-> Furnished with a structured JSON dataset containing details such as the **series name**, **video part number**, **video text** and **outro text**, the program orchestrates the synthesis of a video incorporating the provided text and outro. Subsequently, the generated video is stored within the designated `output` folder.
+- `outro`: The outro text to be spoken in the video.
 
 <details>
 <summary>Details</summary>
@@ -80,8 +64,8 @@ The program conducts the **sequence of actions** outlined below:
 
 ## Web App (Online)
 
-There is a Web App hosted thanks to Streamlit which is public available, just click on the link that will take you directly to the Web App.
-> https://convert.streamlit.app
+There is a Web App hosted with Streamlit that is publicly available on Hugging Face; just click the link below and it will take you directly to the Web App.
+> https://huggingface.co/spaces/MatteoFasulo/Whisper-TikTok-Demo
 
 ## Local Installation
 
@@ -127,14 +111,14 @@ choco install ffmpeg
 scoop install ffmpeg
 ```
 
-> Please note that for optimal performance, it's advisable to have a GPU when using the OpenAI Whisper model for speech recognition. However, the program will work without a GPU, but it will run more slowly. This performance difference is because GPUs efficiently handle fp16 computation, while CPUs use fp32 or fp64 (depending on your machine), which are slower.
+> Please note that for optimal performance, it's advisable to have a GPU when using the OpenAI Whisper model for Automatic Speech Recognition (ASR). However, the program will also work without a GPU; it will just run more slowly.
 
 ## Web-UI (Local)
 
 To run the Web-UI locally, execute the following command within your terminal:
 
 ```bash
-streamlit run app.py --server.port=8501 --server.address=0.0.0.0
+streamlit run app.py
 ```
 
 ## Command-Line
@@ -196,7 +180,7 @@ python main.py --url https://www.youtube.com/watch?v=dQw4w9WgXcQ --tts en-US-JennyNeural
 
 - Modify the font color of the subtitles:
 
-```
+```bash
 python main.py --sub_format b --font_color #FFF000 --tts en-US-JennyNeural
 ```
 
@@ -214,36 +198,25 @@ edge-tts --list-voices
 
 ## Additional Resources
 
-### Accelerate Video Creation
-> Contributed by [@duozokker](https://github.com/duozokker)
-
-**reddit2json** is a Python script that transforms Reddit post URLs into a JSON file, streamlining the process of creating video.json files. This tool not only converts Reddit links but also offers functionalities such as translating Reddit post content using DeepL and modifying content through custom OpenAI GPT calls.
-
-#### reddit2json: Directly Convert Reddit Links to JSON
-
-reddit2json is designed to process a list of Reddit post URLs, converting them into a JSON format that can be used directly for video creation. This tool enhances the video creation process by providing a faster and more efficient way to generate video.json files.
-
-[Here is the detailed README for reddit2json](https://github.com/duozokker/reddit2json/blob/main/README.md) which includes instructions for installation, setting up the .env file, example calls, and more.
-
-## Code of Conduct
+### Code of Conduct
 
 Please review our [Code of Conduct](./CODE_OF_CONDUCT.md) before contributing to Whisper-TikTok.
 
-## Contributing
+### Contributing
 
 We welcome contributions from the community! Please see our [Contributing Guidelines](./CONTRIBUTING.md) for more information.
 
-## Upcoming Features
+### Upcoming Features
 
 - Integration with the OpenAI API to generate more advanced responses.
 - Generate content by extracting it from reddit <https://github.com/MatteoFasulo/Whisper-TikTok/issues/22>
 
-## Acknowledgments
+### Acknowledgments
 
 - We'd like to give a huge thanks to [@rany2](https://www.github.com/rany2) for their [edge-tts](https://github.com/rany2/edge-tts) package, which made it possible to use the Microsoft Edge Cloud TTS API with Whisper-TikTok.
 - We also acknowledge the contributions of the Whisper model by [@OpenAI](https://github.com/openai/whisper) for robust speech recognition via large-scale weak supervision.
 - Also [@jianfch](https://github.com/jianfch/stable-ts) for the stable-ts package, which made it possible to use the OpenAI Whisper model with Whisper-TikTok in a stable manner with font color and subtitle format options.
 
-## License
+### License
 
 Whisper-TikTok is licensed under the [Apache License, Version 2.0](https://github.com/MatteoFasulo/Whisper-TikTok/blob/main/LICENSE).
Home.py → app.py RENAMED
@@ -1,6 +1,5 @@
-import os
 import sys
-import json
+import csv
 from pathlib import Path
 import asyncio
 import platform
@@ -10,12 +9,11 @@ import edge_tts
 import streamlit as st
 import pandas as pd
 
-from src.video_creator import VideoCreator
+from src.video_creator import ClipMaker
 from utils import rgb_to_bgr
 
 result = None
 
-
 async def generate_video(
         model,
         tts_voice,
@@ -27,9 +25,8 @@ async def generate_video(
         non_english,
         upload_tiktok,
         verbose,
-        video_json,
-        background_tab,
         video_num,
+        background_tab,
         max_words,
         *args,
         **kwargs):
@@ -49,18 +46,18 @@ async def generate_video(
         max_words=max_words
     )
 
-    async def get_video(video_data, args):
+    async def get_clip(clip, args):
         with st.status("Generating video...", expanded=False) as status:
-            video_creator = VideoCreator(video_data, args)
+            video_creator = ClipMaker(clip=clip, args=args)
 
             status.update(label="Downloading video...")
-            video_creator.download_video()
+            video_creator.download_background_video()
 
             status.update(label="Loading model...")
             video_creator.load_model()
 
             status.update(label="Creating text...")
-            video_creator.create_text()
+            video_creator.merge_clip_text()
 
             status.update(label="Generating audio...")
             await video_creator.text_to_speech()
@@ -73,6 +70,7 @@ async def generate_video(
 
             status.update(label="Integrating subtitles...")
             video_creator.integrate_subtitles()
+            print('HERE x3')
 
             if upload_tiktok:
                 status.update(label="Uploading to TikTok...")
@@ -82,43 +80,24 @@ async def generate_video(
                 state="complete", expanded=False)
             return str(video_creator.mp4_final_video)
 
-    tasks = [get_video(video_json[i], args)
-             for i, name in enumerate(video_num)]
-    results = await asyncio.gather(*tasks)
-
-    if len(results) == 1:
-        return results[0]
+    task = [get_clip(clip, args) for clip in video_num]
+    result = await asyncio.gather(*task)
+
+    if len(result) == 1:
+        return result[0]
     else:
-        return results[-1]
-
+        return result[-1]  # Return the last video generated if multiple videos are generated
 
 @st.cache_data
-def json_to_df(json_file):
-    return pd.read_json(json_file)
+def csv_to_df(csv_file):
+    return pd.read_csv(csv_file, sep='|', encoding='utf-8')
 
 
 @st.cache_data
-def df_to_json(df):
-    try:
-        # Convert the DataFrame to a JSON string
-        json_str = df.to_json(orient='records', indent=4, force_ascii=False)
-
-        # raise an error if the dataframe has no rows (at least one is required)
-        if df.shape[0] == 0:
-            st.error("You must add at least one video to the JSON")
-            return
-
-        # Save the JSON string to a file
-        with open('video.json', 'w', encoding='UTF-8') as f:
-            f.write(json_str)
-
-        st.success("JSON saved successfully!")
-
-    except ValueError as e:
-        st.error("You must fill all the fields in the JSON")
-    except Exception as e:
-        st.error(f"Error saving JSON: {e}")
+def df_to_csv(df):
+    # Save the edited dataframe to the CSV file
+    df.to_csv("clips.csv", index=False, sep='|')
+    return df
 
 
 # Streamlit Config
@@ -148,12 +127,12 @@ async def main():
     st.title("🏆 Whisper-TikTok 🚀")
     st.write("Create a TikTok video with text-to-speech of Microsoft Edge's TTS and subtitles of Whisper model.")
 
-    st.subheader("JSON Editor", help="Here you can edit the JSON file with the videos. Copy-and-paste is supported and compatible with Google Sheets, Excel, and others. You can do bulk-editing by dragging the handle on a cell (similar to Excel)!")
-    st.write("ℹ️ The JSON file is saved automatically when you click the button below. Every time you edit the JSON file, you must click the button to save the changes otherwise they will be lost.")
-    edited_df = st.data_editor(json_to_df('video.json'),
+    st.subheader("Clip Editor", help="Here you can edit the CSV file with the clips data. Copy-and-paste is supported and compatible with Google Sheets, Excel, and others. You can do bulk-editing by dragging the handle on a cell (similar to Excel)!")
+    st.write("ℹ️ The CSV file is saved automatically when you click the button below. Every time you edit the CSV file, you must click the button to save the changes otherwise they will be lost.")
+    edited_df = st.data_editor(csv_to_df("clips.csv"),
                                num_rows="dynamic")
-    st.button("Save JSON", on_click=df_to_json, args=(
-        edited_df,), help="Save the JSON file with the videos")
+    st.button("Save CSV", on_click=df_to_csv, args=(
+        edited_df,), help="Save the CSV file with the clips")
 
     st.divider()
 
@@ -164,15 +143,14 @@ async def main():
     with st.expander("ℹ️ How to use"):
         st.write(
            """
-            1. Choose the video to generate using the dropdown menu.
+            1. Choose the clip to generate using the dropdown menu.
             2. Choose the model to use for the subtitles.
             3. Choose the voice to use for the text-to-speech.
-            4. Choose the background video to use for the TikTok video.
+            4. Choose the background video to use for the clip.
             5. Choose the position of the subtitles.
             6. Choose the font, font color, and font size for the subtitles.
-            7. Choose the URL of the background video to use for the TikTok video.
-            8. Check the "Non-english" checkbox if you want to generate a video in a non-english language.
-            9. Check the "Upload to TikTok" checkbox if you want to upload the video to TikTok using the TikTok session cookie. For this step it is required to have a TikTok account and to be logged in on your browser. Then the required cookies.txt file can be generated using this guide
+            7. Check the "Non-english" checkbox if you want to generate a clip in a non-english language.
+            8. Check the "Upload to TikTok" checkbox if you want to upload the clip to TikTok using the TikTok session cookie. For this step it is required to have a TikTok account and to be logged in on your browser. Then the required cookies.txt file can be generated using the guide specified in the README. The cookies.txt file must be placed in the root folder of the project.
            """)
 
     LEFT, RIGHT = st.columns(2)
@@ -233,9 +211,7 @@ async def main():
 
     st.subheader("Video settings")
 
-    st.write("JSON file with the videos")
-    with open('video.json', encoding='utf-8') as fh:
-        video_json = st.json(json.load(fh), expanded=False)
+    st.write("CSV file with the clips")
 
     # Get the list of files in "background"
     folder_path = Path("background").absolute()
@@ -244,26 +220,26 @@ async def main():
 
     # Create a Dropdown with the list of files
     background_tab = st.selectbox(
-        "Your Backgrounds", files, index=0, help="The background video to use for the TikTok video")
+        "Your Backgrounds", files, index=0, help="The background video to use for the clip")
 
-    # Choose which video to generate
-    videos = json.load(open("video.json"))
+    # Choose which clip to generate the video for
+    with open('clips.csv', 'r', encoding='utf-8') as csvfile:
+        clips = csv.DictReader(csvfile, delimiter='|')
 
-    video_num = st.multiselect(
-        "Video",
-        options=videos,
-        format_func=lambda video: f"{video['series']} - {video['part']}",
-        default=[videos[0]],
-        help="The video to generate. If you want to generate multiple videos, select them as a multiselect."
-    )
+        video_num = st.multiselect(
+            "Video",
+            options=clips,
+            format_func=lambda video: f"{video['series']} - {video['part']}",
+            help="The clip to generate. If you want to generate multiple clips, select them as a multiselect."
+        )
 
-    if st.button("Generate Video"):
+    if st.button("Generate Clip"):
         if not video_num:
-            st.error("You must select at least one video to generate")
+            st.error("You must select at least one clip to generate")
             return
         global result
         result = await generate_video(model, tts_voice, sub_position, font, font_color, font_size,
-                                      url, non_english, upload_tiktok, verbose, videos, background_tab, video_num, max_words)
+                                      url, non_english, upload_tiktok, verbose, video_num, background_tab, max_words)
 
     with RIGHT:
         if result:
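The renamed `app.py` above reads `clips.csv` with `csv.DictReader` using `|` as the delimiter, inside a `with` block. A `DictReader` is a lazy iterator over the open file handle, so any rows that must outlive the block should be copied out with `list(...)`. A minimal sketch of that pattern, using an in-memory `io.StringIO` with illustrative sample data in place of the real `clips.csv`:

```python
import csv
import io

# Stand-in for clips.csv; in the app this would be open('clips.csv', encoding='utf-8').
SAMPLE = "series|part|text|tags|outro\nDemo series|1|Hello|[demo]|Bye\n"

with io.StringIO(SAMPLE) as fh:
    # list(...) materializes the lazy DictReader, so the rows
    # remain usable after the file handle is closed.
    clips = list(csv.DictReader(fh, delimiter='|'))

# Same "series - part" label format the app's multiselect builds via format_func.
labels = [f"{c['series']} - {c['part']}" for c in clips]
print(labels)  # ['Demo series - 1']
```

This also avoids re-reading the file on every Streamlit rerun if the materialized list is cached.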
clips.csv ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ series|part|text|tags|outro
2
+ Crazy facts that you did not know|4|The first person to survive going over Niagara Falls in a barrel was a 63-year-old school teacher|[survive, Niagara Falls, facts]|I hope you enjoyed this video. If you did, please give it a thumbs up and subscribe to my channel. I will see you in the next video.
3
+ Crazy facts that you did not know|6|Did you know that the shortest war in history lasted only 38 minutes? It was between Britain and Zanzibar in 1896|[shortest war, history, 38 minutes, Britain, Zanzibar]|I hope you enjoyed this video. If you did, please give it a thumbs up and subscribe to my channel. I will see you in the next video.
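The `tags` column above stores a bracketed, comma-separated list rather than JSON, so the string needs a small parser before the tags reach the uploader. A hedged sketch (`parse_tags` is a hypothetical helper, not part of the repo):

```python
def parse_tags(raw: str) -> list[str]:
    """Parse a tags field like '[survive, Niagara Falls, facts]' into a list."""
    stripped = raw.strip()
    if stripped.startswith('[') and stripped.endswith(']'):
        stripped = stripped[1:-1]
    return [tag.strip() for tag in stripped.split(',') if tag.strip()]

tags = parse_tags('[survive, Niagara Falls, facts]')
```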
example-reddit-post.txt DELETED
@@ -1,2 +0,0 @@
1
- https://www.reddit.com/r/stories/comments/1afvvvu/my_fiance_had_a_special_name_for_me_when_he_first/
2
- https://www.reddit.com/r/stories/comments/1afzk7v/i_think_my_mom_cheated/
 
 
 
example-video.json DELETED
@@ -1 +0,0 @@
1
- [{"series": "Geständnis eines pensionierten FBI-Agenten", "part": "", "outro": "", "text": "Ich habe früher in einer medizinischen Abteilung eines Krankenhauses im Nordwesten gearbeitet. Die Patienten wurden endoskopischen Untersuchungen unterzogen, für die sie sediert wurden. Bei diesem Fall hatten wir einen älteren Herrn als Patienten, während wir auf den Arzt warteten, unterhielten wir uns freundlich, nichts Ungewöhnliches. Als der Arzt herein kam, fragte er uns sofort, ob der Patient uns von seiner Vergangenheit erzählt hätte. Ich begann damit, die medizinischen Gründe aufzuzählen, warum er das Verfahren hatte, aber der Arzt unterbrach mich und sagte: \"Nein, seine bisherige Arbeit als verdeckter FBI-Mob-Agent.\" Natürlich war meine Reaktion: \"Ohhhhh nein... das muss interessant gewesen sein, mit diesen Mafiagangstern umzugehen.\" Die Antwort des Patienten hat mich bis heute verfolgt. Seine genauen Worte waren:.\"Diese Jungs waren nicht so schlimm wie diese verdammten Politiker.\".Dann erzählte er eine Geschichte davon, wie er Leiter der Sicherheit bei einer \"Kongressveranstaltung\" in den 80er Jahren war. Er sagte, ein anderer Agent habe ihm das Telefon gereicht und gesagt, \"ein Kongressabgeordneter\" wolle mit ihm sprechen..Das Gespräch verlief folgendermaßen:.Kongressabgeordneter: Gibt es dort Frauen?.Agent: Ich weiß nicht, was Sie meinen... aber ja, es sind Frauen hier..Kongressabgeordneter: Wenn ich ankomme, möchte ich eine in mein Zimmer geschickt bekommen... \"NICHT ÄLTER ALS 13\"..Direkt nachdem er das gesagt hatte, gab der Anästhesist ihm die Medikamente, und er verlor das Bewusstsein. Wir standen alle einfach da in Stille, während er das Verfahren durchlief... was zum Teufel hat er uns gerade erzählt?.EDIT: Offenbar bleiben Leute an der Verwendung des Begriffs \"Informant\" hängen... also habe ich ihn entfernt. Für Klarheit, die ich für selbstverständlich hielt.. dieser alte Mann war kein \"Mafia-Informant\" im Sinne eines Mob-Spitzels. 
Er war ein pensionierter FBI-Agent, der undercover mit der Mafia gearbeitet hat. Zu einer völlig anderen Zeit behauptete er, von einem Kongressabgeordneten nach eine-jährigen Mädchen gefragt worden zu sein. Dieser Mann hatte keinen Grund zu lügen... er hat es nicht einmal erwähnt, der Chirurg hat es..EDIT 2:.Die Geschichte ist zu 100% wahr, so wie sie passiert ist..Könnte der alte Mann lügen, ja natürlich. Glaube ich, dass er gelogen hat, NEIN!.Die Untersuchung war eine EGD \"obere Endoskopie\"..Der Eingriff wurde von einem Chirurgen zur postoperativen Überwachung durchgeführt, nicht von einem Gastroenterologen..Ihr Nörgler müsst verstehen, dass es elitäre Pädophilenringe gibt und sie schon seit langer Zeit existieren..Diejenigen, die dies als eine Unterstützung für eine politische Weltanschauung betrachten, liegen falsch. Ich habe keine politische Zugehörigkeit..Für all diejenigen, die behaupten \"Das FBI ist nicht für Sicherheit verantwortlich und arbeitet nicht mit dem Kongress zusammen.\" Ihr nehmt an, dass ihr wisst, in welcher Funktion dieser Mann während seiner gesamten Karriere gearbeitet hat??.https://de.wikipedia.org/wiki/FBI_Police.\"Aufgaben und Verantwortlichkeiten\".\"Die FBI-Polizei kann gelegentlich bei bedeutenden nationalen Sicherheitsveranstaltungen eingesetzt werden, wie Präsidenteneinführungen, dem Super Bowl, Konferenzen von weltweiten Führern sowie großen politischen Parteikonferenzen.\""}]
 
 
example.env DELETED
@@ -1,7 +0,0 @@
1
- REDDIT_USER_AGENT=
2
- REDDIT_CLIENT_ID=
3
- REDDIT_CLIENT_SECRET=
4
-
5
- DEEPL_AUTH_KEY=
6
-
7
- OPENAI_API_KEY=
 
 
 
 
 
 
 
 
main.py CHANGED
@@ -1,6 +1,6 @@
1
  # utils.py
2
  import asyncio
3
- import json
4
  import platform
5
  from dotenv import find_dotenv, load_dotenv
6
  from utils import *
@@ -8,24 +8,18 @@ from utils import *
8
  # msg.py
9
  import msg
10
 
11
- # logger.py
12
- from src.logger import setup_logger
13
-
14
  # arg_parser.py
15
  from src.arg_parser import parse_args
16
 
17
  # video_creator.py
18
- from src.video_creator import VideoCreator
19
 
20
  # Default directory
21
  HOME = Path.cwd()
22
 
23
- # Logging
24
- logger = setup_logger()
25
-
26
- # JSON video file
27
- video_json_path = HOME / 'video.json'
28
- jsonData = json.loads(video_json_path.read_text(encoding='utf-8'))
29
 
30
 
31
  #######################
@@ -33,25 +27,23 @@ jsonData = json.loads(video_json_path.read_text(encoding='utf-8'))
33
  #######################
34
 
35
 
36
- async def main() -> bool:
37
  console.clear() # Clear terminal
38
 
39
  args = await parse_args()
40
- videos = jsonData
41
 
42
- for video in videos:
43
- logger.debug('Creating video')
44
  with console.status(msg.STATUS) as status:
45
- load_dotenv(find_dotenv()) # Optional
46
 
47
- console.log(
48
- f"{msg.OK}Finish loading environment variables")
49
- logger.info('Finish loading environment variables')
 
50
 
51
- video_creator = VideoCreator(video, args)
52
- video_creator.download_video()
53
  video_creator.load_model()
54
- video_creator.create_text()
55
  await video_creator.text_to_speech()
56
  video_creator.generate_transcription()
57
  video_creator.select_background()
@@ -60,7 +52,7 @@ async def main() -> bool:
60
  video_creator.upload_to_tiktok()
61
 
62
  console.log(f'{msg.DONE} {str(video_creator.mp4_final_video)}')
63
- return 0
64
 
65
 
66
  if __name__ == "__main__":
@@ -70,7 +62,7 @@ if __name__ == "__main__":
70
 
71
  loop = asyncio.get_event_loop()
72
 
73
- loop.run_until_complete(main())
74
 
75
  loop.close()
76
 
 
1
  # utils.py
2
  import asyncio
3
+ import csv
4
  import platform
5
  from dotenv import find_dotenv, load_dotenv
6
  from utils import *
 
8
  # msg.py
9
  import msg
10
 
 
 
 
11
  # arg_parser.py
12
  from src.arg_parser import parse_args
13
 
14
  # video_creator.py
15
+ from src.video_creator import ClipMaker
16
 
17
  # Default directory
18
  HOME = Path.cwd()
19
 
20
+ # List of clips to generate
21
+ video_csv = HOME / 'clips.csv'
22
+ with open(video_csv, 'r', encoding='utf-8') as f:
+     video_data = list(csv.DictReader(f, delimiter='|'))  # read all rows, then close the file
 
 
 
23
 
24
 
25
  #######################
 
27
  #######################
28
 
29
 
30
+ async def main(video_list) -> bool:
31
  console.clear() # Clear terminal
32
 
33
  args = await parse_args()
 
34
 
35
+ for video in video_list:
 
36
  with console.status(msg.STATUS) as status:
 
37
 
38
+ # Load env vars (if any)
39
+ load_dotenv(find_dotenv())
40
+
41
+ console.log(f"{msg.OK}Finished loading environment variables")
42
 
43
+ video_creator = ClipMaker(video, args)
44
+ video_creator.download_background_video()
45
  video_creator.load_model()
46
+ video_creator.merge_clip_text()
47
  await video_creator.text_to_speech()
48
  video_creator.generate_transcription()
49
  video_creator.select_background()
 
52
  video_creator.upload_to_tiktok()
53
 
54
  console.log(f'{msg.DONE} {str(video_creator.mp4_final_video)}')
55
+ return True
56
 
57
 
58
  if __name__ == "__main__":
 
62
 
63
  loop = asyncio.get_event_loop()
64
 
65
+ loop.run_until_complete(main(video_list=video_data))
66
 
67
  loop.close()
68
 
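Since Python 3.10, calling `asyncio.get_event_loop()` with no running loop is deprecated; the `run_until_complete`/`close` pair above can be replaced by a single `asyncio.run` call. A sketch with a stubbed `main` (the real per-clip pipeline body is elided):

```python
import asyncio

async def main(video_list) -> bool:
    # Stand-in for the per-clip pipeline in main.py
    for video in video_list:
        pass
    return True

# asyncio.run creates, runs, and closes the event loop in one call
result = asyncio.run(main(video_list=[]))
```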
packages.txt ADDED
@@ -0,0 +1 @@
 
 
1
+ ffmpeg
pages/Reddit.py DELETED
@@ -1,119 +0,0 @@
1
- from pathlib import Path
2
- import random
3
- import streamlit as st
4
-
5
- import praw
6
-
7
- HOME = Path(__name__).parent.absolute()
8
-
9
-
10
- @st.cache_data
11
- def create_instance(*args, **kwargs):
12
- reddit = praw.Reddit(
13
- client_id=kwargs.get('client_id'),
14
- client_secret=kwargs.get('client_secret'),
15
- user_agent=kwargs.get('user_agent'),
16
- )
17
-
18
- subreddit = get_subreddit(reddit=reddit, subreddit=kwargs.get(
19
- 'subreddit'), nsfw=kwargs.get('nsfw'))
20
- submission = get_random_submission(subreddit=subreddit)
21
- st.session_state['submission'] = submission
22
- return True
23
-
24
-
25
- def get_subreddit(*args, **kwargs):
26
- reddit = kwargs.get('reddit')
27
- subreddit = reddit.subreddit(kwargs.get('subreddit'))
28
- nsfw = kwargs.get('nsfw')
29
- try:
30
- st.text(f"Subreddit: {subreddit.display_name}")
31
- except Exception as exception:
32
- st.exception(exception=exception)
33
-
34
- if subreddit.over18 and not nsfw:
35
- st.error(
36
- body='subreddit has NSFW contents but you did not select to scrape them')
37
- return subreddit
38
-
39
-
40
- def get_random_submission(*args, **kwargs):
41
- subreddit = kwargs.get('subreddit')
42
- submissions = [submission for submission in subreddit.hot(limit=10)]
43
- return random.choice(submissions)
44
-
45
-
46
- # Streamlit Config
47
- st.set_page_config(
48
- page_title="Whisper-TikTok",
49
- page_icon="💬",
50
- layout="wide",
51
- initial_sidebar_state="expanded",
52
- menu_items={
53
- 'Get Help': 'https://github.com/MatteoFasulo/Whisper-TikTok',
54
- 'Report a bug': "https://github.com/MatteoFasulo/Whisper-TikTok/issues",
55
- 'About':
56
- """
57
- # Whisper-TikTok
58
- Whisper-TikTok is an innovative AI-powered tool that leverages the prowess of Edge TTS, OpenAI-Whisper, and FFMPEG to craft captivating TikTok videos also with a web application interface!
59
-
60
- Mantainer: https://github.com/MatteoFasulo
61
-
62
- If you find a bug or if you just have questions about the project feel free to reach me at https://github.com/MatteoFasulo/Whisper-TikTok
63
- Any contribution to this project is welcome to improve the quality of work!
64
- """
65
- }
66
- )
67
-
68
- with st.sidebar:
69
- with st.expander("ℹ️ How to use"):
70
- st.write(
71
- """
72
- Before starting you will need to create a new [Reddit API App](https://www.reddit.com/prefs/apps) by selecting `script` (personal use).
73
- Then, after putting the App name, http://localhost as `reddit uri` and `about url`, you have just to insert those values in this dashboard to use the Reddit API for scraping any subreddit.
74
- """)
75
- client_id = st.text_input(label='Reddit Client ID')
76
- client_secret = st.text_input(
77
- label='Reddit Client Secret', type='password')
78
- user_agent = st.text_input(label='Reddit User Agent')
79
-
80
-
81
- st.title("🏆 Whisper-TikTok 🚀")
82
- st.subheader('Reddit section')
83
- st.write("""
84
- This section allows you to generate videos from subreddits.""")
85
-
86
- st.divider()
87
-
88
- LEFT, RIGHT = st.columns(2)
89
-
90
- with LEFT:
91
- num_videos = st.number_input(label='How many videos do you want to generate?',
92
- min_value=1, max_value=10, value=1, step=1)
93
-
94
- subreddit = st.text_input(
95
- label='What Subreddit do you want to use', placeholder='AskReddit')
96
-
97
- nsfw = st.checkbox(label='NSFW content?', value=False)
98
-
99
- max_chars = st.slider(label='Maximum number of characters per line',
100
- min_value=10, max_value=50, value=38, step=1)
101
-
102
- max_words = st.number_input(label='Maximum number of words per line', min_value=1,
103
- max_value=5, value=2, step=1)
104
-
105
- result = st.button('Get subreddit')
106
-
107
- with RIGHT:
108
- if result:
109
- create_instance(client_id=client_id, client_secret=client_secret,
110
- user_agent=user_agent, subreddit=subreddit, nsfw=nsfw)
111
- submission = st.session_state['submission']
112
- title = submission.title
113
- submission.comment_sort = "new"
114
- top_level_comments = list(submission.comments)
115
- max_comments = 10
116
- st.subheader(title)
117
- for comment in top_level_comments[:max_comments]:
118
- st.text(comment.body)
119
- st.divider()
 
 
 
 
 
pages/__init__.py DELETED
File without changes
reddit-post.txt DELETED
File without changes
reddit2json.py DELETED
@@ -1,133 +0,0 @@
1
- import praw
2
- import requests
3
- import json
4
- import os
5
- import re
6
- from tqdm import tqdm
7
-
8
- from openai import OpenAI
9
- from dotenv import load_dotenv
10
- import argparse
11
-
12
-
13
- load_dotenv() # take environment variables from .env.
14
-
15
-
16
- # Parse command-line arguments
17
- parser = argparse.ArgumentParser(description='Process Reddit posts.')
18
- parser.add_argument('--method', type=str, default='chat', choices=['translate', 'chat'],
19
- help='Method to use for processing text. "translate" uses Deepl, "chat" uses GPT-3.5 Turbo.')
20
- parser.add_argument('--lang', type=str, default='EN',
21
- help='Target language for translation. Only used if method is "translate".')
22
- args = parser.parse_args()
23
-
24
-
25
- client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
26
-
27
-
28
- def chat_with_gpt3(prompt):
29
- completion = client.chat.completions.create(
30
- model="gpt-3.5-turbo",
31
- messages=[
32
- {"role": "system", "content": f"You are an assistant that meaningfully translates English Reddit post texts into Language:{args.lang} and optimizes them for text-to-speech. The following is a Reddit post that you should translate and optimize for text-to-speech"},
33
- {"role": "user", "content": prompt}
34
- ]
35
- )
36
- # Extract the content attribute
37
- return completion.choices[0].message.content
38
-
39
-
40
- def get_reddit_post(url):
41
- reddit = praw.Reddit(
42
- client_id=os.getenv("REDDIT_CLIENT_ID"),
43
- client_secret=os.getenv("REDDIT_CLIENT_SECRET"),
44
- user_agent=os.getenv("REDDIT_USER_AGENT"),
45
- )
46
- post = reddit.submission(url=url)
47
- return post.title, post.selftext
48
-
49
-
50
- def translate_to_german(text):
51
- # url = "https://api.deepl.com/v2/translate"
52
- url = "https://api-free.deepl.com/v2/translate"
53
- data = {
54
- "auth_key": os.getenv("DEEPL_AUTH_KEY"),
55
- "text": text,
56
- "target_lang": args.lang,
57
- }
58
- response = requests.post(url, data=data)
59
- response_json = response.json()
60
- return response_json['translations'][0]['text']
61
-
62
-
63
- def process_text(title, text):
64
- if args.method == 'translate':
65
- title = translate_to_german(title)
66
- text = translate_to_german(text)
67
- elif args.method == 'chat':
68
- title = chat_with_gpt3(
69
- f"Translate the following title into Language: {args.lang} and adjust it so that it is optimized for a lecture by a text-to-speech program. Also remove all parentheses such as (29m) or (M23) or (M25) etc. Also remove all edits from the Reddit post so only the pure text remains:" + "\n\n" + "title" + "\n\n" + "Revised title:")
70
- text = chat_with_gpt3(
71
- f"Translate the following text into Language: {args.lang} and adjust it so that it is optimized for a lecture by a text-to-speech program. Also remove all parentheses such as (29m) or (M23) or (M25) or (19) etc. Also remove all edits from the Reddit post so only the pure text remains. Break off the text at the most exciting point to keep the readers very curious:" + "\n\n" + "text" + "\n\n" + "Revised text:")
72
- return title, text
73
-
74
-
75
- def modify_json(title_text, part_text, outro_text, main_text):
76
- data = []
77
- for i in range(len(title_text)):
78
- data.append({
79
- "series": title_text[i],
80
- "part": part_text[i],
81
- "outro": outro_text[i],
82
- "text": main_text[i]
83
- })
84
-
85
- with open('./video.json', 'w', encoding='utf-8') as f:
86
- json.dump(data, f, ensure_ascii=False)
87
-
88
-
89
- def read_file_line_by_line(file_path):
90
- with open(file_path, 'r') as file:
91
- for line in file:
92
- yield line
93
-
94
-
95
- title_text = []
96
- main_text = []
97
-
98
- # Convert generator to list to get length
99
- lines = list(read_file_line_by_line('./reddit-post.txt'))
100
-
101
- for line in tqdm(lines, desc="Processing Reddit posts", unit="post"):
102
- title, text = get_reddit_post(line)
103
- title, text = process_text(title, text)
104
-
105
- title = title.replace('\n\n', '.') # replace '\n\n' with ' ' in title
106
- text = text.replace('\n\n', '.') # replace '\n\n' with ' ' in text
107
-
108
- title = title.replace('&#x200B', '') # replace , with '' in title
109
- text = text.replace('&#x200B', '') # replace , with '' in text
110
-
111
- # remove gender and age indications from title and text
112
- title = re.sub(r'\(?\d+\s*[mwMW]\)?', '', title)
113
- text = re.sub(r'\(?\d+\s*[mwMW]\)?', '', text)
114
-
115
- # remove gender and age indications where M/W is written before the number
116
- title = re.sub(r'\(?\s*[mwMW]\s*\d+\)?', '', title)
117
- text = re.sub(r'\(?\s*[mwMW]\s*\d+\)?', '', text)
118
-
119
- # remove characters not allowed in a Windows filename from title
120
- title = re.sub(r'[<>:"/\\|?*,]', '', title)
121
-
122
- text = text.replace('Edit:', '')
123
- text = text.replace('edit:', '')
124
-
125
- title_text.append(title)
126
- main_text.append(text)
127
-
128
- # Initialize part_text and outro_text after the loop
129
- part_text = [""] * len(title_text)
130
- outro_text = [""] * len(title_text)
131
-
132
-
133
- modify_json(title_text, part_text, outro_text, main_text)
 
 
 
 
 
 
 
 
requirements.txt CHANGED
@@ -14,4 +14,4 @@ tiktok-uploader
14
  streamlit
15
  praw
16
  requests
17
- openai
 
14
  streamlit
15
  praw
16
  requests
17
+ openai
src/subtitle_creator.py CHANGED
@@ -3,16 +3,22 @@ from pathlib import Path
3
  import torch
4
 
5
 
6
- def srt_create(whisper_model, path: str, series: str, part: int, text: str, filename: str, **kwargs) -> bool:
 
 
 
7
  series = series.replace(' ', '_')
8
 
 
9
  srt_path = f"{path}{os.sep}{series}{os.sep}"
10
  srt_filename = f"{srt_path}{series}_{part}.srt"
11
  ass_filename = f"{srt_path}{series}_{part}.ass"
12
 
 
13
  absolute_srt_path = Path(srt_filename).absolute()
14
  absolute_ass_path = Path(ass_filename).absolute()
15
 
 
16
  word_dict = {
17
  'Fontname': kwargs.get('font', 'Arial'),
18
  'Alignment': kwargs.get('sub_position', 5),
@@ -25,13 +31,16 @@ def srt_create(whisper_model, path: str, series: str, part: int, text: str, file
25
  'MarginR': '0',
26
  }
27
 
 
28
  transcribe = whisper_model.transcribe(
29
  filename, regroup=True, fp16=torch.cuda.is_available())
30
 
 
31
  transcribe.split_by_gap(0.5).split_by_length(kwargs.get(
32
  'max_characters')).merge_by_gap(0.15, max_words=kwargs.get('max_words'))
33
 
34
  transcribe.to_srt_vtt(str(absolute_srt_path), word_level=True)
35
  transcribe.to_ass(str(absolute_ass_path), word_level=True,
36
  highlight_color=kwargs.get('font_color'), **word_dict)
 
37
  return ass_filename
 
3
  import torch
4
 
5
 
6
+ def srt_create(whisper_model, path: str, series: str, part: int, filename: str, **kwargs) -> str:
7
+ # Transcribe using Whisper model
8
+
9
+ # Replace whitespaces with underscores for series name
10
  series = series.replace(' ', '_')
11
 
12
+ # Retrieve the folder path of srt and ass files
13
  srt_path = f"{path}{os.sep}{series}{os.sep}"
14
  srt_filename = f"{srt_path}{series}_{part}.srt"
15
  ass_filename = f"{srt_path}{series}_{part}.ass"
16
 
17
+ # Get the absolute path
18
  absolute_srt_path = Path(srt_filename).absolute()
19
  absolute_ass_path = Path(ass_filename).absolute()
20
 
21
+ # Subtitle style dict
22
  word_dict = {
23
  'Fontname': kwargs.get('font', 'Arial'),
24
  'Alignment': kwargs.get('sub_position', 5),
 
31
  'MarginR': '0',
32
  }
33
 
34
+ # Transcribe the .mp3 file using Whisper
35
  transcribe = whisper_model.transcribe(
36
  filename, regroup=True, fp16=torch.cuda.is_available())
37
 
38
+ # Adjustments to the style
39
  transcribe.split_by_gap(0.5).split_by_length(kwargs.get(
40
  'max_characters')).merge_by_gap(0.15, max_words=kwargs.get('max_words'))
41
 
42
  transcribe.to_srt_vtt(str(absolute_srt_path), word_level=True)
43
  transcribe.to_ass(str(absolute_ass_path), word_level=True,
44
  highlight_color=kwargs.get('font_color'), **word_dict)
45
+
46
  return ass_filename
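The `srt_path`/`srt_filename` strings above are assembled with `os.sep` concatenation; the same layout can be expressed with `pathlib`, which also handles the absolute-path step. A sketch mirroring `srt_create`'s naming (the helper is illustrative, not in the repo):

```python
from pathlib import Path

def subtitle_paths(path: str, series: str, part: str) -> tuple[Path, Path]:
    """Build absolute .srt/.ass paths the way srt_create lays them out."""
    series = series.replace(' ', '_')
    base = Path(path) / series
    srt = (base / f"{series}_{part}.srt").absolute()
    ass = (base / f"{series}_{part}.ass").absolute()
    return srt, ass

srt, ass = subtitle_paths("media", "Crazy facts", "4")
```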
src/video_creator.py CHANGED
@@ -1,8 +1,8 @@
1
- import json
2
  from pathlib import Path
3
 
4
  import stable_whisper as whisper
5
- from .logger import setup_logger
 
6
  from .subtitle_creator import srt_create
7
  from .text_to_speech import tts
8
  from .tiktok import upload_tiktok
@@ -11,85 +11,107 @@ from .video_downloader import download_video as youtube_download
11
  from utils import *
12
 
13
  HOME = Path.cwd()
14
- logger = setup_logger()
15
  media_folder = HOME / 'media'
16
 
17
 
18
- class VideoCreator:
19
- def __init__(self, video, args):
 
20
  self.args = args
21
- self.video = video
22
 
23
- self.series = video.get('series', '')
24
- self.part = video.get('part', '')
25
- self.text = video.get('text', '')
26
- self.tags = video.get('tags', list())
27
- self.outro = video.get('outro', '')
 
 
 
28
  self.path = Path(media_folder).absolute()
29
 
30
- def download_video(self, folder='background'):
 
31
  youtube_download(url=self.args.url, folder=folder)
32
- console.log(
33
- f"{msg.OK}Video downloaded from {self.args.url} to {folder}")
34
- logger.info(f"Video downloaded from {self.args.url} to {folder}")
35
 
36
  def load_model(self):
 
37
  model = self.args.model
 
38
  if self.args.model != "large" and not self.args.non_english:
39
  model = self.args.model + ".en"
 
40
  whisper_model = whisper.load_model(model)
41
 
 
42
  self.model = whisper_model
43
  return whisper_model
44
 
45
- def create_text(self):
46
- req_text = f"{self.series} - Part {self.part}.\n{self.text}\n{self.outro}"
 
 
 
47
  series = self.series.replace(' ', '_')
48
  filename = f"{self.path}{os.sep}{series}{os.sep}{series}_{self.part}.mp3"
49
 
 
50
  Path(f"{self.path}{os.sep}{series}").mkdir(parents=True, exist_ok=True)
51
 
 
52
  self.req_text = req_text
53
  self.mp3_file = filename
54
  return req_text, filename
55
 
56
- async def text_to_speech(self):
 
57
  await tts(self.req_text, outfile=self.mp3_file, voice=self.args.tts, args=self.args)
 
58
 
59
- def generate_transcription(self):
 
60
  ass_filename = srt_create(self.model,
61
- self.path, self.series, self.part, self.text, self.mp3_file, **vars(self.args))
 
 
62
  ass_filename = Path(ass_filename).absolute()
63
 
 
64
  self.ass_file = ass_filename
65
  return ass_filename
66
 
67
- def select_background(self):
 
68
  try:
69
- # Background video selected with WebUI
70
- background_mp4 = self.args.mp4_background
71
-
72
- with KeepDir() as keep_dir:
73
- keep_dir.chdir("background")
74
- background_mp4 = Path(background_mp4).absolute()
75
- except AttributeError:
76
- # CLI execution
77
- background_mp4 = random_background()
78
-
79
- background_mp4 = str(Path(background_mp4).absolute())
80
-
 
 
81
  self.mp4_background = background_mp4
82
  return background_mp4
83
 
84
- def integrate_subtitles(self):
85
- final_video = prepare_background(
86
- self.mp4_background, filename_mp3=self.mp3_file, filename_srt=self.ass_file, verbose=self.args.verbose)
87
  final_video = Path(final_video).absolute()
88
 
 
89
  self.mp4_final_video = final_video
90
  return final_video
91
 
92
- def upload_to_tiktok(self):
93
- uploaded = upload_tiktok(str(
94
- self.mp4_final_video), title=f"{self.series} - {self.part}", tags=self.tags, headless=not self.args.verbose)
95
  return uploaded
 
 
1
  from pathlib import Path
2
 
3
  import stable_whisper as whisper
4
+
5
+ # Local imports
6
  from .subtitle_creator import srt_create
7
  from .text_to_speech import tts
8
  from .tiktok import upload_tiktok
 
11
  from utils import *
12
 
13
  HOME = Path.cwd()
 
14
  media_folder = HOME / 'media'
15
 
16
 
17
+ class ClipMaker:
18
+ def __init__(self, clip: dict, args):
19
+ self.clip = clip
20
  self.args = args
 
21
 
22
+ # Fetch clip data or set default values
23
+ self.series = clip.get('series', 'Crazy facts that you did not know')
24
+ self.part = clip.get('part', '1')
25
+ self.text = clip.get('text', 'The first person to survive going over Niagara Falls in a barrel was a 63-year-old school teacher')
26
+ self.tags = clip.get('tags', ['survive', 'Niagara Falls', 'facts'])
27
+ self.outro = clip.get('outro', 'I hope you enjoyed this video. If you did, please give it a thumbs up and subscribe to my channel. I will see you in the next video.')
28
+
29
+ # Set media folder path
30
  self.path = Path(media_folder).absolute()
31
 
32
+ def download_background_video(self, folder='background') -> None:
33
+ # Download background video for the clip
34
  youtube_download(url=self.args.url, folder=folder)
35
+
36
+ console.log(f"{msg.OK}Video downloaded from {self.args.url} to {folder}")
37
+ return None
38
 
39
  def load_model(self):
40
+ # Load Whisper model
41
  model = self.args.model
42
+
43
  if self.args.model != "large" and not self.args.non_english:
44
  model = self.args.model + ".en"
45
+
46
  whisper_model = whisper.load_model(model)
47
 
48
+ # Set model to class attribute
49
  self.model = whisper_model
50
  return whisper_model
51
 
52
+ def merge_clip_text(self) -> tuple:
53
+ # Merge clip series, part, text and outro to create a single text for the clip
54
+ req_text = f"{self.series} - Part {self.part}.\n{self.text}\n{self.outro}" # TODO: allow user to customize this
55
+
56
+ # Remove whitespaces from series name and create a folder for the series
57
  series = self.series.replace(' ', '_')
58
  filename = f"{self.path}{os.sep}{series}{os.sep}{series}_{self.part}.mp3"
59
 
60
+ # Create series folder if it does not exist
61
  Path(f"{self.path}{os.sep}{series}").mkdir(parents=True, exist_ok=True)
62
 
63
+ # Set class attributes for text and mp3 (audio) file
64
  self.req_text = req_text
65
  self.mp3_file = filename
66
  return req_text, filename
67
 
68
+ async def text_to_speech(self) -> None:
69
+ # Convert text to speech using the selected TTS voice
70
  await tts(self.req_text, outfile=self.mp3_file, voice=self.args.tts, args=self.args)
71
+ return None
72
 
73
+ def generate_transcription(self) -> Path:
74
+ # Generate subtitles for the clip using the Whisper model
75
  ass_filename = srt_create(self.model,
76
+ self.path, self.series, self.part, self.mp3_file, **vars(self.args))
77
+
78
+ # Get the absolute path of .ass file
79
  ass_filename = Path(ass_filename).absolute()
80
 
81
+ # Set class attribute for .ass style file of subtitles
82
  self.ass_file = ass_filename
83
  return ass_filename
84
 
85
+ def select_background(self, random: bool = True) -> Path:
86
+ # Select which background video to use for the clip
87
  try:
88
+ # Background video selected with WebUI for Streamlit
89
+ # Add to the path the parent folder (background)
90
+ background_file = self.args.mp4_background
91
+ background_mp4 = Path(HOME / 'background' / background_file) # Concat path
92
+ background_mp4 = background_mp4.absolute()
93
+
94
+ except AttributeError: # Local CLI execution
95
+ if random:
96
+ background_mp4 = random_background()
97
+
98
+ else: # TODO: allow the user to select which background video to use
99
+ raise NotImplementedError("manual background selection is not implemented yet")
100
+
101
+ # Set class attribute for mp4 background file
102
  self.mp4_background = background_mp4
103
  return background_mp4
104
 
105
+ def integrate_subtitles(self) -> Path:
106
+ # Use FFMPEG to burn subtitles into the background video and trim the result to the length of the audio file
107
+ final_video = prepare_background(str(self.mp4_background), filename_mp3=self.mp3_file, filename_srt=str(self.ass_file), verbose=self.args.verbose)
108
  final_video = Path(final_video).absolute()
109
 
110
+ # Set class attribute for mp4 final clip file
111
  self.mp4_final_video = final_video
112
  return final_video
113
 
114
+ def upload_to_tiktok(self) -> bool: # TODO: check if still working with Cookie
115
+ # Automatic upload on TikTok
116
+ uploaded = upload_tiktok(str(self.mp4_final_video), title=f"{self.series} - {self.part}", tags=self.tags, headless=not self.args.verbose)
117
  return uploaded
src/video_downloader.py CHANGED
@@ -1,13 +1,10 @@
1
  import subprocess
2
  from pathlib import Path
3
 
4
- import msg
5
- from utils import KeepDir
6
-
7
  HOME = Path.cwd()
8
 
9
 
10
- def download_video(url: str, folder: str = 'background'):
11
  """
12
  Downloads a video from the given URL and saves it to the specified folder.
13
 
@@ -19,7 +16,5 @@ def download_video(url: str, folder: str = 'background'):
19
  if not directory.exists():
20
  directory.mkdir()
21
 
22
- with KeepDir() as keep_dir:
23
- keep_dir.chdir(folder)
24
- subprocess.run(['yt-dlp', '-f bestvideo[ext=mp4]',
25
- '--restrict-filenames', url], check=True)
 
1
  import subprocess
2
  from pathlib import Path
3
 
 
 
 
4
  HOME = Path.cwd()
5
 
6
 
7
+ def download_video(url: str, folder: str = 'background') -> None:
8
  """
9
  Downloads a video from the given URL and saves it to the specified folder.
10
 
 
16
  if not directory.exists():
17
  directory.mkdir()
18
 
19
+ subprocess.run(['yt-dlp', '-f', 'bestvideo[ext=mp4]', '--restrict-filenames', '--windows-filenames', '-P', str(directory), url], check=True)
20
+ return None
 
 
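yt-dlp receives each argv element separately, so a flag and its value combined into one token (e.g. `'-P /path'`) are not split on the space; they should be separate list items. A sketch of the command builder (the URL is a placeholder):

```python
from pathlib import Path

def build_ytdlp_command(url: str, directory: Path) -> list[str]:
    """Build a yt-dlp argv list; each flag and its value are separate tokens."""
    return [
        'yt-dlp',
        '-f', 'bestvideo[ext=mp4]',   # select the best mp4 video stream
        '--restrict-filenames',
        '--windows-filenames',
        '-P', str(directory),         # download destination folder
        url,
    ]

cmd = build_ytdlp_command('https://example.com/watch', Path('background'))
```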
src/video_prepare.py CHANGED
@@ -9,31 +9,49 @@ HOME = Path.cwd()
9
 
10
 
11
  def prepare_background(background_mp4: str, filename_mp3: str, filename_srt: str, verbose: bool = False) -> str:
 +
 +    # Check if the input files are strings
 +    assert isinstance(background_mp4, str)
 +    assert isinstance(filename_srt, str)
 +    assert isinstance(filename_mp3, str)
 +
 +    # Get the duration of the video and audio files
      video_info = get_info(background_mp4, kind='video')
      video_duration = int(round(video_info.get('duration'), 0))
  
      audio_info = get_info(filename_mp3, kind='audio')
      audio_duration = int(round(audio_info.get('duration'), 0))
  
 +    # Randomly select a start time for the audio file
      ss = random.randint(0, (video_duration-audio_duration))
 +
 +    # Convert the time to HH:MM:SS format
      audio_duration = convert_time(audio_duration)
      if ss < 0:
          ss = 0
  
 -    srt_raw = filename_srt
 -    srt_filename = filename_srt.name
 -    srt_path = filename_srt.parent.absolute()
 -
 +    # Create the output directory if it does not exist
      directory = HOME / 'output'
      if not directory.exists():
          directory.mkdir()
  
 +    # Set the output file path
      outfile = f"{HOME}{os.sep}output{os.sep}output_{ss}.mp4"
  
      if verbose:
          rich_print(
              f"{filename_srt = }\n{background_mp4 = }\n{filename_mp3 = }\n", style='bold green')
  
 +    # Switch inside the subtitle file directory
 +    old_dir = os.getcwd()
 +    os.chdir(Path(filename_srt).parent)
 +
 +    # Extract only the filename from the path
 +    # This is to avoid any issues with the path (see https://stackoverflow.com/questions/71597897/unable-to-parse-option-value-xxx-srt-as-image-size-in-ffmpeg)
 +    # First we switch inside the directory of the subtitle file and then we execute the FFMPEG command with the filename only (not the full path)
 +    filename_srt_name = Path(filename_srt).name
 +
 +    # FFMPEG Command
      args = [
          "ffmpeg",
          "-ss", str(ss),
 @@ -42,7 +60,7 @@ def prepare_background(background_mp4: str, filename_mp3: str, filename_srt: str
          "-i", filename_mp3,
          "-map", "0:v",
          "-map", "1:a",
 -        "-vf", f"crop=ih/16*9:ih, scale=w=1080:h=1920:flags=lanczos, gblur=sigma=2, ass='{srt_raw.absolute()}'",
 +        "-vf", f"crop=ih/16*9:ih, scale=w=1080:h=1920:flags=lanczos, gblur=sigma=2, ass='{filename_srt_name}'",
          "-c:v", "libx264",
          "-crf", "23",
          "-c:a", "aac",
 @@ -55,8 +73,10 @@ def prepare_background(background_mp4: str, filename_mp3: str, filename_srt: str
      if verbose:
          rich_print('[i] FFMPEG Command:\n'+' '.join(args)+'\n', style='yellow')
  
 -    with KeepDir() as keep_dir:
 -        keep_dir.chdir(srt_path)
 -        subprocess.run(args, check=True)
 +    # Execute the FFMPEG command
 +    subprocess.run(args, check=True)
 +
 +    # Go back to old dir
 +    os.chdir(old_dir)
  
      return outfile
utils.py CHANGED
 @@ -1,4 +1,3 @@
 -import datetime
  import os
  from pathlib import Path
  import random
 @@ -8,32 +7,15 @@ import ffmpeg
  from rich.console import Console
  
  import msg
 -from src.logger import setup_logger
 -
  
  console = Console()
 -logger = setup_logger()
 -
 -
 -class KeepDir:
 -    def __init__(self):
 -        self.original_dir = os.getcwd()
 -
 -    def __enter__(self):
 -        return self
 -
 -    def __exit__(self, exc_type, exc_val, exc_tb):
 -        os.chdir(self.original_dir)
 -
 -    def chdir(self, path):
 -        os.chdir(path)
  
  
  def rich_print(text, style: str = ""):
      console.print(text, style=style)
  
  
 -def random_background(folder: str = "background") -> str:
 +def random_background(folder: str = "background") -> Path:
      """
      Returns the filename of a random file in the specified folder.
  
 @@ -41,18 +23,21 @@ def random_background(folder: str = "background") -> str:
      folder(str): The folder containing the files.
  
      Returns:
 -        str: The filename of a randomly selected file in the folder.
 +        Path: The absolute path of the random file.
      """
 +    # Get the absolute path of the folder
      directory = Path(folder).absolute()
 +
 +    # Create the folder if it does not exist
      if not directory.exists():
          directory.mkdir()
  
 -    with KeepDir() as keep_dir:
 -        keep_dir.chdir(folder)
 -        files = os.listdir(".")
 -        random_file = random.choice(files)
 -        return Path(random_file).absolute()
 +    # Select a random background video for the clip inside the folder
 +    random_file = random.choice(os.listdir(directory))
  
 +    # Return the absolute path of the random file adding the folder path
 +    # Concat the folder path with the random file name
 +    return directory / random_file
  
  
  def get_info(filename: str, kind: str):
      global probe
 @@ -61,7 +46,6 @@ def get_info(filename: str, kind: str):
          probe = ffmpeg.probe(filename)
      except ffmpeg.Error as e:
          console.log(f"{msg.ERROR}{e.stderr}")
 -        logger.exception(e.stderr)
          sys.exit(1)
  
      if kind == 'video':
video.json DELETED
 @@ -1,19 +0,0 @@
 -[
 -    {
 -        "series": "Crazy facts that you did not know",
 -        "part": "4",
 -        "outro": "Follow us for more",
 -        "text": "Sku",
 -        "tags": [
 -            "chess",
 -            "facts",
 -            "crazy"
 -        ]
 -    },
 -    {
 -        "series": "Crazy facts that you did not know",
 -        "part": "5",
 -        "outro": "Follow us for more",
 -        "text": "Test"
 -    }
 -]
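
The deleted sample documents the per-video task shape: each entry carries `series`, `part`, `outro`, `text`, and an optional `tags` list (the second entry omits it). A minimal loader that tolerates the optional field might look like this — illustrative only, not project code:

```python
import json

# Inline copy of the removed video.json sample
SAMPLE = """[
  {"series": "Crazy facts that you did not know", "part": "4",
   "outro": "Follow us for more", "text": "Sku",
   "tags": ["chess", "facts", "crazy"]},
  {"series": "Crazy facts that you did not know", "part": "5",
   "outro": "Follow us for more", "text": "Test"}
]"""

def load_tasks(raw: str) -> list[dict]:
    """Parse the task list, defaulting the optional `tags` field to []."""
    tasks = json.loads(raw)
    for task in tasks:
        task.setdefault("tags", [])
    return tasks
```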