CHANNEL_NAME
stringclasses
1 value
URL
stringlengths
43
43
TITLE
stringlengths
19
90
DESCRIPTION
stringlengths
475
4.65k
TRANSCRIPTION
stringlengths
0
20.1k
SEGMENTS
stringlengths
2
30.8k
Two Minute Papers
https://www.youtube.com/watch?v=nE5iVtwKerA
OpenAI’s Whisper Learned 680,000 Hours Of Speech!
❤️ Check out Anyscale and try it for free here: https://www.anyscale.com/papers 📝 The paper "Robust Speech Recognition via Large-Scale Weak Supervision" is available here: https://openai.com/blog/whisper/ Try it out (note: the Scholarly Stampede appears to be in order - we barely published the video and there are already longer wait times): https://huggingface.co/spaces/openai/whisper Source code: https://github.com/openai/whisper Lex transcriptions by Andrej Karpathy: https://karpathy.ai/lexicap/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Chapters: 0:00 Teaser 0:25 More features 0:40 Speed talking transcription 1:00 Accent transcription 1:28 96 more languages! 1:50 What about other methods? 2:05 680,000 hours! 2:14 Is this any good? 3:20 As good as humans? 4:32 The ultimate test! 5:15 What is all this good for? 6:13 2 more good news 6:40 So simple! 6:55 More training data Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #openai
And dear fellow scholars, this is two minute papers with Dr. Karojol Ney Fahir. OpenAI's new Whisper AI is able to listen to what we say and transcribe it. Your voice goes in and this text comes out like this. This is incredible and it is going to change everything. As you see, when running through these few sentences, it works with flying colors. Well, stay tuned because you will see if we were able to break it later this video. And can it be as good as a human? We will test that too. But first, let's try to break it with this speed talking person. This is the micro-recement presenting the most midget miniature motorcade of micro-recement. Each one has dramatic details for a facial precision paint job. Plus incredible micro-recement pocket place. That's just a police station, fire station, restaurant, Wow, that's going to be hard. So, let's see the result. Wow, that is incredible. And that's not all it can do so much more. For instance, it does accents too. Here is an example. One of the most famous line marks on the board of the three holes. And the method is that metal and the midgetions spot one hole. So good. Now, when talking about accents, I am here too. And I will try my luck later in this video as well. The results were interesting to say the least. But wait, this knows not only English, but scientists at OpenAI said, let's throw in 96 other languages too. Here is French, for example. Whisper is a system of reconnaissance, automatic to the parole, entrenez sur six sans pedis. And as you see, it also translates it into English. So cool. Now, this is all well and good. But wait a second, transcription APIs already exist. For instance, here on YouTube, you can also request those for many videos. So, what is new here? Why publish this paper? Is this better? Also, what do we get for the 680,000 hours of training? Well, let's have a look. This better be good. Wow! What happened here? This is not a good start. For the first site, it seems that we are not getting a great deal out of this AI at all. Look, here between the 20 to 40 decibel signal to noise range, which means a good quality speed signal, it is the highest. So, is it the best AI around for transcription? Well, not quite. You see, what we are also looking at is the word error rate here, which is subject to minimization. That means the smaller, the better. We noted that 20 to 40 decibels is considered good quality signal. Here, it has a higher error rate than previous techniques. But wait, look at that. When going to 5 to 10 decibels and below, these signals are so bad that we can barely tell them from noise. For instance, imagine sitting in a really loud pub and here is where whisper really shines. Here, it is the best. And this is a good paper. So, we have plenty more data on how it compares to a bunch of previous techniques. Look, once again, we have the word error rate. This is subject to minimization. Lower is better. From A to D, you see other previous automatic speech recognition systems and it beats all of them. And what do we have here? Now, hold on to your papers because can that really be, is it as good as a human? That can't be, right? Well, the answer is yes, it can be as good as a human. Kind of. You see, it outperforms these professional human transcription services and is at the very least competitive with the best ones. An AI that transcribes as well as a professional human does. Wow, this truly feels like we are living in a science fiction movie. What a time to be alive. Humans, okay, it is as good as many humans. That's all right, but does this pass the ultimate test for a speech AI? What would that be? Of course, that is the carotest. That would be me speaking with the crazy accent. Let's see, dear fellow scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir. And dear fellow scanners, I don't know what's going on here. It got my name perfectly, perhaps that is a sign of a super intelligence that is in the making. Wow, the capitalization of two-minute papers is all right too. Now, dear fellow scanners, let's try this again. And now that is what I expected to happen. The regular speech part is transcribed well, and it flabbed my name. So, no super intelligence yet, at least not reliably. So, what is all this good for? Well, imagine that you are looking at this amazing interview from Lex Friedman on super intelligence. And it is one and a half hours. Yes, that is very short for Lex. Now, we know that they talk about immortality, but where exactly? Well, that's not a problem anymore, look. Andre Carpethi ran Whisper on every single episode of Lex's podcast, and there we go. This is the relevant part about immortality. That is incredible. Of course, you fellow scanners know that YouTube also helps us with its own transcription feature, or we can also look at the chapter markers, however, not all video and audio is on YouTube. And here comes the kicker, Whisper works everywhere. How cool is that? And here comes the best part. Two amazing news. One, it is open source, and two, not only that, but you can try it now too. I put a link to both of these in the video description, but as always, please be patient. Whenever we link to something, you fellow scanners are so excited to try it out. We have crashed a bunch of webpages before. This is what we call the scholarly stampede. So I hear you asking, okay, but what is under the hood here? If you have a closer look at the paper, you see that it is using a simple algorithm, a transformer with a vast dataset, and it can get very, very forwarded. You see here that it makes great use of that 680,000 hours of human speech, and languages other than English, and translation improves a great deal if we add more, and even the English part improves a bit too. So this indicates that if we gave it even more data, it might improve it even more. And don't forget, it can deal with noisy data really well. So adding more might not be as big of a challenge, and it is already as good as many professional humans. Wow, I can only imagine what this will be able to do just a couple more papers down the line. What a time to be alive. This episode is brought to you by AnySkill, the company behind Ray, the fastest growing open source framework for scalable AI and scalable Python. Thousands of organizations use Ray, including open AI, Uber, Amazon, Spotify, Netflix, and more. Ray less developers iterate faster by providing common infrastructure for scaling data in just and pre-processing, machine learning training, deep learning, hyperparameter tuning, model serving, and more. All while integrating seamlessly with the rest of the machine learning ecosystem. AnySkill is a fully managed Ray platform that allows teams to bring products to market faster by eliminating the need to manage infrastructure and by enabling new AI capabilities. Ray and AnySkill can do recommendation systems time series forecasting, document understanding, image processing, industrial automation, and more. Go to anyscale.com slash papers and try it out today. Our thanks to AnySkill for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.8, "text": " And dear fellow scholars, this is two minute papers with Dr. Karojol Ney Fahir."}, {"start": 4.8, "end": 11.36, "text": " OpenAI's new Whisper AI is able to listen to what we say and transcribe it."}, {"start": 11.36, "end": 16.4, "text": " Your voice goes in and this text comes out like this."}, {"start": 16.4, "end": 20.88, "text": " This is incredible and it is going to change everything."}, {"start": 20.88, "end": 26.64, "text": " As you see, when running through these few sentences, it works with flying colors."}, {"start": 26.64, "end": 32.480000000000004, "text": " Well, stay tuned because you will see if we were able to break it later this video."}, {"start": 32.480000000000004, "end": 35.760000000000005, "text": " And can it be as good as a human?"}, {"start": 35.760000000000005, "end": 37.68, "text": " We will test that too."}, {"start": 37.68, "end": 42.08, "text": " But first, let's try to break it with this speed talking person."}, {"start": 42.08, "end": 45.120000000000005, "text": " This is the micro-recement presenting the most midget miniature motorcade of micro-recement."}, {"start": 45.120000000000005, "end": 47.120000000000005, "text": " Each one has dramatic details for a facial precision paint job."}, {"start": 47.120000000000005, "end": 48.480000000000004, "text": " Plus incredible micro-recement pocket place."}, {"start": 48.480000000000004, "end": 50.0, "text": " That's just a police station, fire station, restaurant,"}, {"start": 50.0, "end": 52.24, "text": " Wow, that's going to be hard."}, {"start": 52.24, "end": 54.0, "text": " So, let's see the result."}, {"start": 54.0, "end": 57.84, "text": " Wow, that is incredible."}, {"start": 57.84, "end": 61.44, "text": " And that's not all it can do so much more."}, {"start": 61.44, "end": 64.24, "text": " For instance, it does accents too."}, {"start": 64.24, "end": 65.52, "text": " Here is an example."}, {"start": 65.52, "end": 69.76, "text": " One of the most famous line marks on the board of the three holes."}, {"start": 69.76, "end": 72.8, "text": " And the method is that metal and the midgetions spot one hole."}, {"start": 72.8, "end": 74.72, "text": " So good."}, {"start": 74.72, "end": 78.4, "text": " Now, when talking about accents, I am here too."}, {"start": 78.4, "end": 82.16, "text": " And I will try my luck later in this video as well."}, {"start": 82.16, "end": 85.6, "text": " The results were interesting to say the least."}, {"start": 85.6, "end": 88.72, "text": " But wait, this knows not only English,"}, {"start": 88.72, "end": 91.52, "text": " but scientists at OpenAI said,"}, {"start": 91.52, "end": 95.6, "text": " let's throw in 96 other languages too."}, {"start": 95.6, "end": 97.75999999999999, "text": " Here is French, for example."}, {"start": 97.75999999999999, "end": 99.92, "text": " Whisper is a system of reconnaissance,"}, {"start": 99.92, "end": 101.52, "text": " automatic to the parole,"}, {"start": 101.52, "end": 102.96, "text": " entrenez sur six sans pedis."}, {"start": 102.96, "end": 107.03999999999999, "text": " And as you see, it also translates it into English."}, {"start": 107.03999999999999, "end": 108.08, "text": " So cool."}, {"start": 108.08, "end": 110.08, "text": " Now, this is all well and good."}, {"start": 110.08, "end": 114.56, "text": " But wait a second, transcription APIs already exist."}, {"start": 114.56, "end": 116.56, "text": " For instance, here on YouTube,"}, {"start": 116.56, "end": 119.75999999999999, "text": " you can also request those for many videos."}, {"start": 119.75999999999999, "end": 121.92, "text": " So, what is new here?"}, {"start": 121.92, "end": 124.16, "text": " Why publish this paper?"}, {"start": 124.16, "end": 125.75999999999999, "text": " Is this better?"}, {"start": 125.75999999999999, "end": 131.44, "text": " Also, what do we get for the 680,000 hours of training?"}, {"start": 131.44, "end": 133.2, "text": " Well, let's have a look."}, {"start": 133.2, "end": 134.16, "text": " This better be good."}, {"start": 135.04, "end": 136.0, "text": " Wow!"}, {"start": 136.0, "end": 137.52, "text": " What happened here?"}, {"start": 137.52, "end": 139.44, "text": " This is not a good start."}, {"start": 139.44, "end": 142.56, "text": " For the first site, it seems that we are not getting"}, {"start": 142.56, "end": 145.6, "text": " a great deal out of this AI at all."}, {"start": 145.6, "end": 150.24, "text": " Look, here between the 20 to 40 decibel signal to noise range,"}, {"start": 150.24, "end": 152.96, "text": " which means a good quality speed signal,"}, {"start": 152.96, "end": 154.64, "text": " it is the highest."}, {"start": 154.64, "end": 158.16, "text": " So, is it the best AI around for transcription?"}, {"start": 158.16, "end": 159.84, "text": " Well, not quite."}, {"start": 159.84, "end": 161.92, "text": " You see, what we are also looking at"}, {"start": 161.92, "end": 163.92, "text": " is the word error rate here,"}, {"start": 163.92, "end": 166.96, "text": " which is subject to minimization."}, {"start": 166.96, "end": 169.68, "text": " That means the smaller, the better."}, {"start": 169.68, "end": 172.48000000000002, "text": " We noted that 20 to 40 decibels"}, {"start": 172.48000000000002, "end": 175.12, "text": " is considered good quality signal."}, {"start": 175.12, "end": 179.28, "text": " Here, it has a higher error rate than previous techniques."}, {"start": 179.28, "end": 181.36, "text": " But wait, look at that."}, {"start": 181.36, "end": 185.20000000000002, "text": " When going to 5 to 10 decibels and below,"}, {"start": 185.20000000000002, "end": 189.36, "text": " these signals are so bad that we can barely tell them from noise."}, {"start": 189.36, "end": 193.04000000000002, "text": " For instance, imagine sitting in a really loud pub"}, {"start": 193.04000000000002, "end": 196.24, "text": " and here is where whisper really shines."}, {"start": 196.24, "end": 198.48000000000002, "text": " Here, it is the best."}, {"start": 198.48000000000002, "end": 200.64000000000001, "text": " And this is a good paper."}, {"start": 200.64000000000001, "end": 204.0, "text": " So, we have plenty more data on how it compares"}, {"start": 204.0, "end": 206.16, "text": " to a bunch of previous techniques."}, {"start": 206.16, "end": 209.52, "text": " Look, once again, we have the word error rate."}, {"start": 209.52, "end": 211.92000000000002, "text": " This is subject to minimization."}, {"start": 211.92000000000002, "end": 213.36, "text": " Lower is better."}, {"start": 213.36, "end": 218.16000000000003, "text": " From A to D, you see other previous automatic speech recognition"}, {"start": 218.16000000000003, "end": 221.20000000000002, "text": " systems and it beats all of them."}, {"start": 221.20000000000002, "end": 223.52, "text": " And what do we have here?"}, {"start": 223.52, "end": 228.08, "text": " Now, hold on to your papers because can that really be,"}, {"start": 228.08, "end": 230.76000000000002, "text": " is it as good as a human?"}, {"start": 230.76000000000002, "end": 232.52, "text": " That can't be, right?"}, {"start": 232.52, "end": 237.64000000000001, "text": " Well, the answer is yes, it can be as good as a human."}, {"start": 237.64000000000001, "end": 238.56, "text": " Kind of."}, {"start": 238.56, "end": 242.84, "text": " You see, it outperforms these professional human transcription"}, {"start": 242.84, "end": 246.24, "text": " services and is at the very least competitive"}, {"start": 246.24, "end": 247.96, "text": " with the best ones."}, {"start": 247.96, "end": 253.24, "text": " An AI that transcribes as well as a professional human does."}, {"start": 253.24, "end": 256.04, "text": " Wow, this truly feels like we are living"}, {"start": 256.04, "end": 258.0, "text": " in a science fiction movie."}, {"start": 258.0, "end": 259.92, "text": " What a time to be alive."}, {"start": 259.92, "end": 263.72, "text": " Humans, okay, it is as good as many humans."}, {"start": 263.72, "end": 267.84000000000003, "text": " That's all right, but does this pass the ultimate test"}, {"start": 267.84000000000003, "end": 269.84000000000003, "text": " for a speech AI?"}, {"start": 269.84000000000003, "end": 271.32, "text": " What would that be?"}, {"start": 271.32, "end": 274.24, "text": " Of course, that is the carotest."}, {"start": 274.24, "end": 277.72, "text": " That would be me speaking with the crazy accent."}, {"start": 277.72, "end": 280.08, "text": " Let's see, dear fellow scholars,"}, {"start": 280.08, "end": 283.28, "text": " this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 283.28, "end": 288.0, "text": " And dear fellow scanners, I don't know what's going on here."}, {"start": 288.0, "end": 291.91999999999996, "text": " It got my name perfectly, perhaps that is a sign"}, {"start": 291.91999999999996, "end": 295.32, "text": " of a super intelligence that is in the making."}, {"start": 295.32, "end": 299.91999999999996, "text": " Wow, the capitalization of two-minute papers is all right too."}, {"start": 299.91999999999996, "end": 303.64, "text": " Now, dear fellow scanners, let's try this again."}, {"start": 303.64, "end": 307.0, "text": " And now that is what I expected to happen."}, {"start": 307.0, "end": 309.96, "text": " The regular speech part is transcribed well,"}, {"start": 309.96, "end": 311.88, "text": " and it flabbed my name."}, {"start": 311.88, "end": 316.52, "text": " So, no super intelligence yet, at least not reliably."}, {"start": 316.52, "end": 319.2, "text": " So, what is all this good for?"}, {"start": 319.2, "end": 323.04, "text": " Well, imagine that you are looking at this amazing interview"}, {"start": 323.04, "end": 326.24, "text": " from Lex Friedman on super intelligence."}, {"start": 326.24, "end": 328.4, "text": " And it is one and a half hours."}, {"start": 328.4, "end": 331.16, "text": " Yes, that is very short for Lex."}, {"start": 331.16, "end": 334.52, "text": " Now, we know that they talk about immortality,"}, {"start": 334.52, "end": 336.68, "text": " but where exactly?"}, {"start": 336.68, "end": 339.56, "text": " Well, that's not a problem anymore, look."}, {"start": 339.56, "end": 343.6, "text": " Andre Carpethi ran Whisper on every single episode"}, {"start": 343.6, "end": 346.64, "text": " of Lex's podcast, and there we go."}, {"start": 346.64, "end": 350.12, "text": " This is the relevant part about immortality."}, {"start": 350.12, "end": 351.88, "text": " That is incredible."}, {"start": 351.88, "end": 355.84000000000003, "text": " Of course, you fellow scanners know that YouTube also helps us"}, {"start": 355.84000000000003, "end": 358.16, "text": " with its own transcription feature,"}, {"start": 358.16, "end": 361.04, "text": " or we can also look at the chapter markers,"}, {"start": 361.04, "end": 365.64, "text": " however, not all video and audio is on YouTube."}, {"start": 365.64, "end": 369.88, "text": " And here comes the kicker, Whisper works everywhere."}, {"start": 369.88, "end": 371.71999999999997, "text": " How cool is that?"}, {"start": 371.71999999999997, "end": 374.2, "text": " And here comes the best part."}, {"start": 374.2, "end": 376.03999999999996, "text": " Two amazing news."}, {"start": 376.03999999999996, "end": 380.28, "text": " One, it is open source, and two, not only that,"}, {"start": 380.28, "end": 382.8, "text": " but you can try it now too."}, {"start": 382.8, "end": 385.8, "text": " I put a link to both of these in the video description,"}, {"start": 385.8, "end": 388.59999999999997, "text": " but as always, please be patient."}, {"start": 388.59999999999997, "end": 390.47999999999996, "text": " Whenever we link to something,"}, {"start": 390.47999999999996, "end": 393.76, "text": " you fellow scanners are so excited to try it out."}, {"start": 393.76, "end": 396.56, "text": " We have crashed a bunch of webpages before."}, {"start": 396.56, "end": 399.71999999999997, "text": " This is what we call the scholarly stampede."}, {"start": 399.71999999999997, "end": 404.71999999999997, "text": " So I hear you asking, okay, but what is under the hood here?"}, {"start": 404.96, "end": 407.2, "text": " If you have a closer look at the paper,"}, {"start": 407.2, "end": 410.52, "text": " you see that it is using a simple algorithm,"}, {"start": 410.52, "end": 413.48, "text": " a transformer with a vast dataset,"}, {"start": 413.48, "end": 416.64, "text": " and it can get very, very forwarded."}, {"start": 416.64, "end": 419.32, "text": " You see here that it makes great use"}, {"start": 419.32, "end": 423.52, "text": " of that 680,000 hours of human speech,"}, {"start": 423.52, "end": 425.79999999999995, "text": " and languages other than English,"}, {"start": 425.79999999999995, "end": 429.68, "text": " and translation improves a great deal if we add more,"}, {"start": 429.68, "end": 433.35999999999996, "text": " and even the English part improves a bit too."}, {"start": 433.35999999999996, "end": 437.4, "text": " So this indicates that if we gave it even more data,"}, {"start": 437.4, "end": 439.59999999999997, "text": " it might improve it even more."}, {"start": 439.59999999999997, "end": 443.64, "text": " And don't forget, it can deal with noisy data really well."}, {"start": 443.64, "end": 447.64, "text": " So adding more might not be as big of a challenge,"}, {"start": 447.64, "end": 452.24, "text": " and it is already as good as many professional humans."}, {"start": 452.24, "end": 456.32, "text": " Wow, I can only imagine what this will be able to do"}, {"start": 456.32, "end": 458.84000000000003, "text": " just a couple more papers down the line."}, {"start": 458.84000000000003, "end": 460.76, "text": " What a time to be alive."}, {"start": 460.76, "end": 463.92, "text": " This episode is brought to you by AnySkill,"}, {"start": 463.92, "end": 465.6, "text": " the company behind Ray,"}, {"start": 465.6, "end": 468.36, "text": " the fastest growing open source framework"}, {"start": 468.36, "end": 472.2, "text": " for scalable AI and scalable Python."}, {"start": 472.2, "end": 474.64, "text": " Thousands of organizations use Ray,"}, {"start": 474.64, "end": 479.64, "text": " including open AI, Uber, Amazon, Spotify, Netflix,"}, {"start": 479.88, "end": 480.84000000000003, "text": " and more."}, {"start": 480.84, "end": 483.88, "text": " Ray less developers iterate faster"}, {"start": 483.88, "end": 486.08, "text": " by providing common infrastructure"}, {"start": 486.08, "end": 489.0, "text": " for scaling data in just and pre-processing,"}, {"start": 489.0, "end": 491.84, "text": " machine learning training, deep learning,"}, {"start": 491.84, "end": 495.76, "text": " hyperparameter tuning, model serving, and more."}, {"start": 495.76, "end": 498.52, "text": " All while integrating seamlessly"}, {"start": 498.52, "end": 501.47999999999996, "text": " with the rest of the machine learning ecosystem."}, {"start": 501.47999999999996, "end": 504.35999999999996, "text": " AnySkill is a fully managed Ray platform"}, {"start": 504.35999999999996, "end": 508.4, "text": " that allows teams to bring products to market faster"}, {"start": 508.4, "end": 511.79999999999995, "text": " by eliminating the need to manage infrastructure"}, {"start": 511.79999999999995, "end": 515.12, "text": " and by enabling new AI capabilities."}, {"start": 515.12, "end": 518.76, "text": " Ray and AnySkill can do recommendation systems"}, {"start": 518.76, "end": 522.24, "text": " time series forecasting, document understanding,"}, {"start": 522.24, "end": 526.16, "text": " image processing, industrial automation, and more."}, {"start": 526.16, "end": 531.16, "text": " Go to anyscale.com slash papers and try it out today."}, {"start": 531.28, "end": 535.3199999999999, "text": " Our thanks to AnySkill for helping us make better videos for you."}, {"start": 535.3199999999999, "end": 537.56, "text": " Thanks for watching and for your generous support,"}, {"start": 537.56, "end": 539.56, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=eM5jn8vY2OQ
OpenAI's DALL-E 2 Has Insane Capabilities! 🤖
❤️ Check out Runway and try it for free here: https://runwayml.com/papers/ Use the code TWOMINUTE at checkout to get 10% off! 📝 The paper "Hierarchical Text-Conditional Image Generation with CLIP Latents" is available here: https://openai.com/dall-e-2/ ☀️My free Master-level light transport course is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/rendering-course/ 📝 Our Separable Subsurface Scattering paper with Activition-Blizzard: https://users.cg.tuwien.ac.at/zsolnai/gfx/separable-subsurface-scattering-with-activision-blizzard/ 📝 Our earlier paper with the caustics: https://users.cg.tuwien.ac.at/zsolnai/gfx/adaptive_metropolis/ Reynante Martinez, the master's page: https://www.reynantemartinez.com/ Rendered images: LuxCore Render / Sharlybg https://luxcorerender.org/wp-content/uploads/2017/12/Salon22XS.jpg https://luxcorerender.org/wp-content/uploads/2017/12/SSDark_01b.jpg Hotel scene: Badblender - https://www.blendswap.com/blend/30669 Path tracing links on Shadertoy: https://www.shadertoy.com/view/tsBBWW https://www.shadertoy.com/view/MtfGR4 https://www.shadertoy.com/view/Ns2fzy Caustics: https://cgcookie.com/projects/luxrender-caustics https://twitter.com/djbaskin/status/1514735924826963981 Dispersion: https://wiki.luxcorerender.org/Glass_Material_IOR_and_Dispersion Chapters: 0:00 Teaser 0:48 Light Transport 1:18 Variant generation 1:48 Experiment 1 2:20 Let's try it again! 3:40 Experiment 2 5:05 Experiment 3 6:34 Experiment 4 7:40 Indirect Illumination, dispersion, course 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #openai #dalle
Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karo Jolna Ifehir. Finally, this is my Happy Episode. Well, of course, I am happy in every episode, but this is going to be my Happy Happy Episode, if you will. Why is that? Well, buckle up, because today we are going to use OpenAI's DOLI2 a text to image AI, and we will see what it is made of. Can it create beautiful light transport effects or not? We will see through four beautiful experiments. For instance, this may sound like science fiction, but today we will also see if it can recreate this scene from a true master of digital 3D art. So, what is this light thing I keep talking about? A light transport simulation means a computer program that is able to compute the path of light rays to create beautiful images like this, and this, and this. And our key problem is that initially we only get noisy images, and it can take a long time for the simulator to eliminate this noise. So, can DOLI2 help with that? Well, how? For instance, it can perform a variant generation where in goes one image and the AI synthesizes other similar images. This is really cool, as it means that the AI has a good understanding of what it sees and can create different variations of it. And, wait a minute, are you thinking what I am thinking? Oh, yes, experiment number one. De-noising. Let's give this noisy input image from a light transport simulator, give it to the variant generator, and see if it is able to recreate the essence of the image, but without the noise. Let's see. Well, that is super interesting. It did not denoise the image, but it did something else. It tried to understand what the noise is in this context and found it to be some sort of gold powder. How cool is that? Based on the insights gained here, let's try again with a little less noise. Oh, yes, this difficult scene would normally take even up to days to compute correctly. Do you see these light tricks here? We would need to clean those up. So, variant generation, it's your turn again. And, look at that. Wow, we get a noise free image that captured the essence of our input. I cannot believe it. So good. Interestingly, it did not ignore the light tricks, but it thought that this is the texture of the object and synthesize the new ones accordingly. This actually means that Dolly too does what it is supposed to be doing, faithfully reproducing the scene and putting a different spin on it. So cool. And I think this concept could be supercharged by generating such a noisy input quickly, then denoising it with one of those handcrafted techniques for these images. These are typically not perfect, but they may be just good enough to kickstart the variant generator. I would love to see some more detailed experiments in this direction. Now, what else can this do? Well, experiment number two, my favourite, caustics. Oh, yes, these are beautiful patterns of reflected light that we see a lot of in real life and they produce some of the most beautiful images, any light transport simulation can offer. Yes, that's right. With such a simulation, we can compute these two. How cool is that? So now, let's ask Dolly too to create some of these for us. And the results are truly sublime. So regular caustics checkmark. And what about those fun, hard-shaped caustics when we put a ring in the middle of an open book? My goodness, the AI understands that and it really works. Loving it. However, if you look at those beautiful volumetric caustics, when running variant generation on that, it only kind of works. There are some rays of hope here, but otherwise, I feel that the AI thinks that this is some sort of laser experiment instead. And also, don't forget about Daniel Baskin's amazing results who created these drinks. But wait, we are light transport researchers here, so we don't look at the drink. What do we look at? Yes, of course, the caustics. Beautiful. And if we are looking at beautiful things, time for experiment number three, subsurface scattering. What is that? Oh boy, subsurface scattering is the beautiful effect of light penetrating our skin, milk, and other materials, and bouncing inside before coming out again. The lack of this effect is why the skin looks a little plasticky in older video games. However, light transport simulation researchers took care of that too. This is from our earlier paper with the Activision Blizzard Game Development Company. This is the same phenomenon, a simulation without subsurface scattering. And this one is with simulating this effect. And in real time. Beautiful. You can find the link to this paper in the video description. So, can an AI pull this off today? That's impossible, right? Well, it seems so. If I plainly ask for subsurface scattering from Dolly2, I did not get any of that. However, when prompting a text to image AI, we have to know not only what we wish to see, but how to get it out of the algorithm. So, if we ask for translucent objects with strong backlighting, bingo, Dolly2 can do this too. So good. Loving it. And now, hold onto your papers, because now is the time for our final experiment. Experiment number four, reproducing the work of a true master. If the previous experiment was nearly impossible, I really don't know what this is. Here is a beautiful little virtual world from Reynante Martinez, and it really speaks for itself. Now, let's put it into the variant generator, and see what Dolly2 is made of. Wow! Look at that. These are incredibly good. Not as good as the master himself, but I think the first law of papers should be invoked here. Wait, what is that? The first law of papers says that research is a process. Don't not look at where we are, look at where we will be two more papers down the line. And two more papers down the line, I have to say. I can imagine that we will get comparable images. I also love how it thinks that fingerprints are part of the liquid. It is a bit of a limitation, but a really beautiful one. What a time to be alive! And we haven't even talked about indirect illumination, dispersion, and many other amazing light transport effects. I really hope we will see some more experiments perhaps from you fellow scholars in this direction too. By the way, I have a master level light transport simulation course for all of you, free of charge, no strings attached, and we write a beautiful little simulator that can create this image and more. The link is in the video description. This episode has been supported by Ranway, professional and magical AI video editing for everyone. I often hear you fellow scholars asking, okay, these AI techniques look great, but when do I get to use them? And the answer is, right now, Ranway is an amazing video editor that can do many of the things that you see here in this series. For instance, it can automatically replace the background behind the person. It can do in-painting for videos amazingly well, and can do even text to image, image to image, you name it. No wonder it is used by editors, post-production teams, and creators at companies like CBS, Google, Vox, and many other. Make sure to go to RanwayML.com, slash papers, sign up, and try it for free today. And here comes the best part, use the code two minute at checkout, and get 10% off your first month. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karo Jolna Ifehir."}, {"start": 4.5600000000000005, "end": 7.84, "text": " Finally, this is my Happy Episode."}, {"start": 7.84, "end": 16.12, "text": " Well, of course, I am happy in every episode, but this is going to be my Happy Happy Episode, if you will."}, {"start": 16.12, "end": 17.36, "text": " Why is that?"}, {"start": 17.36, "end": 27.6, "text": " Well, buckle up, because today we are going to use OpenAI's DOLI2 a text to image AI, and we will see what it is made of."}, {"start": 27.6, "end": 32.08, "text": " Can it create beautiful light transport effects or not?"}, {"start": 32.08, "end": 35.6, "text": " We will see through four beautiful experiments."}, {"start": 35.6, "end": 47.44, "text": " For instance, this may sound like science fiction, but today we will also see if it can recreate this scene from a true master of digital 3D art."}, {"start": 47.44, "end": 51.040000000000006, "text": " So, what is this light thing I keep talking about?"}, {"start": 51.04, "end": 63.36, "text": " A light transport simulation means a computer program that is able to compute the path of light rays to create beautiful images like this, and this, and this."}, {"start": 63.36, "end": 74.0, "text": " And our key problem is that initially we only get noisy images, and it can take a long time for the simulator to eliminate this noise."}, {"start": 74.0, "end": 77.03999999999999, "text": " So, can DOLI2 help with that?"}, {"start": 77.04, "end": 88.32000000000001, "text": " Well, how? For instance, it can perform a variant generation where in goes one image and the AI synthesizes other similar images."}, {"start": 88.32000000000001, "end": 97.60000000000001, "text": " This is really cool, as it means that the AI has a good understanding of what it sees and can create different variations of it."}, {"start": 97.60000000000001, "end": 102.24000000000001, "text": " And, wait a minute, are you thinking what I am thinking?"}, {"start": 102.24000000000001, "end": 105.2, "text": " Oh, yes, experiment number one."}, {"start": 105.2, "end": 120.48, "text": " De-noising. Let's give this noisy input image from a light transport simulator, give it to the variant generator, and see if it is able to recreate the essence of the image, but without the noise."}, {"start": 120.48, "end": 128.96, "text": " Let's see. Well, that is super interesting. It did not denoise the image, but it did something else."}, {"start": 128.96, "end": 137.20000000000002, "text": " It tried to understand what the noise is in this context and found it to be some sort of gold powder."}, {"start": 137.20000000000002, "end": 144.64000000000001, "text": " How cool is that? Based on the insights gained here, let's try again with a little less noise."}, {"start": 144.64000000000001, "end": 151.28, "text": " Oh, yes, this difficult scene would normally take even up to days to compute correctly."}, {"start": 151.28, "end": 156.08, "text": " Do you see these light tricks here? We would need to clean those up."}, {"start": 156.08, "end": 159.68, "text": " So, variant generation, it's your turn again."}, {"start": 159.68, "end": 167.60000000000002, "text": " And, look at that. Wow, we get a noise free image that captured the essence of our input."}, {"start": 167.60000000000002, "end": 181.44, "text": " I cannot believe it. So good. Interestingly, it did not ignore the light tricks, but it thought that this is the texture of the object and synthesize the new ones accordingly."}, {"start": 181.44, "end": 191.12, "text": " This actually means that Dolly too does what it is supposed to be doing, faithfully reproducing the scene and putting a different spin on it."}, {"start": 191.12, "end": 200.64, "text": " So cool. And I think this concept could be supercharged by generating such a noisy input quickly,"}, {"start": 200.64, "end": 205.6, "text": " then denoising it with one of those handcrafted techniques for these images."}, {"start": 205.6, "end": 212.88, "text": " These are typically not perfect, but they may be just good enough to kickstart the variant generator."}, {"start": 212.88, "end": 217.28, "text": " I would love to see some more detailed experiments in this direction."}, {"start": 217.28, "end": 224.88, "text": " Now, what else can this do? Well, experiment number two, my favourite, caustics."}, {"start": 224.88, "end": 234.79999999999998, "text": " Oh, yes, these are beautiful patterns of reflected light that we see a lot of in real life and they produce some of the most beautiful images,"}, {"start": 234.8, "end": 242.4, "text": " any light transport simulation can offer. Yes, that's right. With such a simulation, we can compute these two."}, {"start": 242.4, "end": 249.20000000000002, "text": " How cool is that? So now, let's ask Dolly too to create some of these for us."}, {"start": 249.20000000000002, "end": 255.36, "text": " And the results are truly sublime. So regular caustics checkmark."}, {"start": 255.36, "end": 262.64, "text": " And what about those fun, hard-shaped caustics when we put a ring in the middle of an open book?"}, {"start": 262.64, "end": 267.76, "text": " My goodness, the AI understands that and it really works."}, {"start": 267.76, "end": 272.96, "text": " Loving it. However, if you look at those beautiful volumetric caustics,"}, {"start": 272.96, "end": 277.84, "text": " when running variant generation on that, it only kind of works."}, {"start": 277.84, "end": 286.88, "text": " There are some rays of hope here, but otherwise, I feel that the AI thinks that this is some sort of laser experiment instead."}, {"start": 286.88, "end": 292.96, "text": " And also, don't forget about Daniel Baskin's amazing results who created these drinks."}, {"start": 292.96, "end": 298.71999999999997, "text": " But wait, we are light transport researchers here, so we don't look at the drink."}, {"start": 298.71999999999997, "end": 304.0, "text": " What do we look at? Yes, of course, the caustics. Beautiful."}, {"start": 304.0, "end": 311.36, "text": " And if we are looking at beautiful things, time for experiment number three, subsurface scattering."}, {"start": 311.36, "end": 319.28000000000003, "text": " What is that? Oh boy, subsurface scattering is the beautiful effect of light penetrating our skin,"}, {"start": 319.28000000000003, "end": 325.28000000000003, "text": " milk, and other materials, and bouncing inside before coming out again."}, {"start": 325.28000000000003, "end": 331.68, "text": " The lack of this effect is why the skin looks a little plasticky in older video games."}, {"start": 331.68, "end": 336.08000000000004, "text": " However, light transport simulation researchers took care of that too."}, {"start": 336.08, "end": 341.2, "text": " This is from our earlier paper with the Activision Blizzard Game Development Company."}, {"start": 341.2, "end": 346.24, "text": " This is the same phenomenon, a simulation without subsurface scattering."}, {"start": 346.24, "end": 351.68, "text": " And this one is with simulating this effect. And in real time."}, {"start": 351.68, "end": 356.15999999999997, "text": " Beautiful. You can find the link to this paper in the video description."}, {"start": 356.15999999999997, "end": 361.44, "text": " So, can an AI pull this off today? That's impossible, right?"}, {"start": 361.44, "end": 369.28, "text": " Well, it seems so. If I plainly ask for subsurface scattering from Dolly2, I did not get any of that."}, {"start": 369.28, "end": 376.0, "text": " However, when prompting a text to image AI, we have to know not only what we wish to see,"}, {"start": 376.0, "end": 384.08, "text": " but how to get it out of the algorithm. So, if we ask for translucent objects with strong backlighting,"}, {"start": 384.08, "end": 392.47999999999996, "text": " bingo, Dolly2 can do this too. So good. Loving it. And now, hold onto your papers, because now is the"}, {"start": 392.47999999999996, "end": 400.15999999999997, "text": " time for our final experiment. Experiment number four, reproducing the work of a true master."}, {"start": 400.15999999999997, "end": 405.59999999999997, "text": " If the previous experiment was nearly impossible, I really don't know what this is."}, {"start": 405.59999999999997, "end": 412.56, "text": " Here is a beautiful little virtual world from Reynante Martinez, and it really speaks for itself."}, {"start": 412.56, "end": 418.96, "text": " Now, let's put it into the variant generator, and see what Dolly2 is made of."}, {"start": 419.68, "end": 426.8, "text": " Wow! Look at that. These are incredibly good. Not as good as the master himself,"}, {"start": 426.8, "end": 433.04, "text": " but I think the first law of papers should be invoked here. Wait, what is that?"}, {"start": 433.04, "end": 438.88, "text": " The first law of papers says that research is a process. Don't not look at where we are,"}, {"start": 438.88, "end": 444.71999999999997, "text": " look at where we will be two more papers down the line. And two more papers down the line,"}, {"start": 444.71999999999997, "end": 452.71999999999997, "text": " I have to say. I can imagine that we will get comparable images. I also love how it thinks that"}, {"start": 452.71999999999997, "end": 459.76, "text": " fingerprints are part of the liquid. It is a bit of a limitation, but a really beautiful one."}, {"start": 459.76, "end": 466.08, "text": " What a time to be alive! And we haven't even talked about indirect illumination, dispersion,"}, {"start": 466.08, "end": 472.47999999999996, "text": " and many other amazing light transport effects. I really hope we will see some more experiments"}, {"start": 472.47999999999996, "end": 478.88, "text": " perhaps from you fellow scholars in this direction too. By the way, I have a master level light"}, {"start": 478.88, "end": 485.2, "text": " transport simulation course for all of you, free of charge, no strings attached, and we write"}, {"start": 485.2, "end": 492.08, "text": " a beautiful little simulator that can create this image and more. The link is in the video description."}, {"start": 492.08, "end": 499.52, "text": " This episode has been supported by Ranway, professional and magical AI video editing for everyone."}, {"start": 499.52, "end": 506.88, "text": " I often hear you fellow scholars asking, okay, these AI techniques look great, but when do I get"}, {"start": 506.88, "end": 513.92, "text": " to use them? And the answer is, right now, Ranway is an amazing video editor that can do many of"}, {"start": 513.92, "end": 520.0799999999999, "text": " the things that you see here in this series. For instance, it can automatically replace the"}, {"start": 520.08, "end": 528.24, "text": " background behind the person. It can do in-painting for videos amazingly well, and can do even text to"}, {"start": 528.24, "end": 534.96, "text": " image, image to image, you name it. No wonder it is used by editors, post-production teams,"}, {"start": 534.96, "end": 543.76, "text": " and creators at companies like CBS, Google, Vox, and many other. Make sure to go to RanwayML.com,"}, {"start": 543.76, "end": 551.04, "text": " slash papers, sign up, and try it for free today. And here comes the best part, use the code"}, {"start": 551.04, "end": 557.36, "text": " two minute at checkout, and get 10% off your first month. Thanks for watching and for your generous"}, {"start": 557.36, "end": 587.2, "text": " support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=hPR5kU91Ef4
Google’s New AI Learns Table Tennis! 🏓
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "i-Sim2Real: Reinforcement Learning of Robotic Policies in Tight Human-Robot Interaction Loops" is available here: https://sites.google.com/view/is2r https://twitter.com/lgraesser3/status/1547942995139301376 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photos/table-tennis-passion-sport-1208385/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two minute papers with Dr. Karajol Naifahir. Do you see this new table tennis robot? It barely played any games in the real world, yet it can return the ball more than a hundred times without failing. Wow! So, how is this even possible? Well, this is a seem-torial paper, which means that first the robot starts learning in a simulation. Open AI did this earlier by teaching the robot hand in a simulated environment to manipulate this ruby cube and Tesla also trains its cars in a computer simulation. Why? Well, in the real world some things are possible, but in a simulated world anything is possible. Yes, even this. And the self-driving car can safely train in this environment, and when it is ready, it can be safely brought into the real world. How cool is that? Now how do we apply this concept to table tennis? Hmm, well, in this case the robot would not move, but it would play a computer game in its head if you will. But not so fast. That is impossible. What are we simulating exactly? The machine doesn't even know how humans play. There is no one to play against. Now check this out. To solve this, first the robot asks for some human data. Look, it won't do anything, it just observes how we play. And it only requires the short sequences. Then, it builds a model of how we play and embeds us into a computer simulation where it plays against us over and over again without any real physical movement. It is training the brain, if you will. And now comes the key step. This knowledge from the computer simulation is now transferred to the real robot. And now, let's see if this computer game knowledge really translates to the real world. So can it return this ball? It can? Well, kind of. One more time. Okay, better. And now, well, it missed again. I see some signs of learning here, but this is not great. So is that it? So much for learning in a simulation and bringing this knowledge into the real world. Right? Well, do not despair because there is still hope. What can we do? Well, now it knows how it failed and how it interacted with the human. Yes, that is great. Why? Because it can feed this new knowledge back into the simulation. The simulation can now be fired up once again. And with all this knowledge, it can repeat until the simulation starts looking very similar to the real world. That is where the real fun begins. Why? Well, check this out. This is the previous version of this technique and as you see, this does not play well. So how about the new method? Now hold on to your papers and marvel at this rally. Ety-2 hits and not one mistake. This is so much better. Wow, this seem to real concept really works. And wait a minute, we are experienced fellow scholars here, so we have a question. If the training set was built from data when it played against this human being, does it really know how to play against only this person? Or did it obtain more general knowledge and can it play with others? Well, let's have a look. The robot hasn't played this person before. And let's see how the previous technique fares. Well, that was not a long rally. And neither is this one. And now let's see the new method. Oh my, this is so much better. It learns much more general information from the very limited human data it was given. So it can play really well with all kinds of players of different skill levels. Here you see a selection of them. And all this from learning in a computer game with just a tiny bit of human behavioral data. And it can even perform a rally of over a hundred hits. What a time to be alive. So does this get your mind going? What would you use this seem to real concept for? Let me know in the comments below. If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation required. Sign up and launch an instance and hold on to your papers because with Lambda GPU cloud, you can get on demand a 100 instances for $1.10 per hour versus $4.10 per hour with AWS. That's 73% savings. Did I mention they also offer persistent storage? So join researchers at organizations like Apple, MIT and Caltech in using Lambda cloud instances, workstations or servers. Make sure to go to lambda-labs.com slash papers to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support. I'll see you next time.
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karajol Naifahir."}, {"start": 4.6000000000000005, "end": 7.42, "text": " Do you see this new table tennis robot?"}, {"start": 7.42, "end": 14.200000000000001, "text": " It barely played any games in the real world, yet it can return the ball more than a hundred"}, {"start": 14.200000000000001, "end": 16.0, "text": " times without failing."}, {"start": 16.0, "end": 17.0, "text": " Wow!"}, {"start": 17.0, "end": 19.6, "text": " So, how is this even possible?"}, {"start": 19.6, "end": 26.28, "text": " Well, this is a seem-torial paper, which means that first the robot starts learning in"}, {"start": 26.28, "end": 27.6, "text": " a simulation."}, {"start": 27.6, "end": 33.88, "text": " Open AI did this earlier by teaching the robot hand in a simulated environment to manipulate"}, {"start": 33.88, "end": 39.92, "text": " this ruby cube and Tesla also trains its cars in a computer simulation."}, {"start": 39.92, "end": 40.92, "text": " Why?"}, {"start": 40.92, "end": 48.400000000000006, "text": " Well, in the real world some things are possible, but in a simulated world anything is possible."}, {"start": 48.400000000000006, "end": 50.88, "text": " Yes, even this."}, {"start": 50.88, "end": 56.400000000000006, "text": " And the self-driving car can safely train in this environment, and when it is ready, it"}, {"start": 56.4, "end": 59.68, "text": " can be safely brought into the real world."}, {"start": 59.68, "end": 61.519999999999996, "text": " How cool is that?"}, {"start": 61.519999999999996, "end": 66.2, "text": " Now how do we apply this concept to table tennis?"}, {"start": 66.2, "end": 72.94, "text": " Hmm, well, in this case the robot would not move, but it would play a computer game in"}, {"start": 72.94, "end": 74.96, "text": " its head if you will."}, {"start": 74.96, "end": 77.16, "text": " But not so fast."}, {"start": 77.16, "end": 78.96, "text": " That is impossible."}, {"start": 78.96, "end": 81.08, "text": " What are we simulating exactly?"}, {"start": 81.08, "end": 84.6, "text": " The machine doesn't even know how humans play."}, {"start": 84.6, "end": 87.08, "text": " There is no one to play against."}, {"start": 87.08, "end": 88.91999999999999, "text": " Now check this out."}, {"start": 88.91999999999999, "end": 93.52, "text": " To solve this, first the robot asks for some human data."}, {"start": 93.52, "end": 99.16, "text": " Look, it won't do anything, it just observes how we play."}, {"start": 99.16, "end": 102.0, "text": " And it only requires the short sequences."}, {"start": 102.0, "end": 108.91999999999999, "text": " Then, it builds a model of how we play and embeds us into a computer simulation where"}, {"start": 108.92, "end": 115.0, "text": " it plays against us over and over again without any real physical movement."}, {"start": 115.0, "end": 117.72, "text": " It is training the brain, if you will."}, {"start": 117.72, "end": 119.68, "text": " And now comes the key step."}, {"start": 119.68, "end": 124.96000000000001, "text": " This knowledge from the computer simulation is now transferred to the real robot."}, {"start": 124.96000000000001, "end": 130.76, "text": " And now, let's see if this computer game knowledge really translates to the real world."}, {"start": 130.76, "end": 133.28, "text": " So can it return this ball?"}, {"start": 133.28, "end": 134.28, "text": " It can?"}, {"start": 134.28, "end": 136.4, "text": " Well, kind of."}, {"start": 136.4, "end": 137.4, "text": " One more time."}, {"start": 137.4, "end": 139.68, "text": " Okay, better."}, {"start": 139.68, "end": 142.64000000000001, "text": " And now, well, it missed again."}, {"start": 142.64000000000001, "end": 147.4, "text": " I see some signs of learning here, but this is not great."}, {"start": 147.4, "end": 149.16, "text": " So is that it?"}, {"start": 149.16, "end": 154.4, "text": " So much for learning in a simulation and bringing this knowledge into the real world."}, {"start": 154.4, "end": 155.4, "text": " Right?"}, {"start": 155.4, "end": 159.56, "text": " Well, do not despair because there is still hope."}, {"start": 159.56, "end": 160.56, "text": " What can we do?"}, {"start": 160.56, "end": 166.44, "text": " Well, now it knows how it failed and how it interacted with the human."}, {"start": 166.44, "end": 168.68, "text": " Yes, that is great."}, {"start": 168.68, "end": 170.07999999999998, "text": " Why?"}, {"start": 170.07999999999998, "end": 174.76, "text": " Because it can feed this new knowledge back into the simulation."}, {"start": 174.76, "end": 178.6, "text": " The simulation can now be fired up once again."}, {"start": 178.6, "end": 185.04, "text": " And with all this knowledge, it can repeat until the simulation starts looking very similar"}, {"start": 185.04, "end": 186.76, "text": " to the real world."}, {"start": 186.76, "end": 189.16, "text": " That is where the real fun begins."}, {"start": 189.16, "end": 190.16, "text": " Why?"}, {"start": 190.16, "end": 192.04, "text": " Well, check this out."}, {"start": 192.04, "end": 198.04, "text": " This is the previous version of this technique and as you see, this does not play well."}, {"start": 198.04, "end": 200.44, "text": " So how about the new method?"}, {"start": 200.44, "end": 204.95999999999998, "text": " Now hold on to your papers and marvel at this rally."}, {"start": 204.95999999999998, "end": 209.32, "text": " Ety-2 hits and not one mistake."}, {"start": 209.32, "end": 211.12, "text": " This is so much better."}, {"start": 211.12, "end": 215.64, "text": " Wow, this seem to real concept really works."}, {"start": 215.64, "end": 222.0, "text": " And wait a minute, we are experienced fellow scholars here, so we have a question."}, {"start": 222.0, "end": 227.44, "text": " If the training set was built from data when it played against this human being, does"}, {"start": 227.44, "end": 231.68, "text": " it really know how to play against only this person?"}, {"start": 231.68, "end": 236.56, "text": " Or did it obtain more general knowledge and can it play with others?"}, {"start": 236.56, "end": 238.56, "text": " Well, let's have a look."}, {"start": 238.56, "end": 241.76, "text": " The robot hasn't played this person before."}, {"start": 241.76, "end": 245.2, "text": " And let's see how the previous technique fares."}, {"start": 245.2, "end": 248.84, "text": " Well, that was not a long rally."}, {"start": 248.84, "end": 250.72, "text": " And neither is this one."}, {"start": 250.72, "end": 253.52, "text": " And now let's see the new method."}, {"start": 253.52, "end": 257.4, "text": " Oh my, this is so much better."}, {"start": 257.4, "end": 263.44, "text": " It learns much more general information from the very limited human data it was given."}, {"start": 263.44, "end": 269.16, "text": " So it can play really well with all kinds of players of different skill levels."}, {"start": 269.16, "end": 271.32, "text": " Here you see a selection of them."}, {"start": 271.32, "end": 277.04, "text": " And all this from learning in a computer game with just a tiny bit of human behavioral"}, {"start": 277.04, "end": 278.2, "text": " data."}, {"start": 278.2, "end": 283.2, "text": " And it can even perform a rally of over a hundred hits."}, {"start": 283.2, "end": 284.92, "text": " What a time to be alive."}, {"start": 284.92, "end": 287.12, "text": " So does this get your mind going?"}, {"start": 287.12, "end": 290.28, "text": " What would you use this seem to real concept for?"}, {"start": 290.28, "end": 292.08, "text": " Let me know in the comments below."}, {"start": 292.08, "end": 299.03999999999996, "text": " If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices"}, {"start": 299.03999999999996, "end": 302.48, "text": " in the world for GPU cloud compute."}, {"start": 302.48, "end": 305.44, "text": " No commitments or negotiation required."}, {"start": 305.44, "end": 312.76, "text": " Sign up and launch an instance and hold on to your papers because with Lambda GPU cloud,"}, {"start": 312.76, "end": 323.04, "text": " you can get on demand a 100 instances for $1.10 per hour versus $4.10 per hour with AWS."}, {"start": 323.04, "end": 325.64, "text": " That's 73% savings."}, {"start": 325.64, "end": 329.12, "text": " Did I mention they also offer persistent storage?"}, {"start": 329.12, "end": 337.28000000000003, "text": " So join researchers at organizations like Apple, MIT and Caltech in using Lambda cloud instances,"}, {"start": 337.28000000000003, "end": 339.64, "text": " workstations or servers."}, {"start": 339.64, "end": 346.04, "text": " Make sure to go to lambda-labs.com slash papers to sign up for one of their amazing GPU"}, {"start": 346.04, "end": 347.32, "text": " instances today."}, {"start": 347.32, "end": 349.72, "text": " Thanks for watching and for your generous support."}, {"start": 349.72, "end": 359.72, "text": " I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=bT8e1EV5-ic
Stable Diffusion Is Getting Outrageously Good! 🤯
❤️ Check out Anyscale and try it for free here: https://www.anyscale.com/papers 📝 The paper "High-Resolution Image Synthesis with Latent Diffusion Models" is available here: https://ommer-lab.com/research/latent-diffusion-models/ https://github.com/mallorbc/stable-diffusion-klms-gui You can also try Stable diffusion for free here: https://huggingface.co/spaces/stabilityai/stable-diffusion Credits: 1. Prompt-image repository https://lexica.art + Variants from photos https://twitter.com/sharifshameem/status/157177206133663334 2. Infinite zoom https://twitter.com/matthen2/status/1564608723636654093 + how to do it https://twitter.com/matthen2/status/1564608773485895692 3. Lego to reality https://twitter.com/matthen2/status/156609779409551360 5. 2D to 3D https://twitter.com/thibaudz/status/1566136808504786949 6. Cat Knight https://hostux.social/@valere/108939000926741542 7. Drawing to image https://www.reddit.com/r/MachineLearning/comments/x5dwm5/p_apple_pencil_with_the_power_of_local_stable/ 8. Image to image https://sciprogramming.com/community/index.php?topic=2081.0 9. Variant generation easier https://twitter.com/Buntworthy/status/1566744186153484288 + https://github.com/justinpinkney/stable-diffusion + https://github.com/gradio-app/gradio 10. Texture synthesis https://twitter.com/metasemantic/status/1568997322805100547 + https://twitter.com/dekapoppo/status/1571913696523489280 11. Newer version of stable_diffusion_videos - https://twitter.com/_nateraw/status/1569315074824871936 Interpolation: https://twitter.com/xsteenbrugge/status/1558508866463219712 Full video of interpolation: https://www.youtube.com/watch?v=Bo3VZCjDhGI Thumbnail source images: Anjney Midha 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Chapters: 0:00 Intro 0:30 Stable Diffusion 1:20 AI art repository 2:14 Infinite zoom 2:24 Lego to reality 2:52 Creating 3D images 3:16 Cat Knight 3:50 The rest of the owl 4:03 Image to image! 4:43 Variant generation 5:02 Texture synthesis 5:55 Stable Diffusion video generator 6:20 Free and open for all of us! Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zone Fahir. Today, you will see the power of human ingenuity supercharged by an AI. As we are living the advent of AI-based image generation, we now have several tools that are so easy to use, we just enter a piece of text and out comes a beautiful image. Now, you're asking, okay Karo, they are easy to use, but for whom? Well, good news. We now have a new solution called stable diffusion, where the model weights and the full source code are available for free for everyone. Finally, we talked a bit about this before, and once again, I cannot overstate how amazing this is. I am completely spellbound by how the community has worked together to bring this project to life, and you fellow scholars just kept on improving it, and it harnesses the greatest asset of humanity, and that is the power of the community working together, and I cannot believe how much stable diffusion has improved in just the last few weeks. Don't believe it? Well, let's have a look together, and 10 amazing examples of how the community is already using it. One, today, the image generation works so inexpensively that we don't even need to necessarily generate our own. We can even look at this amazing repository where we enter the prompt and can find thousands and thousands of generated images for death concept. Yes, even for Napoleon cats, we have thousands of hits. So good. Now, additionally, we can also add a twist to it by photographing something in real life, obtaining a text prompt for it, and bam! It finds similar images that were synthesized by stable diffusion. This is very a generation of sorts, but piggybacking on images that have been synthesized already, therefore we can choose from a large gallery of these works. Two, by using a little trickery and the image-impaining feature, we can now create these amazing infinite zoom images. So good! Three, whenever we build something really cool with Legos, we can now ask stable diffusion to reimagine what it would look like if it were a real object. The results are by no means perfect, but based on what it comes up with, it really seems to understand what is being built here and what its real counterpart would look like. I love it! Four, after generating a flat, 2D image with stable diffusion, with other techniques, we can obtain a depth map which describes how four different objects are from the camera. Now that is something that we've seen before. However, now in Adobe's After Effects, look, we can create this little video with a parallax effect. Absolutely incredible! Five, have a look at this catnite. I love the eyes and all of these gorgeous details on the armor. This image really tells the story, but what is even better is that not only the prompt is available, but also stable diffusion is a free and open source model, so we can pop the hood, reuse the same parameters as the author, and get a reproduction of the very same image. And it is also much easier to edit it this way if we wish to see anything changed. Six, if we are not the most skilled artist, we can draw a really rudimentary owl, handed to the AI, and it will draw the rest of this fine owl. Seven, and if you think the drawing to image example was amazing, now hold onto your papers for this one. This fellow scholar had a crazy idea. Look, these screenshots of old Sierra video games were given to the algorithm, and there is no way, right? Well, let's see. Oh wow! Look at that! The results are absolutely incredible. I love how closely it follows the framing and the mood of the original photos. I have to be honest, some of these feel good to go as they are. What a time to be alive! Eight, with these new web apps, variant generation is now way easier and faster than before. It is now as simple as dropping in an image. By the way, a link to each of these is available in the video description, and their source code is available as well. Nine, in an earlier episode, we had a look at how artists are already using Dolly II in the industry to make a photo of something and miraculously, extended almost infinitely. This is called texture synthesis, and no seems anywhere to be seen. And now, deep fellow scholars, seamless texture generation, is now possible, in stable diffusion too. Not too many years ago, we needed not only a proper handcrafted computer graphics algorithm to even have a fighting chance to create something like this, but implementing a bunch of these techniques was also required because different algorithms did well on different examples. And now, just one tool that can do it all. How cool is that? And 10, stable diffusion itself is also being improved. Oh yes, this new version adds super-resolution to the mix which enables us to synthesize even more details and even higher resolution images with it. This thing is improving so quickly, we can barely keep up with it. So, which one was your favorite? Let me know in the comments below. And once again, this is my favorite type of work, which is free and open for everyone. So, I would like you fellow scholars to also take out your digital wrenches and create something new and amazing. Let the experiments begin. This episode is brought to you by AnySkill. The company behind Ray, the fastest growing open source framework for scalable AI and scalable Python. Thousands of organizations use Ray, including open AI, Uber, Amazon, Spotify, Netflix, and more. Ray, less developers, iterate faster by providing common infrastructure for scaling data in just and pre-processing, machine learning training, deep learning, hyperparameter tuning, model serving, and more. All while integrating seamlessly with the rest of the machine learning ecosystem. AnySkill is a fully managed Ray platform that allows teams to bring products to market faster by eliminating the need to manage infrastructure and by enabling new AI capabilities. Ray and AnySkill can do recommendation systems, time series forecasting, document understanding, image processing, industrial automation, and more. Go to AnySkill.com slash peepers and try it out today. Our thanks to AnySkill for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zone Fahir."}, {"start": 4.5600000000000005, "end": 11.040000000000001, "text": " Today, you will see the power of human ingenuity supercharged by an AI."}, {"start": 11.040000000000001, "end": 14.88, "text": " As we are living the advent of AI-based image generation,"}, {"start": 14.88, "end": 19.04, "text": " we now have several tools that are so easy to use,"}, {"start": 19.04, "end": 24.16, "text": " we just enter a piece of text and out comes a beautiful image."}, {"start": 24.16, "end": 29.76, "text": " Now, you're asking, okay Karo, they are easy to use, but for whom?"}, {"start": 29.76, "end": 34.96, "text": " Well, good news. We now have a new solution called stable diffusion,"}, {"start": 34.96, "end": 41.04, "text": " where the model weights and the full source code are available for free for everyone."}, {"start": 41.04, "end": 48.24, "text": " Finally, we talked a bit about this before, and once again, I cannot overstate how amazing this is."}, {"start": 48.24, "end": 55.040000000000006, "text": " I am completely spellbound by how the community has worked together to bring this project to life,"}, {"start": 55.04, "end": 62.64, "text": " and you fellow scholars just kept on improving it, and it harnesses the greatest asset of humanity,"}, {"start": 62.64, "end": 66.88, "text": " and that is the power of the community working together,"}, {"start": 66.88, "end": 73.84, "text": " and I cannot believe how much stable diffusion has improved in just the last few weeks."}, {"start": 73.84, "end": 76.88, "text": " Don't believe it? Well, let's have a look together,"}, {"start": 76.88, "end": 82.16, "text": " and 10 amazing examples of how the community is already using it."}, {"start": 82.16, "end": 91.36, "text": " One, today, the image generation works so inexpensively that we don't even need to necessarily generate our own."}, {"start": 91.36, "end": 102.08, "text": " We can even look at this amazing repository where we enter the prompt and can find thousands and thousands of generated images for death concept."}, {"start": 102.08, "end": 108.0, "text": " Yes, even for Napoleon cats, we have thousands of hits. So good."}, {"start": 108.0, "end": 114.4, "text": " Now, additionally, we can also add a twist to it by photographing something in real life,"}, {"start": 114.4, "end": 117.76, "text": " obtaining a text prompt for it, and bam!"}, {"start": 117.76, "end": 122.72, "text": " It finds similar images that were synthesized by stable diffusion."}, {"start": 122.72, "end": 130.08, "text": " This is very a generation of sorts, but piggybacking on images that have been synthesized already,"}, {"start": 130.08, "end": 134.08, "text": " therefore we can choose from a large gallery of these works."}, {"start": 134.08, "end": 138.88000000000002, "text": " Two, by using a little trickery and the image-impaining feature,"}, {"start": 138.88000000000002, "end": 143.44, "text": " we can now create these amazing infinite zoom images."}, {"start": 143.44, "end": 144.48000000000002, "text": " So good!"}, {"start": 144.48000000000002, "end": 148.48000000000002, "text": " Three, whenever we build something really cool with Legos,"}, {"start": 148.48000000000002, "end": 155.76000000000002, "text": " we can now ask stable diffusion to reimagine what it would look like if it were a real object."}, {"start": 155.76000000000002, "end": 158.64000000000001, "text": " The results are by no means perfect,"}, {"start": 158.64, "end": 164.79999999999998, "text": " but based on what it comes up with, it really seems to understand what is being built here"}, {"start": 164.79999999999998, "end": 167.67999999999998, "text": " and what its real counterpart would look like."}, {"start": 167.67999999999998, "end": 168.64, "text": " I love it!"}, {"start": 169.44, "end": 171.76, "text": " Four, after generating a flat,"}, {"start": 171.76, "end": 175.35999999999999, "text": " 2D image with stable diffusion, with other techniques,"}, {"start": 175.35999999999999, "end": 182.39999999999998, "text": " we can obtain a depth map which describes how four different objects are from the camera."}, {"start": 182.39999999999998, "end": 185.44, "text": " Now that is something that we've seen before."}, {"start": 185.44, "end": 189.12, "text": " However, now in Adobe's After Effects,"}, {"start": 189.12, "end": 193.76, "text": " look, we can create this little video with a parallax effect."}, {"start": 193.76, "end": 196.0, "text": " Absolutely incredible!"}, {"start": 196.0, "end": 198.88, "text": " Five, have a look at this catnite."}, {"start": 198.88, "end": 203.92, "text": " I love the eyes and all of these gorgeous details on the armor."}, {"start": 203.92, "end": 206.56, "text": " This image really tells the story,"}, {"start": 206.56, "end": 210.88, "text": " but what is even better is that not only the prompt is available,"}, {"start": 210.88, "end": 215.76, "text": " but also stable diffusion is a free and open source model,"}, {"start": 215.76, "end": 220.16, "text": " so we can pop the hood, reuse the same parameters as the author,"}, {"start": 220.16, "end": 224.0, "text": " and get a reproduction of the very same image."}, {"start": 224.0, "end": 230.07999999999998, "text": " And it is also much easier to edit it this way if we wish to see anything changed."}, {"start": 230.07999999999998, "end": 233.28, "text": " Six, if we are not the most skilled artist,"}, {"start": 233.28, "end": 236.4, "text": " we can draw a really rudimentary owl,"}, {"start": 236.4, "end": 241.12, "text": " handed to the AI, and it will draw the rest of this fine owl."}, {"start": 241.84, "end": 246.48000000000002, "text": " Seven, and if you think the drawing to image example was amazing,"}, {"start": 246.48000000000002, "end": 249.04000000000002, "text": " now hold onto your papers for this one."}, {"start": 249.04000000000002, "end": 252.64000000000001, "text": " This fellow scholar had a crazy idea."}, {"start": 252.64000000000001, "end": 258.16, "text": " Look, these screenshots of old Sierra video games were given to the algorithm,"}, {"start": 258.16, "end": 260.4, "text": " and there is no way, right?"}, {"start": 261.12, "end": 262.32, "text": " Well, let's see."}, {"start": 263.04, "end": 264.32, "text": " Oh wow!"}, {"start": 264.32, "end": 265.68, "text": " Look at that!"}, {"start": 265.68, "end": 268.8, "text": " The results are absolutely incredible."}, {"start": 268.8, "end": 275.12, "text": " I love how closely it follows the framing and the mood of the original photos."}, {"start": 275.12, "end": 279.84000000000003, "text": " I have to be honest, some of these feel good to go as they are."}, {"start": 279.84000000000003, "end": 281.84000000000003, "text": " What a time to be alive!"}, {"start": 283.04, "end": 285.28000000000003, "text": " Eight, with these new web apps,"}, {"start": 285.28000000000003, "end": 290.4, "text": " variant generation is now way easier and faster than before."}, {"start": 290.4, "end": 293.92, "text": " It is now as simple as dropping in an image."}, {"start": 293.92, "end": 298.40000000000003, "text": " By the way, a link to each of these is available in the video description,"}, {"start": 298.40000000000003, "end": 300.96000000000004, "text": " and their source code is available as well."}, {"start": 301.68, "end": 306.96000000000004, "text": " Nine, in an earlier episode, we had a look at how artists are already using"}, {"start": 306.96000000000004, "end": 311.20000000000005, "text": " Dolly II in the industry to make a photo of something"}, {"start": 311.20000000000005, "end": 315.52000000000004, "text": " and miraculously, extended almost infinitely."}, {"start": 315.52000000000004, "end": 321.44, "text": " This is called texture synthesis, and no seems anywhere to be seen."}, {"start": 321.44, "end": 325.6, "text": " And now, deep fellow scholars, seamless texture generation,"}, {"start": 325.6, "end": 328.96, "text": " is now possible, in stable diffusion too."}, {"start": 328.96, "end": 333.04, "text": " Not too many years ago, we needed not only a proper"}, {"start": 333.04, "end": 337.76, "text": " handcrafted computer graphics algorithm to even have a fighting chance"}, {"start": 337.76, "end": 339.52, "text": " to create something like this,"}, {"start": 339.52, "end": 343.68, "text": " but implementing a bunch of these techniques was also required"}, {"start": 343.68, "end": 348.0, "text": " because different algorithms did well on different examples."}, {"start": 348.0, "end": 351.52, "text": " And now, just one tool that can do it all."}, {"start": 351.52, "end": 352.88, "text": " How cool is that?"}, {"start": 354.0, "end": 358.72, "text": " And 10, stable diffusion itself is also being improved."}, {"start": 359.36, "end": 363.84, "text": " Oh yes, this new version adds super-resolution to the mix"}, {"start": 363.84, "end": 367.44, "text": " which enables us to synthesize even more details"}, {"start": 367.44, "end": 370.56, "text": " and even higher resolution images with it."}, {"start": 370.56, "end": 375.12, "text": " This thing is improving so quickly, we can barely keep up with it."}, {"start": 375.12, "end": 378.08, "text": " So, which one was your favorite?"}, {"start": 378.08, "end": 379.84000000000003, "text": " Let me know in the comments below."}, {"start": 379.84000000000003, "end": 383.52, "text": " And once again, this is my favorite type of work,"}, {"start": 383.52, "end": 386.48, "text": " which is free and open for everyone."}, {"start": 386.48, "end": 390.56, "text": " So, I would like you fellow scholars to also take out your digital"}, {"start": 390.56, "end": 394.48, "text": " wrenches and create something new and amazing."}, {"start": 394.48, "end": 396.56, "text": " Let the experiments begin."}, {"start": 396.56, "end": 399.68, "text": " This episode is brought to you by AnySkill."}, {"start": 399.68, "end": 404.08, "text": " The company behind Ray, the fastest growing open source framework"}, {"start": 404.08, "end": 408.08, "text": " for scalable AI and scalable Python."}, {"start": 408.08, "end": 410.47999999999996, "text": " Thousands of organizations use Ray,"}, {"start": 410.47999999999996, "end": 416.71999999999997, "text": " including open AI, Uber, Amazon, Spotify, Netflix, and more."}, {"start": 416.71999999999997, "end": 421.91999999999996, "text": " Ray, less developers, iterate faster by providing common infrastructure"}, {"start": 421.91999999999996, "end": 424.88, "text": " for scaling data in just and pre-processing,"}, {"start": 424.88, "end": 427.68, "text": " machine learning training, deep learning,"}, {"start": 427.68, "end": 431.59999999999997, "text": " hyperparameter tuning, model serving, and more."}, {"start": 431.6, "end": 437.28000000000003, "text": " All while integrating seamlessly with the rest of the machine learning ecosystem."}, {"start": 437.28000000000003, "end": 441.76000000000005, "text": " AnySkill is a fully managed Ray platform that allows teams"}, {"start": 441.76000000000005, "end": 445.76000000000005, "text": " to bring products to market faster by eliminating the need"}, {"start": 445.76000000000005, "end": 450.88, "text": " to manage infrastructure and by enabling new AI capabilities."}, {"start": 450.88, "end": 454.56, "text": " Ray and AnySkill can do recommendation systems,"}, {"start": 454.56, "end": 458.0, "text": " time series forecasting, document understanding,"}, {"start": 458.0, "end": 462.0, "text": " image processing, industrial automation, and more."}, {"start": 462.0, "end": 467.04, "text": " Go to AnySkill.com slash peepers and try it out today."}, {"start": 467.04, "end": 471.12, "text": " Our thanks to AnySkill for helping us make better videos for you."}, {"start": 471.12, "end": 473.44, "text": " Thanks for watching and for your generous support,"}, {"start": 473.44, "end": 488.88, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=k54cpsAbMn4
NVIDIA’s Amazing AI Clones Your Voice! 🤐
❤️ Check out Cohere and sign up for free today: https://cohere.ai/papers 📝 The paper "One TTS Alignment To Rule Them All" is available here: https://arxiv.org/abs/2108.10447 Early access: https://developer.nvidia.com/riva/studio-early-access 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #nvidia
And these fellow scholars, this is two minute papers with Dr. Karo Zsolnai-Fehir. Today, we are going to clone real human voices using an AI. How? Well, in an earlier Nvidia keynote, we had a look at Jensen Jr. an AI-powered virtual assistant of Nvidia CEO Jensen Huang. It could do this. Synthetic biology is about designing biological systems at multiple levels from individual molecules. Look at the face of that problem. I love how it also uses hand gestures that go really well with the explanation. These virtual AI assistants are going to appear everywhere to help you with your daily tasks. For instance, in your car, the promise is that they will be able to recognize you as the owner of the car, recommend shows nearby, and even drive you there. These omniverse avatars may also help us order our favorite burgers too. And we won't even need to push buttons on a touchscreen. We just need to say what we wish to eat, and the assistant will answer and take our orders, perhaps later, even in a familiar person's voice. How cool is that? And today, I am going to ask you to imagine a future where we can all have our toy jensen's or our own virtual assistants with our own voice. All of us. That sounds really cool. So is that in the far future? No, not at all. Today, I have the amazing opportunity to show you a bit more about the AI that makes this voice synthesis happen. And yes, you will hear things that are only available here at two minute papers. So what is all this about? This work is an AI-based technique that takes samples of our voice and can then clone it. Let's give it a try. This is Jamil from NVIDIA who was kind enough to record these voice snippets. Listen. I think they have to change that. Further details are expected later. Okay, so how much of this do we need to train the AI? Well, not the entire life recordings of the test subject, but much less. See 30 minutes of these voice samples. The technique asks us to see these sentences and analyzes the timbre, prosody and the rhythm of our voice, which is quite a task. And what can it do afterwards? Well, hold on to your papers because Jamil, not the real one, the clone AI Jamil has a scholarly message for you that I wrote. This is a voice line generated by an AI. Here if you fell as scholars are going to notice, the first law of papers says that research is a process. Do not look at where we are. Look at where we will be two more papers down the line. The third law of papers says that a bad researcher fails 100% of the time, while a good one only fails 99% of the time. Hence, what you see here is always just 1% of the work that was done. So what do you think about these voice lines? Are they real? Or are they synthesized? Well, of course, it is not perfect. I think that most of you are able to tell that these voice lines were synthesized, but my opinion is that these are easily good enough for a helpful human-like virtual assistant. And really, how cool is that? Cloning a human voice from half an hour worth of sound samples. What a time to be alive. Note that I have been really tough with them, these are some long scholarly sentences that would give a challenge to any of these algorithms. Now, if you are one of our earlier fellow scholars, you might remember that a few years ago, we talked about an AI technique called Tachotron, which can perform voice cloning from a really short few second-long sample. I have only heard of simple shorter sentences for that, and what is new here is that this new technique takes more data, but in return offers higher quality. But it doesn't stop there. It does even more. This new method is easier to train, and it also generalizes to more languages better. And these are already good enough to be used in real products, so I wonder what a technique like this will look like just two more papers down the line. Maybe my voice here on two-minute papers could be synthesized by an AI. Maybe it already is. Would that be a good thing? What do you think? Also a virtual caroy for you to read your daily dose of papers. Hmm, actually, since Nvidia has a great track record of putting these amazing tools into everyone's hands if you are interested, there is already an early access program where you can apply. I hope that some of you will be able to try this and let us know in the comments below. This episode has been supported by CoHear AI. CoHear builds large language models and makes them available through an API so businesses can add advanced language understanding to their system or app quickly with just one line of code. You can use your own data, whether it's text from customer service requests, legal contracts, or social media posts to create your own custom models to understand text or even generated. For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping, or it can be used to generate a list of possible sentences you can use for your product descriptions. Make sure to go to CoHear.ai slash papers or click the link in the video description and give it a try today. It's super easy to use. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.72, "text": " And these fellow scholars, this is two minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.72, "end": 10.16, "text": " Today, we are going to clone real human voices using an AI."}, {"start": 10.16, "end": 11.16, "text": " How?"}, {"start": 11.16, "end": 16.240000000000002, "text": " Well, in an earlier Nvidia keynote, we had a look at Jensen Jr."}, {"start": 16.240000000000002, "end": 21.52, "text": " an AI-powered virtual assistant of Nvidia CEO Jensen Huang."}, {"start": 21.52, "end": 23.32, "text": " It could do this."}, {"start": 23.32, "end": 29.32, "text": " Synthetic biology is about designing biological systems at multiple levels from individual molecules."}, {"start": 29.32, "end": 31.8, "text": " Look at the face of that problem."}, {"start": 31.8, "end": 38.480000000000004, "text": " I love how it also uses hand gestures that go really well with the explanation."}, {"start": 38.480000000000004, "end": 44.8, "text": " These virtual AI assistants are going to appear everywhere to help you with your daily tasks."}, {"start": 44.8, "end": 50.44, "text": " For instance, in your car, the promise is that they will be able to recognize you as the"}, {"start": 50.44, "end": 56.64, "text": " owner of the car, recommend shows nearby, and even drive you there."}, {"start": 56.64, "end": 62.28, "text": " These omniverse avatars may also help us order our favorite burgers too."}, {"start": 62.28, "end": 66.04, "text": " And we won't even need to push buttons on a touchscreen."}, {"start": 66.04, "end": 72.44, "text": " We just need to say what we wish to eat, and the assistant will answer and take our orders,"}, {"start": 72.44, "end": 76.44, "text": " perhaps later, even in a familiar person's voice."}, {"start": 76.44, "end": 78.2, "text": " How cool is that?"}, {"start": 78.2, "end": 84.28, "text": " And today, I am going to ask you to imagine a future where we can all have our toy jensen's"}, {"start": 84.28, "end": 88.64, "text": " or our own virtual assistants with our own voice."}, {"start": 88.64, "end": 89.8, "text": " All of us."}, {"start": 89.8, "end": 91.84, "text": " That sounds really cool."}, {"start": 91.84, "end": 94.24000000000001, "text": " So is that in the far future?"}, {"start": 94.24000000000001, "end": 96.28, "text": " No, not at all."}, {"start": 96.28, "end": 102.96000000000001, "text": " Today, I have the amazing opportunity to show you a bit more about the AI that makes this"}, {"start": 102.96000000000001, "end": 105.16, "text": " voice synthesis happen."}, {"start": 105.16, "end": 111.28, "text": " And yes, you will hear things that are only available here at two minute papers."}, {"start": 111.28, "end": 113.68, "text": " So what is all this about?"}, {"start": 113.68, "end": 120.28, "text": " This work is an AI-based technique that takes samples of our voice and can then clone it."}, {"start": 120.28, "end": 122.2, "text": " Let's give it a try."}, {"start": 122.2, "end": 127.2, "text": " This is Jamil from NVIDIA who was kind enough to record these voice snippets."}, {"start": 127.2, "end": 128.20000000000002, "text": " Listen."}, {"start": 128.20000000000002, "end": 130.48000000000002, "text": " I think they have to change that."}, {"start": 130.48000000000002, "end": 132.44, "text": " Further details are expected later."}, {"start": 132.44, "end": 137.24, "text": " Okay, so how much of this do we need to train the AI?"}, {"start": 137.24, "end": 143.44, "text": " Well, not the entire life recordings of the test subject, but much less."}, {"start": 143.44, "end": 146.44, "text": " See 30 minutes of these voice samples."}, {"start": 146.44, "end": 153.8, "text": " The technique asks us to see these sentences and analyzes the timbre, prosody and the rhythm"}, {"start": 153.8, "end": 157.2, "text": " of our voice, which is quite a task."}, {"start": 157.2, "end": 159.24, "text": " And what can it do afterwards?"}, {"start": 159.24, "end": 167.48, "text": " Well, hold on to your papers because Jamil, not the real one, the clone AI Jamil has a scholarly"}, {"start": 167.48, "end": 169.8, "text": " message for you that I wrote."}, {"start": 169.8, "end": 172.4, "text": " This is a voice line generated by an AI."}, {"start": 172.4, "end": 176.64000000000001, "text": " Here if you fell as scholars are going to notice, the first law of papers says that research"}, {"start": 176.64000000000001, "end": 177.64000000000001, "text": " is a process."}, {"start": 177.64000000000001, "end": 179.32, "text": " Do not look at where we are."}, {"start": 179.32, "end": 181.92000000000002, "text": " Look at where we will be two more papers down the line."}, {"start": 181.92000000000002, "end": 187.28, "text": " The third law of papers says that a bad researcher fails 100% of the time, while a good one only"}, {"start": 187.28, "end": 189.68, "text": " fails 99% of the time."}, {"start": 189.68, "end": 194.20000000000002, "text": " Hence, what you see here is always just 1% of the work that was done."}, {"start": 194.20000000000002, "end": 197.16, "text": " So what do you think about these voice lines?"}, {"start": 197.16, "end": 198.16, "text": " Are they real?"}, {"start": 198.16, "end": 199.92000000000002, "text": " Or are they synthesized?"}, {"start": 199.92, "end": 202.32, "text": " Well, of course, it is not perfect."}, {"start": 202.32, "end": 207.95999999999998, "text": " I think that most of you are able to tell that these voice lines were synthesized, but"}, {"start": 207.95999999999998, "end": 214.95999999999998, "text": " my opinion is that these are easily good enough for a helpful human-like virtual assistant."}, {"start": 214.95999999999998, "end": 217.88, "text": " And really, how cool is that?"}, {"start": 217.88, "end": 222.76, "text": " Cloning a human voice from half an hour worth of sound samples."}, {"start": 222.76, "end": 224.56, "text": " What a time to be alive."}, {"start": 224.56, "end": 230.48, "text": " Note that I have been really tough with them, these are some long scholarly sentences that"}, {"start": 230.48, "end": 233.64000000000001, "text": " would give a challenge to any of these algorithms."}, {"start": 233.64000000000001, "end": 240.12, "text": " Now, if you are one of our earlier fellow scholars, you might remember that a few years ago,"}, {"start": 240.12, "end": 246.28, "text": " we talked about an AI technique called Tachotron, which can perform voice cloning from a really"}, {"start": 246.28, "end": 248.52, "text": " short few second-long sample."}, {"start": 248.52, "end": 254.24, "text": " I have only heard of simple shorter sentences for that, and what is new here is that this"}, {"start": 254.24, "end": 260.68, "text": " new technique takes more data, but in return offers higher quality."}, {"start": 260.68, "end": 262.36, "text": " But it doesn't stop there."}, {"start": 262.36, "end": 264.16, "text": " It does even more."}, {"start": 264.16, "end": 270.52, "text": " This new method is easier to train, and it also generalizes to more languages better."}, {"start": 270.52, "end": 276.64, "text": " And these are already good enough to be used in real products, so I wonder what a technique"}, {"start": 276.64, "end": 280.56, "text": " like this will look like just two more papers down the line."}, {"start": 280.56, "end": 286.68, "text": " Maybe my voice here on two-minute papers could be synthesized by an AI."}, {"start": 286.68, "end": 288.72, "text": " Maybe it already is."}, {"start": 288.72, "end": 290.56, "text": " Would that be a good thing?"}, {"start": 290.56, "end": 291.56, "text": " What do you think?"}, {"start": 291.56, "end": 296.84000000000003, "text": " Also a virtual caroy for you to read your daily dose of papers."}, {"start": 296.84000000000003, "end": 303.04, "text": " Hmm, actually, since Nvidia has a great track record of putting these amazing tools into"}, {"start": 303.04, "end": 308.88, "text": " everyone's hands if you are interested, there is already an early access program where"}, {"start": 308.88, "end": 310.04, "text": " you can apply."}, {"start": 310.04, "end": 315.92, "text": " I hope that some of you will be able to try this and let us know in the comments below."}, {"start": 315.92, "end": 319.48, "text": " This episode has been supported by CoHear AI."}, {"start": 319.48, "end": 325.20000000000005, "text": " CoHear builds large language models and makes them available through an API so businesses"}, {"start": 325.20000000000005, "end": 332.0, "text": " can add advanced language understanding to their system or app quickly with just one line"}, {"start": 332.0, "end": 333.24, "text": " of code."}, {"start": 333.24, "end": 339.08000000000004, "text": " You can use your own data, whether it's text from customer service requests, legal contracts,"}, {"start": 339.08, "end": 347.08, "text": " or social media posts to create your own custom models to understand text or even generated."}, {"start": 347.08, "end": 352.2, "text": " For instance, it can be used to automatically determine whether your messages are about"}, {"start": 352.2, "end": 359.68, "text": " your business hours, returns, or shipping, or it can be used to generate a list of possible"}, {"start": 359.68, "end": 363.24, "text": " sentences you can use for your product descriptions."}, {"start": 363.24, "end": 368.96, "text": " Make sure to go to CoHear.ai slash papers or click the link in the video description"}, {"start": 368.96, "end": 371.47999999999996, "text": " and give it a try today."}, {"start": 371.47999999999996, "end": 372.84, "text": " It's super easy to use."}, {"start": 372.84, "end": 401.47999999999996, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=YxmAQiiHOkA
Google’s Video AI: Outrageously Good! 🤖
❤️ Check out Runway and try it for free here: https://runwayml.com/papers/ Use the code TWOMINUTE at checkout to get 10% off! 📝 The paper "High Definition Video Generation with Diffusion Models" is available here: https://imagen.research.google/video/ 📝 My paper "The flow from simulation to reality" with is available here for free: - Free version: https://rdcu.be/cWPfD - Orig. Nature link - https://www.nature.com/articles/s41567-022-01788-5 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Chapters: 0:00 - Teaser 0:15 - Text to image 0:37 - Text to video? 1:07 - It is really here! 1:45 - First example 2:48 - Second example 3:48 - Simulation or reality? 4:20 - Third example 5:08 - How long did this take? 5:48 - Failure cases 6:10 - More beautiful examples 6:21 - Looking under the hood 7:00 - Even more results Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #imagen
Dear Fellow Scholars, this is two-minute papers with Dr. Karojona Ifehir. I cannot believe that this paper is here. This is unbelievable. So, what is going on here? Yes, that's right. We know that these modern AI programs can paint images for us. Anything we wish, but today we are going to find out whether they can also do it with video. You see an example here... And here are these also made by an AI? Well, I'll tell you in a moment. So, video. That sounds impossible. That is so much harder. You see, videos require a much greater understanding of the world around us, so much more computation, and my favorite, temporal coherence. What is that? This means that a video is not just a set of images, but a series of images that have to relate to each other. If the AI does not do a good job at this, we get this flickering. So, as all of this is so hard, we will be able to do this maybe in 5-10 years or maybe never. Well, scientists at Google say not so fast. Now, hold onto your papers and have a look at this. Oh my goodness. Is it really here? I am utterly shocked, but the answer is yes. Yes it is. So now, let's have a look at 3 of my favorite examples, and then I'll tell you how much time this took. By the way, it is an almost unfathomably short time. Now one, the concept is the same. One simple text prompt goes in, for instance, a happy elephant wearing a birthday hat, walking under the sea, and this comes out. Wow, look at that. That is exactly what we were asking for in the prompt, plus as I am a light transport researcher by trade, I am also looking at the waves and the sky through the sea, which is absolutely beautiful. But it doesn't stop there. I also see every light transport researcher's dream there. Water, caustics, look at these gorgeous patterns. Now not even this technique is perfect, you see that temporal coherence is still subject to improvement, the video still flickers a tiny bit, and the task is also changing over time. However, this is incredible progress in so little time. Absolutely amazing. Two, in good two minute papers fashion, now let's ask for a bit of physics, a bunch of autumn leaves falling on a calm lake, forming the text, image and video. I love it. You see, in computer graphics, creating a simulation like this would take quite a bit of 3D modeling knowledge, and then we also have to fire up a fluid simulation. This does not seem to do a great deal of two way coupling, which means that the water hasn't affect on the leaves, you see it at acting this leaf here, but the leaves do not seem to have a huge effect on the water itself. This is possible with specialized computer graphics algorithms, like this one, and I bet it will also be possible with image and video version 2. Now I am super happy to see the reflections of the leaves appearing on the water. Good job little AI, and to think that this is just the first iteration of image and video, wow! By the way, if you wish to see how detailed a real physics simulation can be, make sure to check out my Nature Physics comment paper in the video description. Spoiler alert, the surprising answer is that they can be almost as detailed as real life. I was also very happy with this splash, and with this turquoise liquid movement in the glass too. Great simulations on version 1. I am so happy. Now 3, give me a teddy bear doing the dishes. Whoa! Is this real? Yes it is. It really feels like we are living inside a science fiction movie. Now it's not perfect, you can see that it is a little confused by the interaction of these objects, but if someone told me just a few weeks ago that an AI would be able to do this, I wouldn't have believed a word of it. And it not only has a really good understanding of reality, but it can also combine two previous concepts, a teddy bear and washing the dishes into something new. My goodness, I love it. Now while we look at some more beautiful results, we know that this is incredible progress in so little time. But how little exactly? Well, if you have been holding onto your paper so far, now squeeze that paper because OpenAI's Dolly2 text to image AI appeared in April 2022, Google's image also text to image appears one month later, May 2022, that is incredible. And get this only 5 months later by October 2022, we get this. An amazing text to video AI. I am out of words. Of course, it is not perfect. The hair of pets is typically still a problem and the complexity of this ship battle is still a little too much for it to shoulder, so version 1 is not going to make a new Pirates of the Caribbean movie, but maybe version 3, 2 more papers down the line, who knows. Ah yes, about that. The resolution of these videos is not too bad at all. It is in 720p, the literature likes to call it high definition. These are not 4k, like many of the shows you can watch on your TV today, but this quality for a first crack at the problem is simply stunning. And don't forget that first it synthesizes a low resolution video, then abscales it through super resolution, something Google is already really good at, so I would not be surprised for version 2 to easily go to full HD and maybe even beyond. As you see, the pace of progress in AI research is nothing short of amazing. And if like me, you are yearning for some more results, you can check out the papers website in the video description, where as of the making of this video, you get a random selection of results. Refresh it a couple times and see if you get something new. And if I could somehow get access to this technique, you bet that I'd be generating a ton more of these. Update I cannot make any promises, but good news we are already working on it. A video of a scholar reading exploding papers absolutely needs to happen. Make sure to subscribe and hit the bell icon to not miss it in case it happens. You really don't want to miss that. So from now on, if you are wondering what a wooden figurine surfing in outer space looks like, you need to look no further. What a time to be alive. So what do you think? Does this get your mind going? What would you use this for? Let me know in the comments below. This episode has been supported by Ranway, Professional and Magical AI video editing for everyone. And often here you follow scholars asking, OK, these AI techniques look great, but when do I get to use them? And the answer is, right now, Ranway is an amazing video editor that can do many of the things that you see here in this series. For instance, it can automatically replace the background behind the person. It can do in painting for videos amazingly well. It can do even text to image, image to image, you name it. No wonder it is used by editors, post production teams and creators at companies like CBS, Google, Vox and many other. Make sure to go to RanwayML.com, slash papers, sign up and try it for free today. And here comes the best part. Use the code 2 minute at checkout and get 10% off your first month. Thanks for watching and for your generous support. And I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karojona Ifehir."}, {"start": 4.64, "end": 8.0, "text": " I cannot believe that this paper is here."}, {"start": 8.0, "end": 10.24, "text": " This is unbelievable."}, {"start": 10.24, "end": 12.48, "text": " So, what is going on here?"}, {"start": 12.48, "end": 14.0, "text": " Yes, that's right."}, {"start": 14.0, "end": 18.48, "text": " We know that these modern AI programs can paint images for us."}, {"start": 18.48, "end": 26.080000000000002, "text": " Anything we wish, but today we are going to find out whether they can also do it with video."}, {"start": 26.080000000000002, "end": 28.0, "text": " You see an example here..."}, {"start": 28.0, "end": 33.12, "text": " And here are these also made by an AI?"}, {"start": 33.12, "end": 35.76, "text": " Well, I'll tell you in a moment."}, {"start": 35.76, "end": 37.6, "text": " So, video."}, {"start": 37.6, "end": 39.6, "text": " That sounds impossible."}, {"start": 39.6, "end": 41.76, "text": " That is so much harder."}, {"start": 41.76, "end": 48.8, "text": " You see, videos require a much greater understanding of the world around us, so much more computation,"}, {"start": 48.8, "end": 51.84, "text": " and my favorite, temporal coherence."}, {"start": 51.84, "end": 53.6, "text": " What is that?"}, {"start": 53.6, "end": 60.72, "text": " This means that a video is not just a set of images, but a series of images that have to relate"}, {"start": 60.72, "end": 61.72, "text": " to each other."}, {"start": 61.72, "end": 67.52, "text": " If the AI does not do a good job at this, we get this flickering."}, {"start": 67.52, "end": 76.0, "text": " So, as all of this is so hard, we will be able to do this maybe in 5-10 years or maybe"}, {"start": 76.0, "end": 77.0, "text": " never."}, {"start": 77.0, "end": 80.8, "text": " Well, scientists at Google say not so fast."}, {"start": 80.8, "end": 85.12, "text": " Now, hold onto your papers and have a look at this."}, {"start": 85.12, "end": 87.44, "text": " Oh my goodness."}, {"start": 87.44, "end": 88.64, "text": " Is it really here?"}, {"start": 88.64, "end": 92.39999999999999, "text": " I am utterly shocked, but the answer is yes."}, {"start": 92.39999999999999, "end": 93.6, "text": " Yes it is."}, {"start": 93.6, "end": 99.88, "text": " So now, let's have a look at 3 of my favorite examples, and then I'll tell you how much"}, {"start": 99.88, "end": 101.56, "text": " time this took."}, {"start": 101.56, "end": 105.92, "text": " By the way, it is an almost unfathomably short time."}, {"start": 105.92, "end": 109.4, "text": " Now one, the concept is the same."}, {"start": 109.4, "end": 115.68, "text": " One simple text prompt goes in, for instance, a happy elephant wearing a birthday hat,"}, {"start": 115.68, "end": 119.4, "text": " walking under the sea, and this comes out."}, {"start": 119.4, "end": 121.96000000000001, "text": " Wow, look at that."}, {"start": 121.96000000000001, "end": 128.04000000000002, "text": " That is exactly what we were asking for in the prompt, plus as I am a light transport researcher"}, {"start": 128.04000000000002, "end": 135.12, "text": " by trade, I am also looking at the waves and the sky through the sea, which is absolutely"}, {"start": 135.12, "end": 136.12, "text": " beautiful."}, {"start": 136.12, "end": 138.32, "text": " But it doesn't stop there."}, {"start": 138.32, "end": 142.84, "text": " I also see every light transport researcher's dream there."}, {"start": 142.84, "end": 146.56, "text": " Water, caustics, look at these gorgeous patterns."}, {"start": 146.56, "end": 152.48, "text": " Now not even this technique is perfect, you see that temporal coherence is still subject"}, {"start": 152.48, "end": 159.56, "text": " to improvement, the video still flickers a tiny bit, and the task is also changing over"}, {"start": 159.56, "end": 160.56, "text": " time."}, {"start": 160.56, "end": 165.48, "text": " However, this is incredible progress in so little time."}, {"start": 165.48, "end": 166.48, "text": " Absolutely amazing."}, {"start": 166.48, "end": 173.32, "text": " Two, in good two minute papers fashion, now let's ask for a bit of physics, a bunch of"}, {"start": 173.32, "end": 178.64, "text": " autumn leaves falling on a calm lake, forming the text, image and video."}, {"start": 178.64, "end": 180.07999999999998, "text": " I love it."}, {"start": 180.07999999999998, "end": 186.64, "text": " You see, in computer graphics, creating a simulation like this would take quite a bit of 3D modeling"}, {"start": 186.64, "end": 191.6, "text": " knowledge, and then we also have to fire up a fluid simulation."}, {"start": 191.6, "end": 196.56, "text": " This does not seem to do a great deal of two way coupling, which means that the water hasn't"}, {"start": 196.56, "end": 203.0, "text": " affect on the leaves, you see it at acting this leaf here, but the leaves do not seem to"}, {"start": 203.0, "end": 206.2, "text": " have a huge effect on the water itself."}, {"start": 206.2, "end": 211.68, "text": " This is possible with specialized computer graphics algorithms, like this one, and I bet"}, {"start": 211.68, "end": 216.28, "text": " it will also be possible with image and video version 2."}, {"start": 216.28, "end": 222.4, "text": " Now I am super happy to see the reflections of the leaves appearing on the water."}, {"start": 222.4, "end": 229.4, "text": " Good job little AI, and to think that this is just the first iteration of image and video,"}, {"start": 229.4, "end": 230.4, "text": " wow!"}, {"start": 230.4, "end": 235.96, "text": " By the way, if you wish to see how detailed a real physics simulation can be, make sure"}, {"start": 235.96, "end": 240.76, "text": " to check out my Nature Physics comment paper in the video description."}, {"start": 240.76, "end": 247.0, "text": " Spoiler alert, the surprising answer is that they can be almost as detailed as real"}, {"start": 247.0, "end": 248.0, "text": " life."}, {"start": 248.0, "end": 254.12, "text": " I was also very happy with this splash, and with this turquoise liquid movement in the glass"}, {"start": 254.12, "end": 255.12, "text": " too."}, {"start": 255.12, "end": 257.68, "text": " Great simulations on version 1."}, {"start": 257.68, "end": 259.71999999999997, "text": " I am so happy."}, {"start": 259.71999999999997, "end": 264.24, "text": " Now 3, give me a teddy bear doing the dishes."}, {"start": 264.24, "end": 265.24, "text": " Whoa!"}, {"start": 265.24, "end": 266.56, "text": " Is this real?"}, {"start": 266.56, "end": 267.56, "text": " Yes it is."}, {"start": 267.56, "end": 272.52, "text": " It really feels like we are living inside a science fiction movie."}, {"start": 272.52, "end": 278.4, "text": " Now it's not perfect, you can see that it is a little confused by the interaction of"}, {"start": 278.4, "end": 284.76, "text": " these objects, but if someone told me just a few weeks ago that an AI would be able to"}, {"start": 284.76, "end": 288.8, "text": " do this, I wouldn't have believed a word of it."}, {"start": 288.8, "end": 295.68, "text": " And it not only has a really good understanding of reality, but it can also combine two previous"}, {"start": 295.68, "end": 301.6, "text": " concepts, a teddy bear and washing the dishes into something new."}, {"start": 301.6, "end": 304.72, "text": " My goodness, I love it."}, {"start": 304.72, "end": 311.08, "text": " Now while we look at some more beautiful results, we know that this is incredible progress"}, {"start": 311.08, "end": 313.4, "text": " in so little time."}, {"start": 313.4, "end": 315.24, "text": " But how little exactly?"}, {"start": 315.24, "end": 321.2, "text": " Well, if you have been holding onto your paper so far, now squeeze that paper because"}, {"start": 321.2, "end": 329.76, "text": " OpenAI's Dolly2 text to image AI appeared in April 2022, Google's image also text to"}, {"start": 329.76, "end": 336.52, "text": " image appears one month later, May 2022, that is incredible."}, {"start": 336.52, "end": 343.76, "text": " And get this only 5 months later by October 2022, we get this."}, {"start": 343.76, "end": 346.88, "text": " An amazing text to video AI."}, {"start": 346.88, "end": 349.2, "text": " I am out of words."}, {"start": 349.2, "end": 351.12, "text": " Of course, it is not perfect."}, {"start": 351.12, "end": 357.84, "text": " The hair of pets is typically still a problem and the complexity of this ship battle is still"}, {"start": 357.84, "end": 363.56, "text": " a little too much for it to shoulder, so version 1 is not going to make a new Pirates of the"}, {"start": 363.56, "end": 370.24, "text": " Caribbean movie, but maybe version 3, 2 more papers down the line, who knows."}, {"start": 370.24, "end": 372.15999999999997, "text": " Ah yes, about that."}, {"start": 372.15999999999997, "end": 375.68, "text": " The resolution of these videos is not too bad at all."}, {"start": 375.68, "end": 381.32, "text": " It is in 720p, the literature likes to call it high definition."}, {"start": 381.32, "end": 387.32, "text": " These are not 4k, like many of the shows you can watch on your TV today, but this quality"}, {"start": 387.32, "end": 391.36, "text": " for a first crack at the problem is simply stunning."}, {"start": 391.36, "end": 397.96000000000004, "text": " And don't forget that first it synthesizes a low resolution video, then abscales it through"}, {"start": 397.96000000000004, "end": 404.24, "text": " super resolution, something Google is already really good at, so I would not be surprised"}, {"start": 404.24, "end": 410.72, "text": " for version 2 to easily go to full HD and maybe even beyond."}, {"start": 410.72, "end": 415.92, "text": " As you see, the pace of progress in AI research is nothing short of amazing."}, {"start": 415.92, "end": 421.24, "text": " And if like me, you are yearning for some more results, you can check out the papers website"}, {"start": 421.24, "end": 427.0, "text": " in the video description, where as of the making of this video, you get a random selection"}, {"start": 427.0, "end": 428.0, "text": " of results."}, {"start": 428.0, "end": 432.84000000000003, "text": " Refresh it a couple times and see if you get something new."}, {"start": 432.84, "end": 438.08, "text": " And if I could somehow get access to this technique, you bet that I'd be generating"}, {"start": 438.08, "end": 439.88, "text": " a ton more of these."}, {"start": 439.88, "end": 445.79999999999995, "text": " Update I cannot make any promises, but good news we are already working on it."}, {"start": 445.79999999999995, "end": 451.84, "text": " A video of a scholar reading exploding papers absolutely needs to happen."}, {"start": 451.84, "end": 456.91999999999996, "text": " Make sure to subscribe and hit the bell icon to not miss it in case it happens."}, {"start": 456.91999999999996, "end": 458.84, "text": " You really don't want to miss that."}, {"start": 458.84, "end": 464.79999999999995, "text": " So from now on, if you are wondering what a wooden figurine surfing in outer space looks"}, {"start": 464.79999999999995, "end": 467.79999999999995, "text": " like, you need to look no further."}, {"start": 467.79999999999995, "end": 469.59999999999997, "text": " What a time to be alive."}, {"start": 469.59999999999997, "end": 471.35999999999996, "text": " So what do you think?"}, {"start": 471.35999999999996, "end": 472.96, "text": " Does this get your mind going?"}, {"start": 472.96, "end": 474.84, "text": " What would you use this for?"}, {"start": 474.84, "end": 476.64, "text": " Let me know in the comments below."}, {"start": 476.64, "end": 483.12, "text": " This episode has been supported by Ranway, Professional and Magical AI video editing for"}, {"start": 483.12, "end": 484.12, "text": " everyone."}, {"start": 484.12, "end": 490.72, "text": " And often here you follow scholars asking, OK, these AI techniques look great, but when"}, {"start": 490.72, "end": 492.44, "text": " do I get to use them?"}, {"start": 492.44, "end": 498.44, "text": " And the answer is, right now, Ranway is an amazing video editor that can do many of the"}, {"start": 498.44, "end": 501.32, "text": " things that you see here in this series."}, {"start": 501.32, "end": 506.48, "text": " For instance, it can automatically replace the background behind the person."}, {"start": 506.48, "end": 510.84000000000003, "text": " It can do in painting for videos amazingly well."}, {"start": 510.84, "end": 515.64, "text": " It can do even text to image, image to image, you name it."}, {"start": 515.64, "end": 523.0799999999999, "text": " No wonder it is used by editors, post production teams and creators at companies like CBS, Google,"}, {"start": 523.0799999999999, "end": 525.28, "text": " Vox and many other."}, {"start": 525.28, "end": 532.68, "text": " Make sure to go to RanwayML.com, slash papers, sign up and try it for free today."}, {"start": 532.68, "end": 534.76, "text": " And here comes the best part."}, {"start": 534.76, "end": 540.24, "text": " Use the code 2 minute at checkout and get 10% off your first month."}, {"start": 540.24, "end": 542.24, "text": " Thanks for watching and for your generous support."}, {"start": 542.24, "end": 571.24, "text": " And I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Ybk8hxKeMYQ
Google’s New Robot: Don't Mess With This Guy! 🤖
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Inner Monologue: Embodied Reasoning through Planning with Language Models" is available here: https://innermonologue.github.io/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #google #ai
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to give a really hard time to a robot. Look at this and this. You see, Google's robots are getting smarter and smarter every year. For instance, we talked about this robot assistant where we can say, please help me, I have spilled the coke and then it creates a cunning plan. It tries to throw out the coke and, well, almost. Then it brings us a sponge. And if we've been reading research papers all day and we feel a little tired, if we tell it, it can bring us a water bottle, hand it to us, and can even bring us an apple. A different robot of theirs has learned to drive itself, also understands English, and can thus take instructions from us and find its way around. And these instructions can be really nasty, like this one. However, there is one thing that neither of these previous robots can do, but this new one can. Let's see what it is. First, let's ask for a soda again, and once again, it comes up with a cunning plan. Go to the coke can, pick it up, and checkmate little robot. Now comes the coolest part. It realizes that it has failed, and now change of plans. It says the following. There is no coke, but there is an orange soda. Is that okay with us? No, no, we are going to be a little picky here, and say no, and ask for a lime soda instead. The robot probably thinks, oh goodness, a change of plans again, let's look for that lime soda. And it is, of course, really far away to give it a hard time, so let's see what it does. Wow, look at that. It found it in the other end of the room, recognized that this is indeed the soda, picks it up, and we are done. So cool, the amenities at the Google headquarters are truly next level. I love it. This was super fun, so you know what? Let's try another task. Let's ask it to put the coke can into the top drawer. Will it succeed? Well, look at that. The human operator cannot wait to mess with this little robot, and, aha, sure enough, that drawer is not opening. So is this a problem? Well, the robot recognized that this was not successful, but now the environment has changed, the pesky human is gone, so it tries again. And this time the drawer opens, in goes the coke can, it holds the drawer with both fingers, and now I just need a gentle little push and bravo, good job. We tried to confuse this robot by messing up the environment, and it did really well, but now get this. What if we mess with its brain instead? How? Well, check this out. Let's ask it to throw away the snack on the counter. It asks which one and to which our answer is, of course, not the apple. No, no, let's mess with it a little more. Our answer is that we changed our mind, so please throw away something on the table instead. Okay, now as it approaches the table, let's try to mess with it again. You know what? Never mind, just finish the previous task instead. And it finally goes there and grabs the apple. We tried really hard, but we could not mess with this guy. So cool. So, a new robot that understands English can move around, make a plan, and most importantly, it can come up with a plan B when plan A fails. Wow, a little personal assistant. The pace of progress in AR research is nothing short of amazing. What a time to be alive. So what would you use this for? Let me know in the comments below. If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation required. Just sign up and launch an instance and hold on to your papers because with Lambda GPU cloud you can get on demand A 100 instances for $1.10 per hour versus $4.10 per hour with AWS. That's 73% savings. Did I mention they also offer persistent storage? So join researchers at organizations like Apple, MIT and Caltech in using Lambda cloud instances, workstations or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.6000000000000005, "end": 9.64, "text": " Today we are going to give a really hard time to a robot."}, {"start": 9.64, "end": 12.76, "text": " Look at this and this."}, {"start": 12.76, "end": 17.96, "text": " You see, Google's robots are getting smarter and smarter every year."}, {"start": 17.96, "end": 23.78, "text": " For instance, we talked about this robot assistant where we can say, please help me, I have"}, {"start": 23.78, "end": 28.400000000000002, "text": " spilled the coke and then it creates a cunning plan."}, {"start": 28.4, "end": 32.96, "text": " It tries to throw out the coke and, well, almost."}, {"start": 32.96, "end": 37.08, "text": " Then it brings us a sponge."}, {"start": 37.08, "end": 43.239999999999995, "text": " And if we've been reading research papers all day and we feel a little tired, if we"}, {"start": 43.239999999999995, "end": 50.16, "text": " tell it, it can bring us a water bottle, hand it to us, and can even bring us an apple."}, {"start": 50.16, "end": 56.76, "text": " A different robot of theirs has learned to drive itself, also understands English, and"}, {"start": 56.76, "end": 62.04, "text": " can thus take instructions from us and find its way around."}, {"start": 62.04, "end": 66.48, "text": " And these instructions can be really nasty, like this one."}, {"start": 66.48, "end": 72.8, "text": " However, there is one thing that neither of these previous robots can do, but this new"}, {"start": 72.8, "end": 73.8, "text": " one can."}, {"start": 73.8, "end": 75.52, "text": " Let's see what it is."}, {"start": 75.52, "end": 83.12, "text": " First, let's ask for a soda again, and once again, it comes up with a cunning plan."}, {"start": 83.12, "end": 89.32000000000001, "text": " Go to the coke can, pick it up, and checkmate little robot."}, {"start": 89.32000000000001, "end": 91.52000000000001, "text": " Now comes the coolest part."}, {"start": 91.52000000000001, "end": 96.04, "text": " It realizes that it has failed, and now change of plans."}, {"start": 96.04, "end": 97.84, "text": " It says the following."}, {"start": 97.84, "end": 102.44, "text": " There is no coke, but there is an orange soda."}, {"start": 102.44, "end": 104.08000000000001, "text": " Is that okay with us?"}, {"start": 104.08000000000001, "end": 112.0, "text": " No, no, we are going to be a little picky here, and say no, and ask for a lime soda instead."}, {"start": 112.0, "end": 118.2, "text": " The robot probably thinks, oh goodness, a change of plans again, let's look for that lime"}, {"start": 118.2, "end": 119.2, "text": " soda."}, {"start": 119.2, "end": 126.32, "text": " And it is, of course, really far away to give it a hard time, so let's see what it does."}, {"start": 126.32, "end": 129.12, "text": " Wow, look at that."}, {"start": 129.12, "end": 136.12, "text": " It found it in the other end of the room, recognized that this is indeed the soda, picks it up,"}, {"start": 136.12, "end": 138.12, "text": " and we are done."}, {"start": 138.12, "end": 143.76, "text": " So cool, the amenities at the Google headquarters are truly next level."}, {"start": 143.76, "end": 145.56, "text": " I love it."}, {"start": 145.56, "end": 148.68, "text": " This was super fun, so you know what?"}, {"start": 148.68, "end": 150.52, "text": " Let's try another task."}, {"start": 150.52, "end": 154.72, "text": " Let's ask it to put the coke can into the top drawer."}, {"start": 154.72, "end": 156.16, "text": " Will it succeed?"}, {"start": 156.16, "end": 158.36, "text": " Well, look at that."}, {"start": 158.36, "end": 166.16, "text": " The human operator cannot wait to mess with this little robot, and, aha, sure enough,"}, {"start": 166.16, "end": 168.72, "text": " that drawer is not opening."}, {"start": 168.72, "end": 170.48, "text": " So is this a problem?"}, {"start": 170.48, "end": 178.0, "text": " Well, the robot recognized that this was not successful, but now the environment has changed,"}, {"start": 178.0, "end": 182.32, "text": " the pesky human is gone, so it tries again."}, {"start": 182.32, "end": 190.0, "text": " And this time the drawer opens, in goes the coke can, it holds the drawer with both fingers,"}, {"start": 190.0, "end": 197.0, "text": " and now I just need a gentle little push and bravo, good job."}, {"start": 197.0, "end": 203.12, "text": " We tried to confuse this robot by messing up the environment, and it did really well,"}, {"start": 203.12, "end": 205.12, "text": " but now get this."}, {"start": 205.12, "end": 207.96, "text": " What if we mess with its brain instead?"}, {"start": 207.96, "end": 208.96, "text": " How?"}, {"start": 208.96, "end": 210.6, "text": " Well, check this out."}, {"start": 210.6, "end": 214.2, "text": " Let's ask it to throw away the snack on the counter."}, {"start": 214.2, "end": 220.16, "text": " It asks which one and to which our answer is, of course, not the apple."}, {"start": 220.16, "end": 223.28, "text": " No, no, let's mess with it a little more."}, {"start": 223.28, "end": 230.51999999999998, "text": " Our answer is that we changed our mind, so please throw away something on the table instead."}, {"start": 230.51999999999998, "end": 235.88, "text": " Okay, now as it approaches the table, let's try to mess with it again."}, {"start": 235.88, "end": 236.88, "text": " You know what?"}, {"start": 236.88, "end": 240.64, "text": " Never mind, just finish the previous task instead."}, {"start": 240.64, "end": 244.12, "text": " And it finally goes there and grabs the apple."}, {"start": 244.12, "end": 248.32, "text": " We tried really hard, but we could not mess with this guy."}, {"start": 248.32, "end": 249.32, "text": " So cool."}, {"start": 249.32, "end": 257.2, "text": " So, a new robot that understands English can move around, make a plan, and most importantly,"}, {"start": 257.2, "end": 261.72, "text": " it can come up with a plan B when plan A fails."}, {"start": 261.72, "end": 265.04, "text": " Wow, a little personal assistant."}, {"start": 265.04, "end": 269.4, "text": " The pace of progress in AR research is nothing short of amazing."}, {"start": 269.4, "end": 271.28000000000003, "text": " What a time to be alive."}, {"start": 271.28000000000003, "end": 273.84000000000003, "text": " So what would you use this for?"}, {"start": 273.84, "end": 275.47999999999996, "text": " Let me know in the comments below."}, {"start": 275.47999999999996, "end": 282.44, "text": " If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices"}, {"start": 282.44, "end": 285.91999999999996, "text": " in the world for GPU cloud compute."}, {"start": 285.91999999999996, "end": 288.84, "text": " No commitments or negotiation required."}, {"start": 288.84, "end": 295.44, "text": " Just sign up and launch an instance and hold on to your papers because with Lambda GPU"}, {"start": 295.44, "end": 304.84, "text": " cloud you can get on demand A 100 instances for $1.10 per hour versus $4.10 per hour with"}, {"start": 304.84, "end": 305.84, "text": " AWS."}, {"start": 305.84, "end": 309.04, "text": " That's 73% savings."}, {"start": 309.04, "end": 312.52, "text": " Did I mention they also offer persistent storage?"}, {"start": 312.52, "end": 320.76, "text": " So join researchers at organizations like Apple, MIT and Caltech in using Lambda cloud instances,"}, {"start": 320.76, "end": 323.08, "text": " workstations or servers."}, {"start": 323.08, "end": 330.08, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 330.08, "end": 331.08, "text": " today."}, {"start": 331.08, "end": 359.08, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=VxbTiuabW0k
Intel’s New AI: Amazing Ray Tracing Results! ☀️
❤️ Check out Weights & Biases and say hi in their community forum here: https://wandb.me/paperforum 📝 The paper "Temporally Stable Real-Time Joint Neural Denoising and Supersampling" is available here: https://www.intel.com/content/www/us/en/developer/articles/technical/temporally-stable-denoising-and-supersampling.html 📝 Our earlier paper with the spheres scene that took 3 weeks: https://users.cg.tuwien.ac.at/zsolnai/gfx/adaptive_metropolis/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karojona Ifehir. This is my happy episode. Why is that? Well, of course, because today we are talking about light transport simulations and in particular Intel's amazing new technique that can take this and make it into this. Wow, it can also take this and make it into this. My goodness, this is amazing. But wait a second, what is going on here? What are these noisy videos for? And why? Well, if we wish to create a truly gorgeous photorealistic scene in computer graphics, we usually reach out to a light transport simulation algorithm and then this happens. Oh no, we have noise. Tons of it. But why? Well, during the simulation we have to shoot millions and millions of light rays into the scene to estimate how much light is bouncing around and before we have simulated enough rays, the inaccuracies in our estimations show up as noise in these images. And this clears up over time, but it may take a long time. How do we know that? Well, have a look at the reference simulation footage for this paper. See, there is still some noise in here. I am sure this would clean up over time, but no one said that it would do so quickly. A video like this might still require hours to days to compute. For instance, this is from a previous paper that took three weeks to finish and it ran on multiple computers at the same time. So is all whole past for these beautiful photorealistic simulations? Well, not quite. Instead of waiting for hours or days, what if I told you that we can just wait for a small fraction of a second, about 10 milliseconds, and it will produce this. And then run a previous noise filtering technique that is specifically tailored for light transport simulations. And what do we get? Probably not much, right? I can barely tell what I should be seeing here. So let's see a previous method. Whoa, that is way better. I was barely able to guess what these are, but now we know. Great things. Great. So we don't have to wait for hours, today's for a simulated world to come alive in a video like this, just a few milliseconds, at least for the simulation, we don't know how long the noise filtering takes. And now hold on to your papers, because this was not today's papers result, I hope this one can do even better. And look, instead it can do this. Wow. This is so much better. And the result of the reference simulation for comparison, this is the one that takes forever to compute. Let's also have a look at the videos and compare them. This is the noisy input simulation. Wow. This is going to be hard. Now, the previous method. Yes, this is clearly better, but there is a problem. Do you see the problem? Oh yes, it smoothed out the noise, but it smoothed out the details too. Hence a lot of them are lost. So let's see what Intel's new method can do instead. Now we're talking. So much better. I absolutely love it. It is still not as sharp as the reference simulation, however, in some regions, depending on your taste, it might even be more pleasing to the eye than this reference. And it gets better. This technique does not only denoising, but upsampling too. This means that it is able to create a higher resolution image with more pixels than the input footage. Now get ready, one more comparison, and I'll tell you how long the noise filtering took. I wonder what it will do with this noisy mess. I have no idea what is going on here. And neither does this previous technique. And this is not some ancient technique. This previous method is the neural bilateral grid, a learning based method from just two years ago. And now have a look at this. My goodness, is this really possible? So much progress, just one more paper down the line. I absolutely love it. So good. So how long do we have to wait for an image like this? Still hours to days? Well, not at all. This orans not only in real time, it runs faster than real time. Yes, that means about 200 frames per second for the new noise filtering step. And remember, the light simulation part typically takes 4 to 12 milliseconds on these scenes. This is the noisy mess that we get. And just 5 milliseconds later we get this. I cannot believe it. Bravo. So real time light transport simulations from now on. Oh yes, sign me up right now. What a time to be alive. So what do you think? Let me know in the comments below. This video has been supported by weights and biases. Look at this. They have a great community forum that aims to make you the best machine learning engineer you can be. You see, I always get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. In this forum, you can share your projects, ask for advice, look for collaborators and more. Make sure to visit wmb.me slash paper forum and say hi or just click the link in the video description. Our thanks to weights and biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karojona Ifehir."}, {"start": 5.0, "end": 7.0, "text": " This is my happy episode."}, {"start": 7.0, "end": 8.0, "text": " Why is that?"}, {"start": 8.0, "end": 16.0, "text": " Well, of course, because today we are talking about light transport simulations and in particular"}, {"start": 16.0, "end": 23.0, "text": " Intel's amazing new technique that can take this and make it into this."}, {"start": 23.0, "end": 29.0, "text": " Wow, it can also take this and make it into this."}, {"start": 29.0, "end": 33.0, "text": " My goodness, this is amazing."}, {"start": 33.0, "end": 36.0, "text": " But wait a second, what is going on here?"}, {"start": 36.0, "end": 38.0, "text": " What are these noisy videos for?"}, {"start": 38.0, "end": 39.0, "text": " And why?"}, {"start": 39.0, "end": 46.5, "text": " Well, if we wish to create a truly gorgeous photorealistic scene in computer graphics, we usually"}, {"start": 46.5, "end": 52.0, "text": " reach out to a light transport simulation algorithm and then this happens."}, {"start": 52.0, "end": 55.0, "text": " Oh no, we have noise."}, {"start": 55.0, "end": 57.0, "text": " Tons of it."}, {"start": 57.0, "end": 58.0, "text": " But why?"}, {"start": 58.0, "end": 64.1, "text": " Well, during the simulation we have to shoot millions and millions of light rays into the"}, {"start": 64.1, "end": 70.62, "text": " scene to estimate how much light is bouncing around and before we have simulated enough"}, {"start": 70.62, "end": 76.96000000000001, "text": " rays, the inaccuracies in our estimations show up as noise in these images."}, {"start": 76.96000000000001, "end": 82.44, "text": " And this clears up over time, but it may take a long time."}, {"start": 82.44, "end": 83.96000000000001, "text": " How do we know that?"}, {"start": 83.96, "end": 87.96, "text": " Well, have a look at the reference simulation footage for this paper."}, {"start": 87.96, "end": 91.88, "text": " See, there is still some noise in here."}, {"start": 91.88, "end": 98.67999999999999, "text": " I am sure this would clean up over time, but no one said that it would do so quickly."}, {"start": 98.67999999999999, "end": 103.72, "text": " A video like this might still require hours to days to compute."}, {"start": 103.72, "end": 110.72, "text": " For instance, this is from a previous paper that took three weeks to finish and it ran"}, {"start": 110.72, "end": 114.12, "text": " on multiple computers at the same time."}, {"start": 114.12, "end": 119.28, "text": " So is all whole past for these beautiful photorealistic simulations?"}, {"start": 119.28, "end": 121.52, "text": " Well, not quite."}, {"start": 121.52, "end": 127.48, "text": " Instead of waiting for hours or days, what if I told you that we can just wait for a small"}, {"start": 127.48, "end": 134.12, "text": " fraction of a second, about 10 milliseconds, and it will produce this."}, {"start": 134.12, "end": 140.28, "text": " And then run a previous noise filtering technique that is specifically tailored for light"}, {"start": 140.28, "end": 142.24, "text": " transport simulations."}, {"start": 142.24, "end": 143.92, "text": " And what do we get?"}, {"start": 143.92, "end": 146.16, "text": " Probably not much, right?"}, {"start": 146.16, "end": 149.44, "text": " I can barely tell what I should be seeing here."}, {"start": 149.44, "end": 152.08, "text": " So let's see a previous method."}, {"start": 152.08, "end": 155.56, "text": " Whoa, that is way better."}, {"start": 155.56, "end": 159.76, "text": " I was barely able to guess what these are, but now we know."}, {"start": 159.76, "end": 160.76, "text": " Great things."}, {"start": 160.76, "end": 161.76, "text": " Great."}, {"start": 161.76, "end": 167.64, "text": " So we don't have to wait for hours, today's for a simulated world to come alive in a video"}, {"start": 167.64, "end": 174.55999999999997, "text": " like this, just a few milliseconds, at least for the simulation, we don't know how long"}, {"start": 174.55999999999997, "end": 176.44, "text": " the noise filtering takes."}, {"start": 176.44, "end": 182.2, "text": " And now hold on to your papers, because this was not today's papers result, I hope this"}, {"start": 182.2, "end": 185.07999999999998, "text": " one can do even better."}, {"start": 185.07999999999998, "end": 188.48, "text": " And look, instead it can do this."}, {"start": 188.48, "end": 190.2, "text": " Wow."}, {"start": 190.2, "end": 192.64, "text": " This is so much better."}, {"start": 192.64, "end": 197.35999999999999, "text": " And the result of the reference simulation for comparison, this is the one that takes"}, {"start": 197.36, "end": 199.0, "text": " forever to compute."}, {"start": 199.0, "end": 203.32000000000002, "text": " Let's also have a look at the videos and compare them."}, {"start": 203.32000000000002, "end": 205.88000000000002, "text": " This is the noisy input simulation."}, {"start": 205.88000000000002, "end": 207.4, "text": " Wow."}, {"start": 207.4, "end": 209.16000000000003, "text": " This is going to be hard."}, {"start": 209.16000000000003, "end": 211.20000000000002, "text": " Now, the previous method."}, {"start": 211.20000000000002, "end": 216.32000000000002, "text": " Yes, this is clearly better, but there is a problem."}, {"start": 216.32000000000002, "end": 217.72000000000003, "text": " Do you see the problem?"}, {"start": 217.72000000000003, "end": 224.44000000000003, "text": " Oh yes, it smoothed out the noise, but it smoothed out the details too."}, {"start": 224.44000000000003, "end": 227.04000000000002, "text": " Hence a lot of them are lost."}, {"start": 227.04, "end": 232.23999999999998, "text": " So let's see what Intel's new method can do instead."}, {"start": 232.23999999999998, "end": 233.79999999999998, "text": " Now we're talking."}, {"start": 233.79999999999998, "end": 234.88, "text": " So much better."}, {"start": 234.88, "end": 237.64, "text": " I absolutely love it."}, {"start": 237.64, "end": 243.32, "text": " It is still not as sharp as the reference simulation, however, in some regions, depending"}, {"start": 243.32, "end": 248.88, "text": " on your taste, it might even be more pleasing to the eye than this reference."}, {"start": 248.88, "end": 250.28, "text": " And it gets better."}, {"start": 250.28, "end": 255.39999999999998, "text": " This technique does not only denoising, but upsampling too."}, {"start": 255.4, "end": 261.08, "text": " This means that it is able to create a higher resolution image with more pixels than the"}, {"start": 261.08, "end": 262.48, "text": " input footage."}, {"start": 262.48, "end": 269.36, "text": " Now get ready, one more comparison, and I'll tell you how long the noise filtering took."}, {"start": 269.36, "end": 273.2, "text": " I wonder what it will do with this noisy mess."}, {"start": 273.2, "end": 276.56, "text": " I have no idea what is going on here."}, {"start": 276.56, "end": 279.52, "text": " And neither does this previous technique."}, {"start": 279.52, "end": 282.0, "text": " And this is not some ancient technique."}, {"start": 282.0, "end": 287.56, "text": " This previous method is the neural bilateral grid, a learning based method from just two"}, {"start": 287.56, "end": 288.76, "text": " years ago."}, {"start": 288.76, "end": 291.68, "text": " And now have a look at this."}, {"start": 291.68, "end": 294.84, "text": " My goodness, is this really possible?"}, {"start": 294.84, "end": 298.48, "text": " So much progress, just one more paper down the line."}, {"start": 298.48, "end": 301.36, "text": " I absolutely love it."}, {"start": 301.36, "end": 302.36, "text": " So good."}, {"start": 302.36, "end": 306.36, "text": " So how long do we have to wait for an image like this?"}, {"start": 306.36, "end": 308.28, "text": " Still hours to days?"}, {"start": 308.28, "end": 310.08, "text": " Well, not at all."}, {"start": 310.08, "end": 316.28, "text": " This orans not only in real time, it runs faster than real time."}, {"start": 316.28, "end": 322.64, "text": " Yes, that means about 200 frames per second for the new noise filtering step."}, {"start": 322.64, "end": 329.56, "text": " And remember, the light simulation part typically takes 4 to 12 milliseconds on these scenes."}, {"start": 329.56, "end": 332.0, "text": " This is the noisy mess that we get."}, {"start": 332.0, "end": 336.15999999999997, "text": " And just 5 milliseconds later we get this."}, {"start": 336.15999999999997, "end": 338.52, "text": " I cannot believe it."}, {"start": 338.52, "end": 339.52, "text": " Bravo."}, {"start": 339.52, "end": 343.2, "text": " So real time light transport simulations from now on."}, {"start": 343.2, "end": 346.4, "text": " Oh yes, sign me up right now."}, {"start": 346.4, "end": 348.35999999999996, "text": " What a time to be alive."}, {"start": 348.35999999999996, "end": 350.15999999999997, "text": " So what do you think?"}, {"start": 350.15999999999997, "end": 352.03999999999996, "text": " Let me know in the comments below."}, {"start": 352.03999999999996, "end": 355.71999999999997, "text": " This video has been supported by weights and biases."}, {"start": 355.71999999999997, "end": 356.71999999999997, "text": " Look at this."}, {"start": 356.71999999999997, "end": 362.24, "text": " They have a great community forum that aims to make you the best machine learning engineer"}, {"start": 362.24, "end": 363.32, "text": " you can be."}, {"start": 363.32, "end": 368.24, "text": " You see, I always get messages from you fellow scholars telling me that you have been"}, {"start": 368.24, "end": 373.48, "text": " inspired by the series, but don't really know where to start."}, {"start": 373.48, "end": 374.8, "text": " And here it is."}, {"start": 374.8, "end": 380.48, "text": " In this forum, you can share your projects, ask for advice, look for collaborators and"}, {"start": 380.48, "end": 381.48, "text": " more."}, {"start": 381.48, "end": 389.6, "text": " Make sure to visit wmb.me slash paper forum and say hi or just click the link in the video"}, {"start": 389.6, "end": 390.6, "text": " description."}, {"start": 390.6, "end": 395.76, "text": " Our thanks to weights and biases for their long standing support and for helping us make"}, {"start": 395.76, "end": 397.08, "text": " better videos for you."}, {"start": 397.08, "end": 401.03999999999996, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=L3G0dx1Q0R8
Google’s New AI: DALL-E, But Now In 3D! 🤯
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers 📝 The paper "DreamFusion: Text-to-3D using 2D Diffusion" is available here: https://dreamfusion3d.github.io/ Unofficial open source implementation: https://github.com/ashawkey/stable-dreamfusion Interpolation: https://twitter.com/xsteenbrugge/status/1558508866463219712 Full video of interpolation: https://www.youtube.com/watch?v=Bo3VZCjDhGI ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karozona Ifehr. Today, we are going to see how this new AI is able to take a piece of text from us, anything we wish, be it a squirrel dressed like the king of England, a car made out of sushi or a humanoid robot using a laptop and magically. It creates not an image like previous techniques, but get this a full 3D model of it. Wow, this is absolutely amazing, an AI that can not only create images, but create 3D assets. Yes, indeed, the result is a full 3D model that we can rotate around and even use in our virtual worlds. So, let's give it a really hard time and see together what it is capable of. For instance, open AI's earlier, Dali, text to image AI, was capable of looking at a bunch of images of koalas and separately a bunch of images of motorcycles, and it started to understand the concept of both and it was able to combine the two together into a completely new image. That is a koala riding a motorcycle. So, let's see if this new method is also capable of creating new concepts by building on previous knowledge. Well, let's see. Oh yes, here is a tiger wearing sunglasses and a leather jacket and most importantly riding a motorcycle. Tigers and motorcycles are well understood concepts. Of course, the neural network had plenty of these to look at in its training set, but combining the two concepts together, now that is a hint of creativity. Creativity in a machine, loving it. What I also loved about this work is that it makes it so easy to iterate on our ideas. For instance, first we can start experimenting with a real squirrel, or if we did not like it, we can quickly ask for a wooden carving, or even a metal sculpture of it. Then we can start dressing it up and make it do anything we want. And sometimes the results are nearly good enough to be used as is even in an animation movie, or in virtual worlds, or even in the worst cases, I think these could be used as a starting point for an artist to continue from. That would save a ton of time and energy in a lot of projects. And that is huge. Just consider all the miraculous things artists are using the Dolly Tool, Text to Image AI, and Stable Defusion 4, illustrating novels, texture synthesis, product design, weaving multiple images together to create these crazy movies, you name it. And now I wonder what unexpected uses will arise from this being possible for 3D models. Do you have some ideas? Let me know in the comments below. And just imagine what this will be capable of just a couple more papers down the line. For instance, the original Dolly AI was capable of this, and then just a year later this became possible. So how does this black magic work? Well, the cool thing is that this is also a diffusion-based technique, which means that similarly to the text to image AI's, it starts out from a piece of noise, and refines this noise over time to resemble our input text a little more. But this time, the diffusion process is running in higher dimensions, thus the result is not a 2D image, but a full 3D model. So, from now on, the limit in creating 3D worlds is not our artistic skill, the limit is only our imagination. What a time to be alive! Wates and biases provide tools to track your experiments in your deep learning projects. What you see here is their amazing sweeps feature, which helps you find and reproduce your best runs, and even better, what made this particular run the best. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Wates and Biasis is free for all individuals, academics, and open-source projects. Make sure to visit them through wnb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Wates and Biasis for their long standing support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karozona Ifehr."}, {"start": 4.64, "end": 11.200000000000001, "text": " Today, we are going to see how this new AI is able to take a piece of text from us,"}, {"start": 11.200000000000001, "end": 16.48, "text": " anything we wish, be it a squirrel dressed like the king of England,"}, {"start": 16.48, "end": 23.48, "text": " a car made out of sushi or a humanoid robot using a laptop and magically."}, {"start": 23.48, "end": 30.92, "text": " It creates not an image like previous techniques, but get this a full 3D model of it."}, {"start": 30.92, "end": 37.32, "text": " Wow, this is absolutely amazing, an AI that can not only create images,"}, {"start": 37.32, "end": 39.96, "text": " but create 3D assets."}, {"start": 39.96, "end": 48.28, "text": " Yes, indeed, the result is a full 3D model that we can rotate around and even use in our virtual worlds."}, {"start": 48.28, "end": 54.6, "text": " So, let's give it a really hard time and see together what it is capable of."}, {"start": 54.6, "end": 62.2, "text": " For instance, open AI's earlier, Dali, text to image AI, was capable of looking at a bunch of images"}, {"start": 62.2, "end": 70.6, "text": " of koalas and separately a bunch of images of motorcycles, and it started to understand the concept"}, {"start": 70.6, "end": 77.32, "text": " of both and it was able to combine the two together into a completely new image."}, {"start": 77.32, "end": 80.75999999999999, "text": " That is a koala riding a motorcycle."}, {"start": 80.75999999999999, "end": 88.44, "text": " So, let's see if this new method is also capable of creating new concepts by building on previous"}, {"start": 88.44, "end": 97.0, "text": " knowledge. Well, let's see. Oh yes, here is a tiger wearing sunglasses and a leather jacket"}, {"start": 97.0, "end": 104.35999999999999, "text": " and most importantly riding a motorcycle. Tigers and motorcycles are well understood concepts."}, {"start": 104.36, "end": 109.4, "text": " Of course, the neural network had plenty of these to look at in its training set,"}, {"start": 109.4, "end": 114.76, "text": " but combining the two concepts together, now that is a hint of creativity."}, {"start": 115.4, "end": 123.0, "text": " Creativity in a machine, loving it. What I also loved about this work is that it makes it so"}, {"start": 123.0, "end": 130.6, "text": " easy to iterate on our ideas. For instance, first we can start experimenting with a real squirrel,"}, {"start": 130.6, "end": 138.12, "text": " or if we did not like it, we can quickly ask for a wooden carving, or even a metal sculpture of it."}, {"start": 138.51999999999998, "end": 143.95999999999998, "text": " Then we can start dressing it up and make it do anything we want."}, {"start": 143.95999999999998, "end": 151.48, "text": " And sometimes the results are nearly good enough to be used as is even in an animation movie,"}, {"start": 151.48, "end": 159.0, "text": " or in virtual worlds, or even in the worst cases, I think these could be used as a starting point"}, {"start": 159.0, "end": 165.88, "text": " for an artist to continue from. That would save a ton of time and energy in a lot of projects."}, {"start": 166.44, "end": 173.08, "text": " And that is huge. Just consider all the miraculous things artists are using the Dolly Tool,"}, {"start": 173.08, "end": 177.8, "text": " Text to Image AI, and Stable Defusion 4, illustrating novels,"}, {"start": 178.44, "end": 185.88, "text": " texture synthesis, product design, weaving multiple images together to create these crazy movies,"}, {"start": 185.88, "end": 193.79999999999998, "text": " you name it. And now I wonder what unexpected uses will arise from this being possible for 3D"}, {"start": 193.79999999999998, "end": 200.44, "text": " models. Do you have some ideas? Let me know in the comments below. And just imagine what this"}, {"start": 200.44, "end": 207.0, "text": " will be capable of just a couple more papers down the line. For instance, the original Dolly AI"}, {"start": 207.0, "end": 215.16, "text": " was capable of this, and then just a year later this became possible. So how does this black magic"}, {"start": 215.16, "end": 221.72, "text": " work? Well, the cool thing is that this is also a diffusion-based technique, which means that"}, {"start": 221.72, "end": 228.68, "text": " similarly to the text to image AI's, it starts out from a piece of noise, and refines this noise"}, {"start": 228.68, "end": 236.28, "text": " over time to resemble our input text a little more. But this time, the diffusion process is running"}, {"start": 236.28, "end": 243.07999999999998, "text": " in higher dimensions, thus the result is not a 2D image, but a full 3D model."}, {"start": 243.08, "end": 251.32000000000002, "text": " So, from now on, the limit in creating 3D worlds is not our artistic skill, the limit is only our"}, {"start": 251.32000000000002, "end": 258.04, "text": " imagination. What a time to be alive! Wates and biases provide tools to track your experiments"}, {"start": 258.04, "end": 263.88, "text": " in your deep learning projects. What you see here is their amazing sweeps feature, which helps"}, {"start": 263.88, "end": 271.64, "text": " you find and reproduce your best runs, and even better, what made this particular run the best."}, {"start": 271.64, "end": 278.44, "text": " It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more."}, {"start": 278.44, "end": 285.47999999999996, "text": " And the best part is that Wates and Biasis is free for all individuals, academics, and open-source"}, {"start": 285.47999999999996, "end": 292.84, "text": " projects. Make sure to visit them through wnb.com slash papers, or just click the link in the video"}, {"start": 292.84, "end": 298.84, "text": " description, and you can get a free demo today. Our thanks to Wates and Biasis for their long"}, {"start": 298.84, "end": 304.11999999999995, "text": " standing support, and for helping us make better videos for you. Thanks for watching, and for"}, {"start": 304.12, "end": 333.96, "text": " your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=NRmkr50mkEE
Ray Tracing: How NVIDIA Solved the Impossible!
"❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers\n\n📝(...TRUNCATED)
" And dear fellow scholars, this is two minute papers with Dr. Karojona Ifehir, or not quite. To be (...TRUNCATED)
"[{\"start\": 0.0, \"end\": 6.8, \"text\": \" And dear fellow scholars, this is two minute papers wi(...TRUNCATED)

Dataset Card for "two-minute-papers"

More Information needed

Downloads last month
45