
TED Talks · Civilisational risk and strategy · Spotlight · Released: 31 Aug 2023

The AI-powered tools supercharging your imagination | Bilawal Sidhu

Why this matters

Auto-discovered candidate. Editorial positioning to be finalized.

Summary

Auto-discovered from TED Talks. Editorial summary pending review.

Perspective map

Mixed · Governance · Medium confidence · Transcript-informed

The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.

An explanation of the Perspective Map framework can be found here.

Episode arc by segment

Early → late · height = spectrum position · colour = band

Risk-forward · Mixed · Opportunity-forward

Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
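The bar tinting described above can be sketched as a two-segment linear interpolation. This is an illustrative reconstruction only: the score range of [-1, 1] and the specific amber, cyan, and white RGB values are assumptions, not the site's published palette.

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB triples at fraction t in [0, 1]."""
    return tuple(round(a[i] + (b[i] - a[i]) * t) for i in range(3))

# Assumed palette and score range; the real values are not documented here.
AMBER = (255, 191, 0)    # most risk-forward end of the strip
CYAN = (0, 188, 212)     # mixed midpoint
WHITE = (255, 255, 255)  # most opportunity-forward end

def tint(score: float) -> tuple:
    """Map a slice score in [-1, 1] onto the amber -> cyan -> white strip.

    Negative scores blend amber toward cyan; positive scores blend
    cyan toward white, mirroring the strip described in the legend.
    """
    s = max(-1.0, min(1.0, score))
    if s < 0:
        return lerp(AMBER, CYAN, s + 1)  # s = -1 -> amber, s = 0 -> cyan
    return lerp(CYAN, WHITE, s)          # s = 0 -> cyan, s = 1 -> white
```

With these assumed values a score of 0 lands exactly on the cyan midpoint, while the extremes land on amber and white.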


Across 102 full-transcript segments: median 0 · mean -1 · spread -119 (p10–p90 00) · 0% risk-forward, 100% mixed, 0% opportunity-forward slices.
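Headline figures of this kind (slice count, median, mean, p10–p90, and band percentages) can be recomputed from per-slice scores along the following lines. A minimal sketch: the ±0.25 band cut-offs and the score scale are illustrative assumptions, since the intake methodology is not spelled out on this page.

```python
from statistics import mean, median, quantiles

def slice_stats(scores, risk_cut=-0.25, opp_cut=0.25):
    """Summarise per-slice scores the way the headline statistics line does.

    Slices below risk_cut count as risk-forward, above opp_cut as
    opportunity-forward, and everything between as mixed. The cut
    points are assumptions for illustration.
    """
    n = len(scores)
    deciles = quantiles(scores, n=10)  # 9 cut points: p10 .. p90
    return {
        "n": n,
        "median": median(scores),
        "mean": mean(scores),
        "p10": deciles[0],
        "p90": deciles[8],
        "pct_risk": 100 * sum(s < risk_cut for s in scores) / n,
        "pct_mixed": 100 * sum(risk_cut <= s <= opp_cut for s in scores) / n,
        "pct_opp": 100 * sum(s > opp_cut for s in scores) / n,
    }
```

For a flat series such as 102 slices all sitting near zero, this reports 100% mixed and 0% in both outer bands, matching the all-mixed distribution quoted above.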

Slice bands
102 slices · p10–p90 00

Mixed leaning, primarily in the Governance lens. Evidence mode: interview. Confidence: medium.

  • Emphasizes governance
  • Emphasizes safety
  • Full transcript scored in 102 sequential slices (median slice 0).

Editor note

Auto-ingested from daily feed check. Review for editorial curation under intake methodology.

ai-safety · ted-talks

Play on sAIfe Hands

On-site playback is enabled when an episode-level media URL is connected. This entry currently has a show-level source URL, not an episode-level media URL, so it points to a source page instead.

Episode transcript

YouTube captions (auto or uploaded) · video oWAhBR19loM · stored Apr 8, 2026 · 2,924 caption segments

Captions are an imperfect primary source: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.

No editorial assessment file yet. Add content/resources/transcript-assessments/the-ai-powered-tools-supercharging-your-imagination-bilawal-sidhu.json when you have a listen-based summary.

the future of work looks a lot more like people aren't playing in instruments in the symphony they're becoming the orchestrators if you're passionate about something the tools may change the Technologies may change but then you're willing to adapt and the way you find your passions you just go and taste stuff and then I really sunk my teeth into Maps where I spent almost four years working on a Next Generation digital twin of the world this idea of digitizing reality and then sort of Kit bashing it started with photogrammetry Nerfs added on to photogrammetry by allowing us to capture a lot more of the complexity of reality for Battlefront they went to all these locations where the Star Wars movies were filmed and just scan the crap out of everything gosh I think Apple just made VR cool let's Dive Right In Baby three two did you just drink without me bro dude you didn't even cheers man okay all right three two one cheers oh wow got a new coffee in the house baby bro this is a Starbucks Kirkland espresso coffee and uh it's it's a good one as you guys know we're very very much zooted all the time with the Starbucks coffee and we can never get too much of it but without anything else distracting and disturbing us it is episode number 20. 
a bad decisions go that's right here so we just want to say first of all welcome and second of all thank you thank you every single one of you guys who has stayed with us since episode number one up until now you guys are an OG you guys are amazing you guys are the reason why we're doing this every damn week honestly the day that we started I couldn't believe that the end of the one episode 29 I know it's it's been a crazier we never miss the week so far no weeks we even did two podcasts we did lose friends and sleep and and our mental health and our physical now I'm just joking some of them we did actually but it's been worth it because we made some great friends along the way every single week we talk and learn from all the inspiring guests and before we bring our guests today something that me and found out a discussion about last week and I just want to set the tone just very briefly the reason we're doing this podcast and we never really talk about it is we want to bring on inspiring guests who've done great groundbreaking work in any field honestly it doesn't have to be the fields that we're specifically invested in ourselves and we want to just bring them on to talk about their Journey their story and hopefully we can learn something from them and you guys can also learn something along the way and get inspired and to be honest every single time the podcast finishes I have learned something yeah and I cannot imagine like people who are coming from different backgrounds we had Ben who was a filmmaker we had Olaf was a photographer but their story and their Journey was so inspiring to us and I'm pretty sure it's so inspiring for everybody who we're listening yeah so it's it's all about giving you guys more reason to make that bad decision you start whatever you want to start and also learn from all these guests that we have damn right so far out uh let's not waste any more time man we have a freaking special guest today and I I will let you do the honors tell 
everybody who we got on so our guest today is an ex-google product manager who has made significant contribution to some of the most groundbreaking projects but as I'm pretty sure you know about Google immersive view oh yeah I do you know about Google Maps hell yeah he even worked on the geospatial API damn right and what is even crazier besides doing all of this he has made a huge community on social media over 300 000 subscribers on YouTube and around a million followers on Tick Tock video he has so many freaking videos all of our social media and I I'm like how do you even manage to make all of these while working on all these cool projects at the same time let's let's bring him on our guest today is Bilawal how's it going great to be here I got the I got the classic cup of coffee we're about to get caffeinated and uh and given the sheer amount of overlap we have in terms of like uh you know the modalities of creation we're interested in I'm excited to dig in so thank you for having me on no worries man it's a pleasure to finally be able to speak with you here first of all came into all uh a cheers boom we hope everyone watching is also having some sort of drink coffee tea whatever is your thing just go for it you know this is gonna be a very enjoyable podcast and before we start with anything serious I have to ask you you've made loads of VFX and CGI video especially on Tick Tock you're just scrolling through all of them and you started I guess during covid what is with you and your addiction with UFOs and siren heads 200 of those videos which are amazing by the way it's awesome yeah it's uh so the UFO stuff I mean I've always been fascinated with this stuff like uh the first Hollywood movie that I saw that got me into visual effects was uh was Independence Day oh and honestly it was the behind the scenes for Independence Day or if you remember the scene where it's like the mothership over New York and just like yeah the explosion ensues so I've had a lifelong 
Fascination for that and then on Tick Tock specifically if you remember the raid Area 51 Trend that started off oh yes so it's like you know making a bunch of random videos and I just like put one of those videos out there and it just like took off right and it was like one of those like hockey stick like holy crap what's happening here so you know as a wise Creator once told me if something works you double down on the damn thing so uh it's it's been an amazing way for me to just like talk about these esoteric kind of scary phenomenon and it's the same thing with siren head man it's like there's this like that sound of the siren evokes this like visceral the Eerie sound right yeah yeah it's like you don't need like even a kid is like oh holy crap what is that and it's like the bomb raid siren right if you remember like in the world wars like that's the sound that like an entire generation grew up with so there's something deeply visceral in us about that and something about like like scary spooky content is very interesting to me and so visual effects are such a cool way of like I don't know painting this sort of universe and every short is like a vignette into this like sort of alternate spooky esoteric Dimension you know 100 and I think every person who is a 3D artist or a 2d artist no matter where you are there is that point that that that singular point that you can go back to be like this is the moment where I got inspired to get into this or I got motivated and for you it was Independence Day then and the best the best part is also when you can recreate your childhood memories and you know now you have the power to not like I've watched that movie I want to recreate that that's the best feeling that's so true I feel like as 3D artists the first thing you realize you're like wait a second I can now like remake all the movies that I love or like different scenes that I love you didn't even I think the first video uploaded on Twitter was the Iron Man video or 
something yeah yeah it's all right yeah totally super cool stuff you worked on and how did is that how you were sort of let into this journey of working at Google can you walk us through that what was that Journey like because you started you know 3D animation with uh 3ds Max if I'm not mistaken or yeah so how did that lead you into going into exactly like that's that's a really long time ago how did that lead you into going into Google and and ending up here today essentially yeah I mean it's a it's like uh you know I'd say the theme you already hit on like uh I I at the age of 11 I was really into flash five at the time and cartoon animation you know just like this is the early days of the web and I saw this cartoon right effectively like this uh TV show on Discovery Kids that was about Mega Movie Magic how do you make these visual effects and I kind of fell in love with the notion that my computer could like the same computer I was using to make these like colorful cartoony like you know flash animations could be used to kind of seamlessly blend reality and Imagination exactly what you said about you know sort of taking stuff you see on the Silver Screen as it was called then and sort of like recreating it like like really crappy backyard lightsaber battles with your friends with like a crazy like you know spaceship Landing so you know when I was 14 like I I obviously got deep into 3ds Max then obviously he's like Maya was the tool to learn and started picking up after effects and so like at 14 15 you know right I was entering High School I was like oh my goal in life is to be a visual effect supervisor and I want to go work at ilm and I want to have like you know your name credit at the end of the movie like that was sort of the dream and then as I got older I went to boarding school and like got exposed more to the technical side of things and I really enjoy this idea of like building actual things and so that led me to like despite getting into my film School 
USC like dream Film School USC I decided to study computer science and business and this is right around when the mobile boom was sort of happening like the App Store had just come out I graduated 2013 and everyone was like Hey like we gotta like a Web 2.0 is cool but how do we like Go Mobile first and so that led me into product design right after graduation I worked at a consulting company called Deloitte digital it was their like Innovation arm and I did basically like product design stuff for two years and that was interesting because all the stuff again I learned with After Effects 3ds Max Maya just kind of got fueled into like the earliest instantiations of AR VR so this is like Circa 2013 uh was like you know the Google Glass was the thing right this is yeah that time like nobody was talking about AR totally and if they were they were like oh what is this dorky thing like we were barely coming to grips with what you could do on the phone right like uber and like ride sharing apps were just reaching ubiquity Instagram was just sort of starting to blow up same thing with Snapchat and so forth and so yeah it's interesting like I kind of let the YouTube Creator thing go completely through college got into basically building AR VR experiences uh as I graduated uh first Enterprise really boring stuff like a field service expert on an oil rig going through like a maintenance checklist because they need to be hands-free while they're assessing and you know kind of like uh you know fixing the equipment and then VR happened you know the DK2 came out uh meta acquired uh freaking uh Oculus like Oculus at the time right like and so that kind of like 360 video suddenly popped up and again like that like really cheesy Steve Jobs quote of like the dots connect looking back suddenly like all the skill that I'd learned around like 3D visual effects Etc just like manifested itself in AR VR so like I like to describe AR as basically the visual effects pipeline running in real 
time on the phone in your pocket or the headset on your face all the same stuff right yeah and so yeah that got me into VR video and then I was like all right Enterprise is cool but like I really want to do this at scale and so at that time like Circa 2017 Google was like building out this like uh you know essentially VR initiative to create 360 and 3D VR camera systems worked on that then worked on AR like you know this is like three hype Cycles ago so VR was the first hype cycle then AR was the second hype cycle and then I really sunk my teeth into Maps where I spent almost four years working on you know a Next Generation digital twin of the world and then creating consumer and developer experiences from that so the immersive View and the geospatial API that you alluded to but along the way the theme has just been blending reality and Imagination and I would say Google it was more of the utilitarian version of that blending the physical and digital world so you know along the way I discovered various roles I eventually ended up enjoying product management Etc and so yeah like after doing my six years and after my YouTube Tick Tock and all this stuff like sort of hit a threshold as this generative AI wave is is uh sort of picking up steam I was like hey let's Dive Right In baby oh here we are having a conversation that's an amazing but I I want to go back a little bit and ask one question because you mentioned you had two choices one was going for computer science and film school at that time your decision to go for computer science was it solely a personal decision oh my God or was it peer pressure I mean coming from coming from Asian Heritage I personally was forced into going to engineering school [ __ ] [ __ ] I did it so listen to this I went to engineering school on the first orientation that I went to the class they started talking I was like I'm out of here I'm out of here I I did computer science and I can say not forced but definitely wasn't a choice I
probably would have gone with if I was all alone about my own and when I went in I remember my parents um again they didn't force me but they did sort of push the decision into my face like hey science and their excuse was you're good with computers that's how I ended up there how did you end up with uh no yeah the question is like personal choice and it's okay I mean of course it's okay to have I mean peer pressure at that time it's normal yeah no it's a great question I think you know I again you know being from South Asian Indian Heritage like I think I like all immigrant like families kind of face this sort of you know I was talking to my Jewish friend the other day it's the same thing like it's like you know are you a doctor lawyer engineer yeah pick one luckily my parents were pretty good about this like I would say like my mom like got me into like got me a computer very early you know I spent half of my life in India like so she got me like a freaking internet connection before not 56k modem throwback you know before anyone had that type of stuff or it was a rarity and you know like for a while they really like spurred my passion would say around the 10th grade uh like around 10th grade came around there was definitely a little bit of peer pressure it's like oh beta you know if you want to be successful in life you know you better you know you better start your stem scores better be you know keep going up and so you know they never actively forced me but what I like to joke with them is like hey you're propaganda worked and it's not always about that like it's your friends too like it's not always the parents like all all of my friends went to engineering yeah most of my friends went to Science school it's not even even if you put the parents peer pressure aside it was rarely that someone could pick like I wanna study art dude Iran where we're from art school is looked down upon like if you're [ __ ] in school you go to Art School like that's that's like 
that's how they look at it which is horrible I remember growing up up until High School thinking oh [ __ ] did you hear this guy went to Art School like that was like if you failed in life you would end up in art school and it's funny because all our life is now related to art and we're like totally we're more successful in our minds and happy as as we've ever been and so it's it's just crazy how these things work totally but I mean like I'm sure you empathize with this too like looking back like the way I threaded the needle is I'm going to study computer science and business because I wanted to do both I didn't want to just do CS I love the business side of like it's cool to have Cool Tech but how do you actually deploy this at scale to either you know like really make an impact on the world whatever metric of success you have there right but at the same time like dude like College in the U.S is so flexible you know my advisor was so awesome I took all these like Maya classes and Nuke compositing classes but then also took like business intelligence classes that were like this is like the earliest instantiation of machine learning being applied to Big Data was the biz buzzword back then so I was able to make this sort of like really well-rounded curriculum that if I had just gosh I think about like if I had gone to film school because I have a lot of friends who did that it's like well here's like let's go shoot like on actual film and let's go make your like thesis film like and spend a year doing that I don't think those skills would have as readily transitioned into the world you know we live in today like even the stuff you're doing with Unreal Engine I bet the fact that you spend some time in engineering probably makes it easier to pick up those type of things so yeah you know hence creative technology man I think like it's the intersection of like being able to wield the world of bits and the world of atoms kind of together is just like yeah there's some Secret 
Sauce there so yeah that's a little bit of the story and uh looking back no regret if you're happy where you are today there shouldn't be any regret even if you think you might have done something wrong because if you did that that led you to where you are today even if it was wrong or right now uh one thing I do want to mention is you brought up doctor lawyer engineer these are the three categories that most parents at least faced with our tradition prays and I want to give some credibility to you before I even ask this question you're playing and you're experimenting with all the latest AI Technologies every single day and we see it all over your Twitter feed and so for everyone watching you are essentially someone who can truly answer this question really well doctor lawyer engineer with all the things that are happening with AI and and we know where AI we can sort of predict where AI is going right where we talked about this with our previous guests with fire Hardware general doctors right they essentially here in Canada Healthcare is pretty [ __ ] like we've only been here for about a year and just what I'm realizing is I go to the doctor first of all getting into the queue is difficult getting to a specialist is even more difficult the the wait times are crazy and So eventually when there's going to be an AI that can solve every single one of your questions and problems regarding your health can read your health essentially and answer that or let's talk about a lawyer all these contracts that can be written with AI in the quickest way possible in the best way possible how do you think the the new generation will gen Z go into yeah go into education system and career paths considering how AI is changing the world it's a great question um and in a very profound one that obviously has people either super excited about the disruption that's ahead or sort of entrenched retreating into this defensive position of like pushing back on any change that lies ahead so I 
think like what can we be sure if the world is absolutely going to change the the way job functions look today may not look the same way in terms of what how it'll actually manifest itself I have a couple of like my here's my thesis right my thesis is that right now specialization is is deeply valued in the world we live in right like so you all are from Iran you know there's this joke like where like the the amount of phds in Iran is just like way too damn high I mean I went to Bay Area and so like you walk around like Stanford campus like it's just like everyone's Iranian and they're they have a PhD right it's like and it's like this that's the like the Apex of specialization right you're sub-specializing in this like this like this is like the tip of the iceberg effectively right and you're losing the broader picture so and that applies in all professions right if you think about the like the creative industry you work in working 3D animation okay so are you like a modeling artist texturing artist or your rigger are you an animator you know are you good at lighting are you compositor the generalist has really been emerging as a profession I would say in like social media role right like where you have more people trying to do with less and so effectively most Industries including computer science right like you sub-specialize in some discipline hey are you a front-end engineer specializing in these set of Technologies or back end in these type of Technologies and same thing with machine learning only recently in machine learning with like sort of the Transformer architecture we're seeing sort of one like sort of you know kind of approach applied to a broad swath of problems but historically even within a subset of AI like computer vision there's been a lot of sub-specialization so in other words you have to sort of be this like t-shaped person if you've heard of the t-shaped analogy you're Broad in certain areas and you're deep in other areas what I think is 
happening with AI and like AI agents multiple sort of AI entities that you can work with that are specialized in different domains as you go from being t-shaped to being a tripod or a table that's like sort of the image I like to evoke when we talk about this stuff so you know if you're a lawyer that's like deep in for example corporate law right now which is very boilerplate contractual stuff I mean homie that job is going away like that you know like I don't think there's any way about it right and I think the same applies to for example like if you break down like all the newsletter content that exists that's purely summarizing the news like that's obviously going to be automated away but we as humans I think are going to just start creating new roles that orchestrate these AI agents along with our own well of expertise and other humans well of expertise at this higher level of abstraction so to me the future of work looks a lot more like people aren't like playing in instruments in the symphony they're becoming the orchestrators and how that manifests and permeates across all the roles you mentioned I think could vary but certainly it applied to doctors too right you're totally right like I think for the foreseeable future humans will feel very confident if a human has a final call on especially if you're getting you know surgery or something thing like that done but when it comes to hey looking at your functional you know fmri scan or looking at your CT scan or looking at you know like certain symptoms and data points from your Apple watch and doing this crunching this sort of Big Data analysis of course we want machines doing that so like I think human intelligence will sort of coexist with machine intelligence and we'll just just like you go on a slack and you might have a bunch of full remote employees I've got a feeling it'll sort of be like we're all just going to be working with these AI agents that help us get stuff done in the world and I don't think 
that'll mean we'll all be sitting on our butts in VR headsets like the movie Wall-E I think I think that only happens if there's like a fixed amount of Labor that's needed in the world right like I don't think that'll be the case we'll conjure up sort of new job lines you know new canvases for creation even like I don't think we're gonna fully automate away creation and and be like content with like sort of the dystopian point of like a tick tock style algorithm just procedurally giving you like infinite content We're Not Gonna excuse me don't worry maybe there's a place for that but that's not going to be the ubiquitous thing what I'll end by saying is like the the challenging part about this transformation than any other one which is why a lot of people are freaking out about this is I think the rate at which we're creating jobs might be slower than the rate at which we're eliminating jobs and so it's gonna be a rocky transition but best believe I think on the other side of this we'll still be doing cool stuff it's just we'll be wielding these tools to create stuff at this much higher level of abstraction I love the orchestra analogy that was beautiful yeah so so if someone DMS you and say hey Bilawal I'm 14 years old now you know 13 14 and I don't know what to do and I'm scared of all the changes that are happening I'm afraid if I go to law school whatever I learn is done by an AI tomorrow or if I go to business school or if I go to even if I do 3D yeah so how would you acquire science anything how would you advise someone who's 13 or 14 like you can even go higher than that you can go someone in like like trying to finish college like before going to UNI yeah so how would you advise them and say what they should do in order to be on top of their game and make the most out of what is happening right now you know I do get this question a lot especially from like people who are trying to be product managers and they're just want to be a PM because that's like the 
cool thing to be oh I don't want to be an engineer I just want to go be a PM and honestly it's like not it's one of those like I don't pretend to have some special advice here my advice is just to go taste things I think like it is easier in history today to taste things than ever before like just two years like a year ago we actually have to read books now you can go ask questions to a book right like isn't that crazy yeah there's this distillation of all human knowledge or at least documented human knowledge on the internet which does include Reddit so we can talk about how much knowledge there is there but you can query this well of expertise and try things make things easily rather than like just let's take the digital art example like rather than spending months learning how to use a pressure sensitive Wacom tablet getting a license for procreate or photoshop hey why don't you just make some mid-journey art just try making it is it fun does it like light up your soul is that the thing that like is the thing you want to do even if you didn't get paid I'm a big believer in just like finding your passions because that's the only way you'll be resilient to change if you're doing things because that's the thing to do the moment it's like hard and the going gets tough like the you know the tough gets going or whatever that saying is I think like if you're passionate about something the tools may change the Technologies may change but then you're willing to adapt and kind of harness the new set of capabilities at your disposal so I think like adaptability is going to be a skill that is very important in sort of this like everyone talks about this like asymptotic curve like we're about to hit like you know an immense amount of change very quickly so I would say find your passions and the way you find your passions you just go and taste stuff like dare I quote garyvee because he just like goes overboard with the you got to make like 32 points a day you got to be hustling 
every single no now you have the tools to just at least go taste stuff like even on like YouTube if you want to learn about a certain profession like rather than going and enrolling in some sort of a class or whatever the likelihood is you could just go find a video that tells you what you need to do or you could go find a person that's doing exactly that and you get to taste things and then once you find the stuff that you're passionate about go pursue that and then the rest is just a means to an end AI is like if we're truly building a general intelligence it is a tool for all sorts of things so you got to find the thing that you want to do day in and day out right and that may be visual creation for you certain types of visual creation that may be building software if that's what you want to do yo you don't have to go to CS school anymore you already didn't that was already kind of getting outmoded as you saw a bunch of companies like sort of that not making a you know bachelor's or master's degree mandatory and now you pour gasoline on that the joke about like the best programming language to learn this year is English I mean that's absolutely true like go learn English and like use co-pilot and chat GPT and make a product and launch it and you are seeing people do this right on Twitter Now sort of the solopreneur hype so that would be my advice like find the passions and then use the technology and sort of these platforms at our disposal to discover them and taste them you know it's it's funny you brought that up because I I believe it was your own tweet posted as a carousel on Instagram that you showed me yesterday that had a saying I believe whereas there's all these opportunities but the problem also is canceling out the noise like there's so much happening I'm not sure if that was you was it you showed me a Twitter thread I posted as a carousel anyways just definitely sad stuff like that so yeah exactly so just about the concept of yes now there's all 
these possibilities but at the same time there's just so much like where do you start like how how can you adapt so fast every single day now the reason I brought that up as well is mainly because I feel like a lot of people still don't know about all these AI Technologies a lot of people we speak to here in Vancouver still haven't used chat GPT which that makes me really shocked I thought because I mean our Twitter feed our Instagram feed is all about AI because that's what we look at so my mentality is when I go out when I go to gym which you know the age group is bit somewhere between like 18 to 45 mostly when I see someone in the streets when I see someone in our lift I assume they know about at least chat GPT which is like the most famous and probably the one the easiest to use essentially yeah that's the most like user friendly but we get so many no I don't know or no I haven't tried it or should I try it I haven't played with it so that that's like the whole education thing is really crazy about this thing and the awareness so what do you think about the adoptability of all of these AI Technologies and agents because in our world clearly it's it's quite fast but apparently for the rest of the population which I'm assuming is around like 80 percent of the world they still probably haven't even used AI once do you think because it's definitely not matching like the actual use cases is not matching the actual usage of what what I'm seeing what do you think about that I think you're absolutely right like we we definitely live in a bubble right like and it's this like sort of early adopter Tech bubble online where everything's about like large language models and neural neural Radiance fields and all this other stuff and I think this even applies at the highest levels of these these Enterprises right like if you think about just photogrammetry most of the photogrammetry companies the players that have been around for 15 20 years aren't thinking about Nerfs right 
now. This is this early new thing for them, and I think the same applies to large language models. It's good that the whole world is not using them, because I don't think we have enough GPUs to power all the queries right now. Certainly OpenAI is hitting scaling restrictions, and it's funny, at the end of the day it's Jensen who's just going laughing to the bank. I keep joking that the dude could buy like 30 leather jackets per second now; money's just coming in. No matter how AI pans out, homie's set; Nvidia is in a great position. Now, when will we see people adopting this? I think there's an interesting case study here with Generative Fill by Adobe. Okay, so ostensibly how I got really excited about generative stuff was DALL·E 2 inpainting. This was, I want to say, a little over a year ago, maybe 13, 14 months ago, before the summer of AI really kicked off last year. I was deep into NeRFs at that point, and I was like, oh, this generative stuff, I've seen some of the GAN instantiations, it always looks like this melty human, it's gonna suck. And then I used inpainting to make some mothership UFOs. Oh my God, I knew it; of course, very predictable. And when I did that, I saw, holy crap, it got the lighting right, the sun direction. Holy [ __ ], the reflections look good. All the stuff that you would have done manually otherwise, with HDRI lighting, approximating stuff by hand in Maya or whatever; the model was able to reason about it, or at least give the illusion that it's reasoning about these things, and produce a compelling end result. But man, there was only a small number of creators making stuff with DALL·E 2 inpainting, and we were all calling it "inpainting" and "AI." Fast forward a year: the
de facto image editing software, Adobe Photoshop, is like, hey, we've got this new beta, it's called Generative Fill. People are using it; it spawned entire categories of memes. I made a random video where I took Tim Sweeney's photo of a hike and just added a dinosaur and a bear drinking beer together, and I got a million views for some reason. So you're seeing people use and adopt AI now, but without talking about it as AI. They're just like, hey, it's the cool new feature in Photoshop, the thing I already love. And I think the same is gonna happen with ChatGPT's capabilities. Obviously I'm a little biased, but I think Google's gonna crush it here. It's the biggest search engine; there's Gmail, there's Docs, all these things that a bunch of early adopters, even like me, are using Chrome extensions for. When it's just in your tool, and you're writing a doc and it gives you better autocomplete, or in Notion, where they've added the AI feature, I think people will start using and adopting AI, but in a way that's kind of transparent to them. They don't care about what the underlying model is; it just gets the job done for them. And I think we're at the very early days of that adoption curve. We've got maybe, even optimistically, like three years before everyone kind of knows about this thing. Yeah, and maybe longer, who knows. Just one thing, sorry, when you brought up the concept of memes with Generative Fill: have you guys seen the Pornhub memes? No? I'm not even joking, it's literally an image, I mean, it's nothing bad, probably a thumbnail of what the porn thumbnail was, I guess, and they generated it with fill, and they're just dancing together with clothes on. That's probably the funniest Generative Fill meme I've seen. But that's awesome. One thing, I think your point of view was really interesting. I had the same point of view about
NFTs and Web3: NFTs and Web3 will get mass adoption when people use NFTs without knowing they are using NFTs. A hundred percent. So the same thing you brought up with AI is actually very, very true. What Adobe did with Generative Fill is exactly that moment: oh, AI, what is it? I know I'm just typing something and clicking a button and it happens. Oh, that's AI? Cool. It's like magic; it should just happen without you even [ __ ] knowing. Yeah, so I believe that. I think that applies to every technology: people don't care about the terms, people don't care about the technology, people don't care how hard it is to do that thing. People just want to do things faster and easier, so the moment they achieve that, they will adopt it. Yeah, I generally agree with that as well. And you brought up Adobe; we were talking about this, because Adobe could have died out if they were just not giving a [ __ ]. It's so funny, because all these smaller companies were like, yeah, we have this cool technology, a Photoshop killer, and Photoshop was like, yo, we were doing this too, by the way, if you didn't know. And then they do it, and now, because, like you mentioned, Photoshop has always been the go-to software for editing, I'm sorry, but all the other smaller companies are just gonna have such a hard time, because people are already used to Photoshop. That was the brilliant move I hope every other company tries to make. We made a switch to DaVinci Resolve recently, and Blackmagic has their own AI tools. Yeah, they're amazing, and there are reasons why so many creators switched to DaVinci Resolve, but now I'm like, did I make the right [ __ ] move? Because Premiere, I look like that meme that is looking back again, like, what the hell, man? Just like, [ __ ], at the end of this video I just switched, I learned everything, and now... But I hope Blackmagic really brings AI. But it's one thing; even if Blackmagic does bring AI, it's
like, they will not have, and never will have, access to the amount of data, I guess, that Adobe has access to, because of the existing user base. Yeah, exactly, the user base. And I mean, I'm looking forward to it. Adobe Firefly for video, making your own sound effects, adding color grading, transcribing, all this stuff that's gonna have AI integrated, is gonna make editing so much easier. But at the same time you have people like Runway ML who are just trying to completely demolish the way we do editing. Just type it in, bro; everything you want, just type it in, we'll make it happen. It's gonna be really crazy seeing all of this, like a tug of war, you know, who's pulling, who's playing. That's actually the exact right analogy. It's a tug of war: they're kind of converging towards each other, and this theme has kind of happened before. You probably noticed this with real-time and offline rendering, which went through this phase where initially they were so different, right, like Unity and Unreal, but then they start getting more and more photorealistic, better and better. The same thing happened with Redshift and Octane. There were the days of RenderMan and Arnold, and nondescript warehouses of CPUs crunching for days on a frame of video or something, and then slowly, oh yeah, you've got an Nvidia GPU, you've got some CUDA, okay, cool: Redshift, Octane, way faster, you can do motion graphics stuff. So the offline stuff started getting more real-time. And I think the way these companies, the incumbents and the upstarts challenging their dominance, are approaching it is the same thing, and the tug of war is a beautiful analogy. If I may build on that a little bit: I just sort of bring out my popcorn, which is why it's so fun
right now, because we're sort of unbiased in this, right? We'll use the best tool for the job; there's nothing that locks us in. Though I do want to come back to DaVinci, I'll come back to it later, because, oh, Premiere is just so unstable sometimes, it just kills me. After Effects, I can't leave, but with Premiere, people like Matt Wolfe are like, hey man, Resolve's pretty good, yeah, it just never crashes on you. Same thing with the Corridor folks. So, to your tug of war: Adobe is going to meet users where they are. They've got distribution. It's the same thing with Google, right? They've got these surfaces, these billion-user surfaces, and Adobe, I don't know what their exact Photoshop numbers are, but I would assume they're at least tens of millions, if not hundreds of millions, of users. And so: meet users where they are and infuse AI in a way that's transparent to them; take advantage of distribution, brand awareness, all that. On the other hand, I think Runway can't do that. Runway has to reinvent how creation is done, because if they just made another slightly modified version of After Effects, which is what it seemed like to me (I've been following Runway from the start, when it was more about segmentation and green screen without a green screen and stuff like that), it felt like a lightweight replacement for After Effects: hey, if you have lightweight compositing or editing tasks, do them here. But now, where they're going with Gen-1 and Gen-2, and the generative models that started ostensibly with that latent diffusion paper, I think it's super exciting to see what sort of paradigm emerges. That's, I would say, maybe the billion or trillion dollar question: what is that new creation paradigm? I'm not convinced it's text. I think that's a deeply unsatisfying way for me to work with this stuff. So I think maybe it'll be
interesting to see. Another example is perhaps Descript, versus Premiere, and then you're talking about Resolve too. Descript almost opens up editing to an entirely new audience. There's a class of people that will never learn Premiere or Resolve. Maybe we're into this stuff, but most people working at some company, or even a solopreneur, that want to package up their webinar or whatever, they're like, I can use Docs and I know how to work in a deck. Okay, if you can do that, now you can make a video with the amazing capabilities in Descript. So that opens up editing to a new class of users. At the same time, Premiere obviously is giving you that sort of training-wheel mode inside of Premiere itself, which is also interesting, but I again think they'll target different audiences, right? Somebody who's in Descript... In other words, it's a tug of war, and in another way it's also an expansion of the pie: you're opening up these new forms of creation to an entirely new set of users that might never have bothered with this stuff. Now, I do think the web-based tool that Adobe's building for podcast editing and all that type of stuff is more directly competing with something like Descript. So I'm excited to have AI tools today because they fit in your workflow, but I'm also excited to see what net-new workflows emerge, and how that brings in a bunch of people who weren't creating. I mean, one of my friends on Twitter is Heather. Heather's, you know, a mom of two in Atlanta, working on completely different stuff as her day job, and then she somehow got into AI and has like 15K or 20K followers on Twitter now. I don't think that would have happened that quickly if not for AI. So it's going to be interesting to
see what the landscape looks like just a couple of years from now. But yeah, I love that visual of the tug of war, if you will. Yeah, you see, the thing is, you mentioned Heather; we don't know Heather, but that's a great example to bring up. There are other people, like Mr. Grateful, who, using ChatGPT as an agent, was able to reach a hundred thousand followers on Instagram, whereas I would assume, if it wasn't for ChatGPT, he probably wouldn't have been able to, or wouldn't have had the motivation to, right? It was because of ChatGPT that he was like, you know what, I'm gonna make a challenge. And he started a new challenge, which is: I'm gonna get a hundred thousand dollars in revenue, again using ChatGPT. Which to me sounds crazy, but dude, he hit 100K in followers, which is not an easy feat for a lot of people, especially on Instagram. Yeah, there you go. And if he makes the 100K, it just goes to show you can actually do so much with all these AI agents that so many people are afraid of, because they're afraid of losing... Oh my goodness, I'm sorry. I have our laptop open here and I just saw MetaHuman Animator as an update. We're deeply into MetaHumans, and they just announced this new update that makes facial mocap like 10x better, and I just thought, holy [ __ ]. Anyways, sorry about that. There you go, the virtual podcast, you gotta do it now. Yeah, because it needs to be done; it needs to be done, man. But essentially speaking, we believe all these AI tools can help enhance everybody's life if we know how to use them. Now, I want us to come back to AI again, and we will, because I know a lot of people are interested in hearing your approaches to different things, but we want to take a step back and talk about something that was announced a few days ago, and that is nothing but the Apple Vision Pro headset. So, yeah, you were doing a review of the
announcement, if you will, while it was being announced; you had a video, sort of a podcast, with another friend of yours, and we were watching some of the things you guys were talking about. We also did our very own version as well. And I guess what I want to ask you, as someone who's worked at Google, as someone who's been extremely fascinated with the latest technology developments: you watched the Vision Pro announcement; there are people who are optimistic about it and people who are pessimistic about it, and we want to get your thoughts. First of all, what did you think Apple did right, within that presentation? That's the first question. Gosh, I think Apple just made VR cool again, to put it very simply, in my opinion. Okay, I wouldn't give them a 10 out of 10; I'd give them like an eight and a half. There was definitely the scene with the dad, yeah. I was like, only Apple's reality distortion field is such that they can be like, you know what, we expect people in the house to just have this on all day. It's your freaking kid's birthday and you sit there like, yeah, Timmy, blow out the candles. Dude, it was the creepy smile that dad was giving, staring at them like this. I'm like, oh my God, I can't believe they actually did that. I agree. So yeah, a little rough around the edges; I think they focused far more on a solo experience. But in terms of what they did right, I think they finally made VR a good word again. For whatever reason, whether it was the conflation with Web3, whether it was Meta's rebranding, whether it was just how people were feeling about social media and technology that pulls you away from the world, whatever the reason was, it sort of petered out; the investment in both AR and VR, to be honest, kind of petered out. And so I think Apple entering this new product category, which ostensibly Zuck has been carrying for a while, with Quest really selling
a subsidized headset to drive that install base, makes this very interesting. So now Google and Samsung are going to come out with a headset, obviously Zuck's got his Quest 3 coming out, and the Pro already, and then you've got Apple in the play. I think what Apple announced is like the Tesla Roadster: this really expensive, cutting-edge thing. The amount of technology that they managed to squeeze into that device is crazy, awe-inspiring to me. I've used almost every headset there is other than the Vision Pro, and I cannot wait to get my hands on it. And the fact that even jaded press that has tried all of that stuff is having this "oh wow" experience just goes to show it's going to be an amazing device for developers to build on. Specifically because, unlike every other device that uses these Qualcomm chips, they've got that split-compute architecture: you've got this M2 chip just for the developer, and then you've got these 12 cameras and a bunch of other crazy sensors, and there's a dedicated R1 chip just to process that, which is awesome for developer experience, because I don't have to worry about my whole experience getting laggy because I'm hogging too much compute and suddenly the SLAM tracking is slowing down or whatever. So I think this is a really expensive sports car. It's great for AR/VR developers that want to develop for the Model 3 version that will eventually follow. And at the same time, for early adopters, I think the pitch is pretty interesting, which is: if you want a massive media consumption device, this is the way to do it. And we've sort of seen Apple laying breadcrumbs towards this. I like to say Apple's focus is connecting living rooms. Even two WWDCs ago (I've been writing about this Apple stuff for a while), they announced this thing called SharePlay, so if
you're in a FaceTime call, you can suddenly share a TikTok or Apple TV, and if you have AirPods you get the spatial audio experience. Sort of like, if we're sitting in a room talking about the virtual podcast: oh, your sound is coming, spatialized, from where you might be sitting. And it was like, yo, that's the most overkill feature to announce if you're just on a Zoom watching some media together. Why is Apple doing this? Obviously they're so patient; they've just been layering these capabilities brick by brick. So now, finally, we're seeing their early-adopter headset kind of device hitting the market. Best believe they're not entering this product category if they don't have n other devices planned. The specs for this were finalized in 2016, because hardware has just such crazy lead times, so they knew then that this premium device was the way to go. And the specs are beyond anything; the Varjo, nothing else freaking comes close. So I'm super excited they did that right. Now "metaverse" is going to stop being a bad word, even though Apple doesn't even say "metaverse"; it's going to revitalize investment in this space, because developers now have this new ecosystem to build on, and Apple's app stores and ecosystems are typically pretty lucrative. So I think it's going to be a renaissance in AR/VR, and I can't be more thrilled about it. Okay, first of all, we had sort of the same feelings towards everything Apple announced. We were more on the excited side of things, even though we did see some negative comments on socials, people still calling it too expensive. The thing is, we understand the pricing: you cannot have what, five thousand patents, or 500 patents, more than that... you cannot have all these unique technologies built into this headset and have it be at
500 bucks, which is like the new headset that's coming out from Meta. There's clearly a difference here. And like you mentioned, this is gonna be the first version, for developers to start building, and then the other models, which will come at a cheaper price, probably with lower specs, will come out eventually. Now, one thing you brought up was social AR. We come from a background where we used to create AR experiences a lot, for brands and for our clients, and I'm sure you, working in the same field of augmented reality, know a lot of people who create social AR experiences, for themselves, for socials, or for brands. Do you think that someone who creates AR experiences, or is into VR, should quickly be making a switch into creating for Vision Pro? Do you see that being the future, rather than social AR? Or do you think, no, they will each have their place? Or, no, this is going to take over what we do right now, where we essentially take our phone out and have the AR experience? And the reason I ask is because the software they're partnering with is Unity, and a lot of AR experiences are built in Unity too; Google is using Unity. So now there is this mindset of, okay, if I go and learn Unity, I can build for Apple Vision Pro and I can work with Google's ARCore as well at the same time. So what do you think about that? You know, the way I break up AR experiences is into front-facing and world-facing experiences. And the vast majority of filters that still drive impressions (I think it's like 80 or 90 percent of Snap usage) are front-facing; it's a personal messaging app. And I think it's the same thing if you're developing, for example, for Instagram: you go to story mode, you usually default to opening up towards your face. It's self-expression. So I think that's not going to change, and I don't see the Apple headset dabbling in that world.
In fact, I would actually criticize Apple here and say I had such higher hopes for Reality Composer when it came out in 2017. It kind of was cool, and you could send each other USDZ files over iMessage and open them up, but they never really turned that into a lens experience, like a Snapchat-style lens that you could easily share; they certainly never leaned into it. All of that happened in these walled gardens: Snapchat, Facebook or Meta's suites, Google's stuff, et cetera. That front-facing stuff, given how self-expressive we are and how much we care about ourselves and really like to take photos of ourselves, I don't see changing. Now, the world-facing stuff has always been hard on mobile. You probably know this; you talked about waving the phone around. I call it doing the six-DoF dance. It's like, okay, cool, and you've got this, if we really want to put the marketing hyperbole on it, "magic window" experience, but it's a freaking tiny window, right? Yeah. The fact that that window is now on your face, and that it's this crazy passthrough device... there was this headset company that Apple acquired called Vrvana, V-R-V-A-N-A, that had a similar demo, just really high-quality passthrough. Combined with the updates they've got to their mapping, like, what is it called, I forgot, the room capture API, and their Object Capture APIs, it's gonna turn the world into a much more immaculate canvas indoors. So I think what's gonna happen is people are gonna love creating world-facing, room-scale experiences, and if you have those same chops in AR, I absolutely think you should be taking Apple seriously. Maybe not for Gen 1, because maybe they sell millions of units, maybe tens of millions, I'd be surprised if it's more than that, and so that's not exactly a massive, you
know, addressable market to make a bunch of money from. But again, that's the iPhone Pro; we'll get the iPhone and the iPhone SE eventually, so you should get into this ecosystem for sure. But I kind of see self-expressive AR, especially given you have the headset... and yes, there are ways to use neural avatars, to use the eye-tracking data to kind of render you under the mask. I love that feature, by the way. All the rumors were right; I made this laundry list of rumors with another person who's Vancouver-based, Tobias Chen, and all the rumors were absolutely freaking on point. So I think there's going to be a place for social media distribution with front-facing AR, and the world-facing stuff, I think, will find a nice home here in the Apple ecosystem. We may even see a throwback to, remember when AR/XR activations were super hot? Everyone was like, oh yeah, let's go play... there was this thing Disney had, I'm totally blanking on the name, they did a bunch of Star Wars IP where you could basically experience this in a theme-park format. I think we may see a resurgence of those sorts of activations built around these headsets. But yeah, that's how I think it's going to pan out. It'll just be a new canvas. It's not going to impact, perhaps, the self-expressive stuff. Gosh, what are the most popular filters on TikTok? It's like the green-screen filter, yeah, everyone just wants the green screen. Snap's is the one that turns you into an anime character, yeah, Snap's is usually the original character. Yeah, actually, yeah, because when we were making AR experiences, we would spend so long making world experiences, spend days, and the amount of impressions and views they got was incomparable to a face filter we made in a day. So it's like the effort and the
impressions. What you said was right: people really like the self-expression more than the world experience. The self-expressive one is almost invariably the better filter when it comes to brand exposure, and that's mainly because, like you mentioned, it's self-expression. What do you think about the whole Unity versus Unreal thing? Clearly we missed out on this; for some reason we didn't [ __ ] bring it up in our video, but afterwards I was like, holy [ __ ], I remember Apple and Unreal had beef over Fortnite at one point. What do you... yeah, they still do, that court case is still on, I think. It's never going to be resolved the way they started the whole thing. But for anybody that doesn't know: essentially Apple takes a 30 percent cut, I believe, on everything in the App Store, and Epic Games had issues with that, specifically with the way they were working with Apple, Fortnite being the game that makes a shitload of money. I assume that's where the problem started, but I don't know the full details of how the court case went. I do know that in nine out of ten claims Apple was found not guilty, and in one they apparently were; I did just a little bit of research on that. But generally speaking, what do you think about their choice of going with Unity, and how do you think that's going to play out? Do you think Apple made the right move? Well, are you Team Unity or Team Unreal, first of all? Honestly, both. I lean a little bit more Unreal, to be very honest with you, just because I love the photorealism, and even with all the geospatial data, it's just so much easier and more fun to get that stuff looking good in Unreal Engine, plus the whole virtual-production angle that they leaned into. So I've always been a fan of Unreal. I think it's no surprise to anyone that the vast majority of mobile games, on any app store, are going to be Unity-based. So it's very, very
popular amongst the apps that are actually making direct revenue for themselves, and obviously giving that juicy 30 percent cut to Apple. So yeah, I thought that was the elephant in the room; it's one of the things I definitely wrote about. Look, did Unreal, or Epic, shoot themselves in the foot with this Apple debacle? I don't know. We'll see how that plays out. Maybe you'll still be able to develop for the Vision Pro with Unreal; I suspect there will be a lot of AAA developers there like, yo, my game's based on Unreal, I need access to the low-level functionality that they seem to be giving Unity. So it does seem Unity got a pretty sweet deal. Whatever the reality OS, or whatever the new term for it is, xrOS, whatever it is, it does allow you... Unity files are treated as this lower-level primitive, so you can get access to a lot of the system functionality and build richer experiences, essentially. So I think that's really good for Unity. I think it probably also saved their stock price. I invested in Unity early and was kind of disappointed with how things were trending, and it was funny, after the event Apple was down two percent and Unity was up. So I think it's going to be great for Unity developers. It's obviously far more accessible to the average creator and developer, in my opinion, than Unreal Engine. Maybe if you come from a 3D-ish background it's a little bit easier; oh yeah, you use Nuke or whatever, maybe you like the node-based workflow. But C# is easier than C++; it's harder to mess up in Unity than it is in Unreal. So look, those are the two big game engines, and it's funny how the industry always ends up in this oligopoly sort of format. What else is there? Okay, yeah, you've got Unity, Unreal, and
there's Godot, and who the hell uses that? Maybe some niche games. So I think they're gonna have to support Unreal; to what level, I don't know. Maybe Epic will be on the hook to build their own stuff. If I had to speculate, I would assume there's some really nice deal between Apple and Unity where they're working together to build this stuff out, whereas with Unreal it's maybe more arm's length, but Unreal will still get enough pressure from developers to build for it. So I wouldn't be worried, is I guess what I'm saying. Oh gosh, I actually had a creator, Sam, who works on So Crispy Media; he did the Squid Game video for Mr. Beast. That dude was like, yo man, I've got all this stuff in Unreal, am I screwed? It's like, no, I don't think so; it just might be a little bit harder to port. So yeah, all these frenemy relationships, as I like to call them; these companies are friends in one way, enemies in another. What a tangled web we weave, you know? Who knows how that'll play out, but it does certainly seem like Unity got the better deal right now. A hundred percent, you nailed it. The reason a lot of people love Unreal is just the beauty and the photorealism you can achieve in real time; you've got Nanite, you've got Lumen. With Unity, it's easier to develop and it's more versatile. So we are not at all saying Unity was the wrong choice; it's definitely a fascinating choice to go with. It's just that we, as Unreal people (not "real people"), would have loved to see Unreal there, but unfortunately we didn't. I did see a screenshot of Unreal having the plug-in for Vision Pro, so you're definitely going to be able to develop for it; you're probably just not going to get as much support as Unity will be getting along the way. I think
Unity will have a better pipeline and workflow, so it's much easier to do, which is always a great thing to have when you're trying to develop apps and things like that. You know what's also funny, related to this, that you'll get a kick out of: did you see the freaking dinosaur content that was showcased? Yeah, it was one of the main demos, and this is obviously the who's who of making this amazing stuff, Jon Favreau et al., with the Hans Zimmer soundtrack; I can't wait to watch that full-length, with Prehistoric Planet 2. But apparently the dinosaurs were rendered in Unity, and I was like, wait, that's weird, isn't Jon a big Unreal guy, given all the other movies they've worked on? And I started digging into it: apparently for The Lion King they had Unity. Wow. That's the virtual-production thing on set, yeah. Maybe they built some proprietary pipeline; there's a company, Magnopus, and Ben Grossman, an Academy Award winner, was involved there. So yeah, it's interesting that those dinosaurs are freaking in Unity. And also, who's the gentleman that directed District 9? That movie that's, like, super old, a couple of years ago, right? Neill Blomkamp. He did a bunch of stuff with Unity too that was more cinematic. So I guess, as a couple of my Unity-purist friends were telling me, no, you can get some really photorealistic results from Unity as well, have you checked X, Y, and Z out? So I guess you can do that, but let's be honest, it's easier in Unreal right now anyway. So yeah. Well, yeah, I agree with that. Since we are on this topic, I want to talk to you about the recent obsession we have found. We discovered NeRFs, and we discovered that you could bring NeRFs into Unreal Engine 5, and that has been a very costly hobby, so we had to buy a drone and scan
environments and and this has really changed and like ever since whoever we talked to especially filmmakers directors people even people from Hollywood we are telling them that this is going to change the filmmaking industry this is going to change the production industry the reason is with a small drone a DJI Pro 3 Mini Pro 3 which is like not not that it's not you know not that expensive not that you know not the highest model we could scan environments that are photorealistic bring into Unreal Engine 5 add some filters and you know color grading we could get an environment that it would take weeks maybe even months to build in just a matter of minutes so it's been crazy how things can change with nerves and you know Unreal Engine and since you have worked a lot on nerves and I've seen Billy you've been doing videos but what do you think like if you are at this stage now what do you what do you think is going to happen in the future and I'm I'm so excited about neural Radiance fields and just generally how I think of it is like reality capture right because this technology isn't new right like photogrammetry has been a thing since the 1800s to be honest like this is like people were flying putting cameras film cameras on kites and doing like manual triangulation math to like you know create some real especially in geospatial and all this other stuff obviously I think like we saw this wave the first wave of what you just described happened with like games like Battlefront and you know Star Wars Battlefront and Call of Duty were to your points like would you want to create these like sort of complex Virtual Worlds you know it was easier for people to go like you know for Battlefront they went to all these like locations where the Star Wars movies were filmed right like and or like all the forest scenes and all that stuff and just scan the crap out of everything so we had this sort of like perfect Confluence of like sensors getting better like mirrorless Sony 
cameras, the a7R series, 50 megapixels, oh my God, and that's been around for six or seven years now. And then you saw this crap, sorry, this crop, of cameras, Freudian slip, but not to get ahead of myself: these cameras got better, so people could go capture even for Call of Duty. Hey, let's just go scan stuff in a nondescript parking lot in Los Angeles and use that to kitbash and populate the complexity in a scene. So this idea of digitizing reality and then kitbashing it started with photogrammetry, and the output of photogrammetry was in many ways closer to what engines could deal with, because it's just a triangulated mesh. But at the same time it had all these problems, which NeRFs have today too and which I think will also get solved: you end up with this crappy-looking texture atlas, and you usually have to re-topologize it and do all this other stuff to it. You started seeing that in AAA games, and then suddenly you saw a bunch of Blender artists doing the same thing with photo scanning and projection mapping. I'm blanking on the gentleman's name right now, but he created this sci-fi, futuristic, dystopian Tokyo world. Ian, yeah. A couple of years ago, just using RealityCapture or Agisoft Metashape combined with basic projection mapping of the plate, to create these set extensions and complete scenes. To me that was like, yo, this took Weta Digital and ILM an army of people, and a solo artist can do this now.

I think NeRFs added onto photogrammetry by allowing us to capture a lot more of the complexity of reality that photogrammetry sucks at. Photogrammetry uses multi-view stereo: you compute these depth maps, and then you do some sort of Poisson meshing to create continuous surfaces. But what do you do with reflections and refractions? What do you do with transparent surfaces? How do you deal with really thin, ornate structures? All of these are failure cases for photogrammetry. And this line of essentially pseudo-light-field-style research, with Local Light Field Fusion, the DeepView paper, and then the really seminal NeRF paper, brought that volumetric, light-field quality to reality capture at some semblance of scale. The moment of scale really happened last year. I remember in January I was playing with NeRFs, and you had to use Mip-NeRF, and it took like 12 hours to train on a TPU. Then NVIDIA came out with Instant-NGP, and suddenly on my home machine you could train a NeRF in like a minute. That just blew my mind. And then with Luma you can do it on your phone: hit a button, it's doing it in the cloud, hit another button, you've got a video.

So I think the ability of NeRFs right now to capture more of the complexity of a set is really good for all the use cases you brought up. If you want a virtual set digitized, NeRFs are the way to do it right now. Maybe you still capture the photogrammetry so you have continuous surfaces, or you derive it from the NeRF scan through various techniques, or just export it out of Luma as an .obj or .ply or whatever, but there's still some stuff where it falls short. So, to answer where I see this going: I think it's all going to be about dynamic NeRFs. Right now we're really good at static capture; let's assume the world is static, which is how photogrammetry works best. And now with NeRFs you're seeing some really cool things happening. What if you start imbuing semantic understanding into each
voxel, so you can classify each voxel? Then, if you wanted to just type a text prompt to delete a bunch of crap from your scene, boom, you can do that, and you use generative models to fill in the holes. So I think we're seeing this beautiful unification, and you'll see a bunch of papers like this at CVPR, which is coming up in Canada, actually. You're going to be there? No, unfortunately I'm going to be stuck working on a Maven course, but I'll be at SIGGRAPH, so if you'll be at SIGGRAPH in LA, I'll see you there. But you'll see a bunch of these NeRF papers around 4D NeRFs, and also about how you take incomplete captures of the world and fill in the holes yourself. You mentioned it's an expensive hobby. What if you could just walk around at ground level with the capture that you need, because you've got the micro detail, while the macro detail already exists in datasets like Google Maps has, or Apple Maps has, or Microsoft has, or a myriad of other aerial imagery providers? Could we auto-complete your scan with a model that has seen the diversity of everything that's in Stable Diffusion, for example, but has also imaged the world from every possible angle? So those are the directions it's going.

And really, for all the stuff you and I care about, it's people, places, and things; we need all three. We're doing really well on places and objects; we need people. So this line of 4D NeRF research, on how you capture a full-body volume of somebody and then animate it, not the MetaHuman thing, which is an explicitly modeled thing that you're driving, but as a neural representation, is also very exciting to me. I think we'll keep seeing people push on those fronts: how you start doing 4D captures of video, essentially; how you make it easy to auto-complete, edit, and transform these NeRFs in a native format; and then finally, how the hell do we deal with humans? And maybe what Apple showed is a hint at that: whatever avatar they made looked very neural-generated, it didn't look like an explicit volume, similar to some of the research that Meta has, something called Codec Avatars. So yeah, those are my pontifications on reality capture and the NeRF era and where stuff is going. It's going to be pretty exciting, for sure.

I'm going to clip this and send it to Luma, and I would say this is our feature request for the upcoming six months. Dude, what you just described is my wet dream, I'm not even joking. Just imagine: right now Luma AI does have the capability of recreating your environment, and essentially what it does is, instead of cropping out exactly what you scan, it adds a globe sort of look to it as you bring it into Unreal Engine 5, which makes it look less janky at the edges of your scan. Which is fair, because you didn't even scan that part of the sky if you were outside. But of course it doesn't really do a generative fill; it's not actually creating clouds or sky based on your scan. So just imagining when that happens, it's going to be so fun. And this is the best part. We're always seeing people complaining about how AI is taking things away: should I learn sculpting, because sculpting might go away, should I learn this, because this might go away. And the one thing I know is not going away is digitizing the physical. Like you mentioned, we're always going to have this physical world, at least in our lifetime, I assume, unless there's a World War III or something. Positive thoughts, yeah, just saying. We're always going to have this physical world, and it's always going to be a human passion to go out hiking, to go out
running. And when you do, just grab your camera, your phone; that's going to be enough, and you can essentially capture anything, bring it back home, and turn it into a magical world using generative fill and all of that. Which is what excites me, because no matter how much the technology improves and how good the models get, you can still do this, and people will still love it, because it's based off the real world. And the real world is extremely complex; the patterns that exist in nature are extremely complex to try to model again. So you have that plus AI, and you can create fascinating things. That's why the majority of our content right now is NeRF-related: we genuinely believe this is going to be huge in music videos, in virtual production, in film, in a lot of different use cases, so we're heavily invested in using these technologies. So far we're only using Luma AI, but you use NVIDIA's Instant-NGP most of the time, is that right?

Most of the time, honestly, yeah. So, Instant-NGP is awesome. Nerfstudio is awesome too, although I don't know what the future of Nerfstudio's development looks like; it may continue as an open source tool and perhaps get funded by a couple of players in the space. But dude, the way I think about it is just a tool for the job. What Luma excels at: I love their web UI, it's so clean, like the keyframe editor. Still, if I'm making a fly-through animation, I love the Instant-NGP one more. So often how I like to use it is: Instant-NGP is the "will it NeRF?" question, because I can create a NeRF in a minute and it tells me. Then, when I know I want this thing and I want to export it to different places, I love to use Luma. If I'm out and about capturing something with my phone, Luma all the way. At the same time, if you want to get the highest-quality NeRF output, you really want something like Mip-NeRF 360 to get rid of all the floaters. And if you notice, Luma has this meshy quality to it; it looks a little more solid at times than Instant-NGP, which has this sort of hazy, more volumetric quality. I've tried to ask, is it some hybrid representation? And they're like, no, it's a native NeRF. All right. So I think they're all freaking awesome, and Luma is just such a great, accessible way to play with this stuff, because you don't have to deal with the command line and all that, which a lot of people don't want to do.

You posted a video, I think it was a few weeks or a few days ago; was it an Indian temple that you changed into Minecraft? That was a couple of months ago, yeah, I think back in March. No, but there were two versions; I think one was six months ago. So I want to know about the recent version: what was your pipeline to do all of this? You captured the environment first; was it with your GoPro? Can you walk us through that? Because it looked really sick.

For sure, yeah, it's a pretty simple workflow. It's vacation footage: a GoPro, sort of at chest height, walking through on a path, nothing too fancy, to be honest. Then I created a NeRF that followed that exact trajectory. And there are two ways you can reskin it. In the last video that you saw, I was just doing basic Stable Diffusion image-to-image, and this was before ControlNet came out. I don't know how deep you all have gone into ControlNets? I haven't. Oh, you might actually love it, especially if you're doing NeRFs; I think you should make that part of your workflow. So, image-to-image is chaotic: it basically takes your image as a basis
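An aside on the "meshy" versus "hazy" contrast above: a NeRF never commits to a hard surface. It renders each pixel by accumulating color along the camera ray, weighted by how much density the ray has already passed through, which is where the haze comes from. A minimal pure-Python sketch of that quadrature (the function name and toy values are my own illustration, not code from Instant-NGP or Luma):

```python
import math

def render_ray(densities, colors, deltas):
    """NeRF-style compositing of samples along one camera ray.

    densities: volume density sigma_i at each sample (>= 0)
    colors:    RGB tuple c_i predicted at each sample
    deltas:    spacing delta_i between adjacent samples
    Returns the composited RGB color and the accumulated opacity.
    """
    out = [0.0, 0.0, 0.0]
    transmittance = 1.0  # probability the ray is still unoccluded here
    for sigma, rgb, delta in zip(densities, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)   # opacity of this sample
        weight = transmittance * alpha           # its contribution to the pixel
        for k in range(3):
            out[k] += weight * rgb[k]
        transmittance *= 1.0 - alpha
    return tuple(out), 1.0 - transmittance

# One very dense sample acts like a hard surface: the ray is fully absorbed.
solid, opacity = render_ray([1000.0], [(1.0, 0.0, 0.0)], [1.0])
```

When many samples carry small, spread-out densities instead of one dominant spike, each contributes a partial weight, which is exactly the semi-transparent haze described above; a mesh export, by contrast, forces all of that weight onto a single surface, hence the more "solid" look.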
you know. So, how diffusion models work is that they denoise your image. When you provide an image, it approximately blurs the image, uses that as a basis to refine, and then adds all this detail. But the problem is there's no temporal consistency: the way it blurs and denoises every image can be different, even if you try to pin down your seed and change some of the variables. So it'll look like this jittery fever-dream mess, which a lot of people love. Now, ControlNet-based tools: Kaiber AI is a really good one if you want to experiment with this stuff in a web UI, or you can use the AUTOMATIC1111 tools. I use both; they're both great. Just like we were saying about Luma versus Instant-NGP or Nerfstudio, one gives you more control and takes a little longer; the other one is good for "let me just throw something at this and get something back."

So you make the fly-through with Luma, and you can export the depth map, right? Yeah, same video. Then you throw it into these tools. You don't always need the depth map, either; you can just take the video. ControlNet will basically either use the depth map you provided, or it will approximate one using MiDaS depth estimation, which is this really awesome Intel depth estimator that produces pretty amazing results. And then edge maps: it'll extract edge maps from the video too, to guide the image diffusion process. That way you get far more temporal coherence. That combination of a depth map telling the model, hey, from this camera viewpoint here's all the structure in the scene, is awesome. But then you have flat surfaces with no depth detail for where the windows are, the edges, the crevices; all of that gets encapsulated by the Canny or HED map, whichever you use; they're different ControlNet methods. So when you put it into something like Kaiber, it does depth, it does Canny, and it basically lets you give a text prompt to reskin that video. It's so much fun; it almost feels like reskinning reality.

And if you want to take that a step further, kind of off the NeRF thing we were just talking about, I pulled this up while you were talking about your wet dream for NeRFs: if you look at this tweet, you can check it out later, it's the same thing applied to NeRFs in an iterative fashion. It's not just applied to the 2D output; it's applied to the NeRF volume itself, in a 3D-aware way, sort of a way of brute-forcing it by gradually, incrementally restyling all the input images and then making a NeRF from that. So if you want to make it snow, you can do that, and you get a native NeRF at the end of it. A little brute-force and compute-heavy, but it works.

Sorry, did you use a text prompt to change the images, or an image reference? You can do both. If you use Gen-1 to do this stuff, it follows a similar process; Gen-1 also creates a depth map and uses that for temporal coherence, with some other things. In Gen-1 I like to provide image prompts a lot more, but in Kaiber, with ControlNets, oh gosh, just text prompts: you iterate until you get the right Minecraft one, and it just looks magical. It's kind of wild to me. And then I tried doing the reverse also: I took a Minecraft video and turned it into reality, which also worked really well, which is kind of surprising. It's on my Twitter, check it out. And then I suddenly think of all the kids growing up today that are
working in Minecraft. They know a 3D tool; it's Minecraft. They literally build worlds in it, and some of these kids are crazy; they're pulling in Google Earth data and creating entire cities. So even if you only know how to model with voxels, you can now artificially upres it. Yeah, you posted about the Minecraft builder becoming the metaverse architect. That's the one, totally. That's what's going to happen. Minecraft metaverse builders, too: I saw some of the videos y'all are making with the drone scan you did, getting the characters going in. That's the future.

First of all, thank you for sharing your workflow. What is your definition of the metaverse? Because clearly Mark Zuckerberg, Apple, all these different companies and people have different views; even Tim Sweeney has a different view of what the metaverse looks like. What is yours?

Oh my God, this word is so loaded. The way I think of it, very simply, is as the spiritual successor to the 2D mobile internet we know today. It essentially mirrors the physical world we embody, in a far more natural fashion. I love the term that Apple has given to this, which I've been using since at least 2016: spatial computing, and spatial media. It embodies the spaces and places we care about in a more three-dimensional way, but also us, with metahumans and avatars. To me, everything since Xerox PARC up to now, even the iPhone mobile era, has just been rectangles stacked on a screen, and obviously the world we inhabit every single day is dimensional. You see a doorknob, you know how to open it; you look at a person, you want to shake their hand, you know how to make eye contact, how to navigate a grocery store. Architects have been the original metaverse architects, in a way, because they've been conjuring up these dimensional spaces that we know how to navigate and get around in. Hey, some malls are harder to navigate than others, sure, but they've been doing that. So I just view the metaverse as the spatial embodiment of computing, where computing reflects us back in this more dimensional, spatial-first fashion.

But it's not just spatial; it's also language. And I think that's part of why AR and VR didn't take off, even though the dream and the vision of creating more intuitive computing was always there. We always knew we wanted some sort of Jarvis-style interface to talk to, an embodied agent that helps us with stuff, and I think we're seeing that happen with this wave of large language models, and now these multimodal large language models. It's going to be very exciting. So AR is going to replace mobile, and I think VR as a form factor is going to replace the desktop experience, and it's all additive. As far as decentralization goes, whether it's centralized or decentralized is orthogonal, in my mind, to the metaverse definition. The world we live in today is largely centralized around a few players; I wouldn't be surprised if that continues to be the case, and I also wouldn't be surprised if we see a pretty large decentralized social network, computing platform, and distribution platform emerge as well. So that's my take on the so-called metaverse.

I love that. We share the same mindset when it comes to the metaverse. It definitely has to get closer to the way we interact with the physical world today, and it's clearly not there yet, which is why I believe so many people were thrown off by it. In the web3 space people were saying, oh, it's already here, but I still think
we're still far away from it. Even the headset Apple showed isn't out yet; they said they'll release it next year. So it's still going to be a while, but it's definitely heading in that direction, and we see that as well. By the way, I can go another 30 minutes. Perfect, amazing, thank you for letting us know.

You have been experimenting with a lot of new tools. Just the names that came out of this podcast, I think I can list 20 or 25, and if I go onto your Twitter I can get another 10 or 20, maybe 150. We're trying to catch up with all these new tools, and honestly sometimes we just can't; we can't try everything. Where do you get all this information?

It's like drinking from the fire hose, man. It is a full-time job just to keep up to date with all the stuff that's happening, let alone learning and building things with it. There's almost too much, and I feel like I've only scratched the surface; there's maybe 20 percent of the landscape that I can reasonably put my eyes and attention on. And there's so much happening just in the text LLM space, for example, that I rely on other folks like me. So I would say it's things like what we're doing right now: conversations like this, Twitter Spaces, dare I say it, people's newsletters, that help me keep up to date on all the other stuff, because I think the magic is always at the intersection of these things. But look, the FOMO is also crazy high. You alluded to this earlier with social media: this anxiety of there being almost too much. I think that's only going to get worse from here; that's maybe the carousel you were alluding to. There are going to be new things coming out every one to three months for at least the next two to three years, if not longer; I truly believe that. And when even reputable researchers say things like, oh yeah, at some point AI is going to be making AI, I'm like, oh my God. We almost need our own AI agents scouring the web based on our interests, giving us a newsletter summary; that's maybe how we'll make sense of it all. But we can't do it all. The things I'm passionate about are blending reality and imagination; I'm more passionate about visual creation, and I'm not going as deep into all the audio models out there. Other people are doing a really good job of that, so they give me the distillation.

I've got to say, my addiction in all of this has been Twitter. Say what you will about the platform. I've almost stopped posting TikTok videos; I need to get back on that and push my account over a million, it's almost there, so I've got to do that. But Twitter has been my obsession for the last three months, and that shows in my growth: three months ago I had two or three thousand followers, and now I think I'm at 35k-ish. That's been fun, and I want to keep pushing there. One conquest that's always eluded me, by the way, is Instagram; I just fail at it. I often ask, Kim, how do you do it? What do I do? I'm trying.

It's funny how different all these platforms are. With what you mentioned about Twitter and TikTok, we have the exact opposite problem: our Instagram is going well, whereas our Twitter is suffering. Suffering as in, it's just a different way to deliver the same value you're trying to deliver. So it's really crazy; you have to contextualize for every single
social media platform when you're creating content for it. Or maybe an AI agent will do that for you down the line, right? Yeah: you talk about what you want to talk about, and the AI agents, one for Instagram, one for Twitter, one for TikTok, will package it up in the best way possible and send it out, and that way it will work.

I love that; it's such a good point. I think those are the aspects of AI that we underestimate. Everyone goes back to the South Park thing, "our jobs are going to be taken," but these are the aspects of creation that are just mundane drudgery: taking that video, chopping it up, turning it into a thread, all this stuff that's really just transforming media and pruning it down. So hopefully creators' mental health improves with that. But there could potentially be a dark side to it too. This is where I've got to go back to Mr. GaryVee: you've got to do 10 TikToks a day. And I'm like, homie, I do one a week; I can't even imagine, maybe if you have a team of a bunch of people. By the way, I'm looking for editors, so off-thread, if anybody's watching and is looking, you know who to call. Exactly, and if you want to speed-run the AI creation stack, I'll give you that; you can help me with some spicy edits. But the dark side of this could be: if our efficiency as creators goes up, the expectation might be that for creators like us to blow up on a platform, we've got to do 100 pieces of content a day, because the floor rises. It's not just the ceiling, where a studio can output content that rivals a Marvel and Marvel can set new standards; I think the floor might start rising too, and I definitely worry about that. We as creators need to make sure we take advantage of automation to live a freaking better life, not just create another treadmill of our own design where we slowly keep increasing the speed. It's like the frog that got boiled: if you want to boil a frog, you just do it very slowly, and then, before you know it. Sorry, you wanted to say?

Yeah, since we're on this topic, I'm going to reread one of Bilawal's tweets, where you talked about the evolution of content creation: how YouTubers had to learn so many things, lighting, audio, editing, thumbnails. So when we create YouTube videos, I'm editing in DaVinci Resolve, making a thumbnail, then doing subtitles in CapCut; we're using multiple pieces of software. And then there's TikTok, where, as you mentioned in your tweet, you can make everything in a single app: music is there, subtitles are there, titles are there, you can even do split screen, green screen, whatever you want, and you can make a few of them in a single day. And now the next stage is generative AI. How do you see this evolution impacting content creators? We talked about consumption, and yes, there will be a lot of it, but you are a content creator; how is it going to change for all of us with all these AI tools coming out?

I think it's a really good prompt. TikTok turned the creator stack you needed into a single application, and if you combine it with CapCut, which has even more advanced stuff: I love what the ByteDance product teams are doing. They've got this awesome pipeline where the advanced stuff from CapCut eventually graduates into TikTok; they almost test things there, and then it pops in, and it's amazing
right you're using resolve but still you're using like cap cut to do titling because it's easier you the Motion Graphics look better it looks more polished it's faster the transcription is amazing like so and it's also sorry to cut you off it's also one more thing it's about psychology as well so people are not used to like the way cap cut videos look like let's say there's a specific type way of doing subtitles like the psychology of me seeing those types of subtitles if they're made in cap cut like I I prefer to watch those videos sometimes is just one example but like people's psychology also changes and the way they consume and the the things they pay attention to but sorry to cut you off it's beautiful point and the preference has changed right like now you see a bunch of like the classic chopped video vertical with the yellow tax rate like that's become the aesthetic du jour and that'll evolve but to like how will this change the mobile creation I think the phone will turn into a studio in your pocket it already is in many ways it'll become even more of that right like so I think again it speaks volumes that the most popular effect on Tick Tock is the green screen filter like it is this flexible tool right it's a flexible tool for a Creator whether you're like a business savvy like one of a bunch of the AI creators like Rachel Woods you can like take a screenshot of something pinch yourself down now suddenly you're doing picture in picture in a presentation you can use it like that if you're a comedian you could be like going to Google image search and find a bunch of backgrounds and you create a virtual sets right like instead of doing a Nerf you just like go on a street view or you're going and grabbing a photo from like image search and that's your virtual set so this one primitive turns into these things that otherwise needed a studio or like chroma key and then you're doing all this other crap right like so I think that'll only get supercharged with 
generative stuff right like the ability to do this sort of video to video filter stuff that we're talking about granted some of the things we talked about have Nerf as an input but video is an input just as well works right like so I think like all that stuff is just going to get supercharged the ability to create virtual sets to create virtual humans to hold props and to also be able to like sort of create these more Dynamic virtual camera moves all with the 4D Nerf stuff that I mentioned is all on the horizon and so like I kind of view it like if you want to create like short form like 60 Minutes 60 second or less type content more and more people will be able to do that in an app just like I mentioned the script sort of opened up video editing for a new audience of creators I think like these like these effects these type of effects will get opened up to another audience or creators and like you know Tick Tock it's like these Trends sort of pop up and they take on a life of their own I remember when the Clone filter effect came out this is right when the raid Area 51 thing was happening and everyone's talking about World War III is going to start it was like raid Area 51 and then World War III it's like oh my God society and it's like everyone was like pretending to be like an army right like with their like multi-clone effect and using it as an effect right so you never know the ways people will use this stuff but the fact that platforms like Tick Tock and now YouTube's gonna and YouTube's publicly said they're investing in generative AI effects obviously meta is going to do it too we're just gonna see this kind of crazy feedback loop that doesn't exist or is much slower of a feedback loop in the way you and I create content where we're doing this thing we put it on you know it gets views and we figured out versus I just made the thing I put it on there and somebody's like I like this I want to use the same Primitives that you came up with and sort of remix it 
kind of playing Photoshop tennis, if you will, if you remember that trend on Reddit where you could post an image and people would additively add to it. So it's going to turn into a full-blown movie studio, and I think short-form stuff, more and more, will be made natively inside these applications. But at the same time, given there's this arms race of generative AI happening, I think we'll also see people applying the virtual production model, which lives in parallel to that. In terms of product metrics, though, most content will probably just be created in an app, because you'll have no reason not to. Like, who uses Adobe's mobile editing app, what's it called, Rush? Either you're in DaVinci and Premiere or you're in CapCut. Yeah, that makes sense. Exactly right. And CapCut's interesting because they're on desktop now too, right? So it's again that same tug of war, that confluence, people coming from two angles and meeting in the middle. But I'm excited. What a time to be a kid today with bold ideas. I look back at my crappy show reel, with clothes literally drying on a string on the roof, us playing with lightsabers, and this shitty 3ds Max render of things going down. Now, a kid today, you've got a 4K camera in your pocket, you've got AI agents working at your behest, and you can chop it up and do what used to require a mini YouTube team, with a phone in your pocket. So what a time to be alive, and it'll only get more exciting in the future. I want to finish off with one last thing, and it's not necessarily just a question for you, but a question for all three of us, and perhaps everybody watching in the comments. I'm going to start by answering it myself. I want to talk about what we are most excited about with
all of these technologies. We did talk a little bit about Vision Pro and NeRFs, so I mean maybe more than just that. Personally, what I'm excited about is something I think I read in the comments: you could use AI to train on all the content you've uploaded as a content creator, and on your audience, and right before you upload a video, you could test how that video will perform based on the audience you have and all your previous videos. Sometimes we think about thumbnails and we don't know which thumbnail works well; with AI, it could do the testing and tell you, based on your previous results and everybody else's thumbnails, this one is going to work better, this one is going to have a higher click-through rate. So I'm excited about that technology; I want to see it coming soon. What about you guys? Maybe you want to go first. You know, I'm less excited about a single technology and more about how they come together, and the thing I'm excited about builds on what you said, to be honest. I alluded to this notion of us being able to author content at a higher level of abstraction, with these agents, with these tools, with these models, these superpowers, whatever you want to call them. To your point about being able to create variations of content and A/B test stuff: I was talking to the Premiere and After Effects PMs about exactly this. They were talking about MOGRTs, if you're familiar with that feature in After Effects, where you can create templates of text, text motion graphics, and then in Premiere, with a CSV file, you can dynamically update the copy. If you think about anyone creating these shorts for, like, 30 different languages, how much of a pain it is to do that manually, or you're creating an ad creative that needs to
hit five different markets, for example: there's a lot of manual work there. So when I think about authoring content at this higher level of abstraction, I don't think text is it. Everyone says text is the be-all and end-all, but we as visual creators know the value of having a virtual tracked camera and being able to do stuff like that. Using all those modalities, we'll be able to describe a video kind of like we describe a web page with HTML today. In HTML there's this notion of the document object model: here's the title, here are the headings and subheadings, here are a bunch of iframes with content embedded in them. I'm envisioning a future where we can author a creative, let's say an advertisement or a short movie, at that higher level of abstraction: here are the objects and spaces in a scene, the locations; here are the entities in a scene, the humans, the actors; here's more information about them. Then you can have this content be responsive to whatever objective you have. Say you want to take that same ad creative, and you're talking about NeRFs and all this stuff, right? Let's say it's an ad for a new freaking Hyundai car. You've got this type of talent, you've got this backdrop, you've got the car, you've got this hero shot. Right now that would be an expensive RED or Alexa shoot in some LA warehouse. Now imagine we authored this at this higher level of abstraction: you did the mocap, you've got all the pieces down, you composed the scene graph, this document object model of the video. Then I want to localize it to India? Boom, swap out the background: instead of Seoul, South Korea, it's the freaking Mumbai skyline. Swap out the talent with whoever you want, this, that, and the other thing. That ability for content to be responsive and adaptive is so exciting, right? And when I
think of things like that and explain this to a bunch of creators, there are a lot of people in the industry I talk to who are skeptical: this just doesn't look good, it doesn't have control, or they're like, [ __ ] this, they're scraping ArtStation, and there are all the ethical concerns too. But when I start talking about things like this, they're like, holy crap, it's not zero-sum, right? It's not like these are the mediums at our disposal and now machines can automate them away. We're going to create these newer canvases for our imagination, and we couldn't possibly author this type of content if we didn't have things like Unreal Engine, neural radiance fields, these generative models, these large language models, these text-to-voice generators, all this stuff. In confluence, kind of like Voltron coming together, they will let us create really, really amazing things. So I'm excited to see how media consumption and the way we author content changes and goes beyond flat video into this higher level of abstraction. I love that. My personal excitement really translates to tools that speed up the process, so I can focus more on the things that I love rather than spending time on the things that are time-consuming or manual labor. In my case, I would love to see a tool that can edit videos for me, you know, cut the scenes. Imagine you give it your script as text, because what we do, for example: we shoot, like, 50 clips, he's talking, I'm talking; I give it the script: cut everything, put the clips in order, do the sound, add some sound effects, and I'm ready. Oh my God, I would queue up for that tool right now. I promise you guys, we can have another episode later on, probably this year, and probably some of the stuff we talked about today will have already become a reality, and we'll look back and be like, do you guys remember we talked about this? It's going to be really fun. I
definitely, you know, noted both of your excitements, what you guys are excited about, and I would love for that to happen as well. Thank you so much again for your time. It was such an honor and such a blast speaking with you and learning about the way you do things, and I'm sure it's helped so many people watching as well. Is there anything we need to know about what you're working on before we go? Something people can look forward to on your Twitter, on TikTok, on your website? I'll put every single link in the description, your YouTube, your Twitter, TikTok, everything, so people can really follow you, because I've personally followed you for quite some time now, and the good thing is I learn a lot every single time you post something; it's a learning lesson for me, so I think it would be for everybody else as well. Are you doing a newsletter at the moment? I do, though I'm not very regular with it, unfortunately. I have a Creative Tech Digest; when I have things to say, I put them there. But largely, Twitter is where I'm openly sharing my workflows, so if you're interested in either the cool new stuff that's happening or how I make some of my content, I put it right there. If I were to plug one thing, which I'm pretty bad at plugging, I'm still getting used to that part of the influencer life: I'm part of this group of AI influencers on Maven, and we're creating our own AI courses. I have one coming out on multimodal AI creation. It starts right after the Fourth of July; enrollment is open now, and we've already got a great cohort. Honestly, I've got most of the information available on my Discord, on my Substack, and on my Twitter, but if you want a little more accountability, and if you want to (a) understand all the models at your disposal, and then (b) understand their pros and cons and where they're
going in the research trajectory, and then how you put them together with very simple tools like CapCut to create some of the types of content we talked about, basically 3D scanning, reskinning reality, creating virtual avatars, making all those Balenciaga-type videos that have been going viral, there is going to be this generative AI masterclass on multimodal creation. If you're interested, just look it up: multimodal creation on maven.com. It'd be awesome to see you there. If not, hit me up in Twitter DMs; I try to check them at least once a week, and I'm always down to collaborate, always down to share thoughts. Thank you all also for reaching out; it was really awesome to be here. No worries, thank you so much. We'll put the link to the course in the description as well for anyone who's interested, and thank you again to everybody watching. We'll see you guys again next week with another guest, so thank you. Until then, ciao.
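The MOGRT-plus-CSV workflow mentioned in the conversation (templated motion graphics whose text is filled in once per market) can be sketched in a few lines. This is a minimal illustration of the idea, not the Premiere/After Effects implementation; the CSV columns, sample copy, and `render_variants` function are all hypothetical.

```python
import csv
import io

# Hypothetical localized copy, one row per target market, standing in for
# the spreadsheet a MOGRT-style template workflow would consume.
CSV_DATA = """lang,title,cta
en,Summer Sale,Shop now
es,Rebajas de verano,Compra ahora
hi,Summer Sale India,Buy now
"""

def render_variants(template: str, csv_text: str) -> dict:
    """Fill one text template with each row of localized copy."""
    variants = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        variants[row["lang"]] = template.format(title=row["title"], cta=row["cta"])
    return variants

variants = render_variants("{title} | {cta}", CSV_DATA)
# variants["es"] == "Rebajas de verano | Compra ahora"
```

The point of the sketch is the shape of the workflow: one template authored by hand, N variants produced mechanically, which is exactly the manual pain the speakers want these tools to absorb.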
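The "document object model for video" idea from the conversation, authoring a scene once and swapping nodes to localize it, can also be sketched as plain data. This is a toy sketch under stated assumptions: the schema, field names, and `localize` helper are invented for illustration and are not any real tool's API.

```python
import copy

# A toy scene graph for one ad shot, echoing the DOM analogy from the talk.
# Every field name and value here is illustrative.
scene = {
    "shots": [
        {
            "id": "hero",
            "background": "Seoul skyline",
            "talent": "actor_kr_01",
            "props": ["hyundai_suv"],
            "camera": {"move": "orbit", "duration_s": 6},
        }
    ]
}

def localize(scene: dict, shot_id: str, **swaps) -> dict:
    """Return a copy of the scene with the named shot's nodes swapped out."""
    new_scene = copy.deepcopy(scene)
    for shot in new_scene["shots"]:
        if shot["id"] == shot_id:
            shot.update(swaps)
    return new_scene

# Localize for India: swap background and talent, keep every other node.
india_cut = localize(scene, "hero",
                     background="Mumbai skyline", talent="actor_in_02")
```

The design choice mirrors the speaker's claim: because the creative is data rather than baked pixels, a market swap is a node update, not a reshoot.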
