Can Machines Be Truly Creative? (with Maya Ackerman)
Why this matters
Safety is not only about model behavior; this episode highlights second-order effects on people, institutions, and labor markets.
Summary
This conversation examines society and jobs through "Can Machines Be Truly Creative?" with Maya Ackerman, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Perspective map
The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.
An explanation of the Perspective Map framework can be found here.
Episode arc by segment
Early → late · height = spectrum position · colour = band
Risk-forward · Mixed · Opportunity-forward
Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
Across 60 full-transcript segments: median 0 · mean −1 · range −19 to 9 (p10–p90: 0 to 0) · 2% risk-forward, 98% mixed, 0% opportunity-forward slices.
Mixed leaning, primarily in the Society lens. Evidence mode: interview. Confidence: high.
- Emphasizes safety
- Emphasizes labor market
- Full transcript scored in 60 sequential slices (median slice 0).
Editor note
A high-leverage addition to the AI Safety Map that clarifies one important safety bottleneck.
Play on sAIfe Hands
Episode transcript
YouTube captions (auto or uploaded) · video bEQqpe-0tyM · stored Apr 2, 2026 · 1,681 caption segments
Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.
No editorial assessment file yet. Add content/resources/transcript-assessments/can-machines-be-truly-creative-with-maya-ackerman.json when you have a listen-based summary.
This whole idea of tying together intelligence, creativity, and consciousness is part of this rather desperate effort of humans to prove to themselves over and over again that we are the most wonderful, the most important beings in this universe. And I really think we need to stop and instead look at what is and try to face reality and accept our role within it, whatever it happens to be. I think evolution beats all of us. Blows all of us out of the water. Blows AI out of the water. Creativity is hallucination. The fact that we hate hallucination so much reveals a lot about our culture. It doesn't have to be addiction. We can have the goal of elevating humans. People need to believe in themselves now more than ever. We are going to see over time, over the next decade, this amazing improvement in human intellect and human creativity because of this elevation with AI. Not replacement, but real elevation. Humans becoming much better, much more capable because of AI, and that's what's going to keep us in the running. >> Maya, welcome to the Future of Life Institute podcast. >> It's so great to be here. Thanks so much for having me. >> Could you tell us a bit about yourself to start with? >> I am a computer science and engineering professor and founder of one of the earliest GenAI startups, WaveAI. And in my free time, I also make music and art. >> This is actually what we're going to talk about today. The main focus is creativity in humans and machines. How do we define creativity? Where do we start? >> There's actually a surprisingly straightforward definition for this seemingly elusive concept, creativity. A creative thing, something that was made, is creative if it's novel and valuable for its context. So a piece of art or music might have aesthetic value; people might enjoy them. Whereas a meal, you know, needs to taste good. A movie needs to be interesting. But those two fairly simple components allow for a rich definition of creativity.
And maybe say a bit more about those two components, especially the first one. Why is it important for something to be novel, for it to be an instance of creativity? >> Well, if it's just a small perturbation of something that already exists, we don't tend to appreciate that quite as much. It might still be work; it might still be valuable in some sense. You know, a copy of Monet's art on someone's wall offers some kind of value, but the person making it, we typically wouldn't consider them as creative, at least as somebody who could create something brand new. Now, novelty itself, just like value, is complicated, right? Because how novel is something? A new art form is much more novel than a new painting. Nevertheless, it's a helpful guide. >> Yeah. Perfect. All right. How do we think about machines that are creative? How do we measure creativity in machines, if we can? >> Well, this definition with novelty plus value, it doesn't focus on how you got there. It just says we managed to produce something novel and valuable, which is kind of cool because it's sort of agnostic to your process. And as a result, if a machine is able to reliably produce things that are novel and valuable, then we sort of benefit from admitting that it has creative capabilities rather than closing our eyes and saying it's not human, it's not creative. >> That's it. Period. >> Yeah. And so you distinguish between a creative process and a creative product. And it seems to me that perhaps we can agree that machines might be creative in the sense that they can create a creative product, but do we think that they can engage in a creative process? It seems that they perhaps lack the feelings of producing something while they're producing it. >> I'm so glad you asked that. If a process of any kind is able to reliably produce creative things, then it's a creative process.
So for example, actually, one of my favorite examples is evolution itself, or for that matter any process by which people believe that we ended up with flowers and plants and animals and, you know, all the wondrous things that we have on this planet and beyond. >> Mhm. >> Which are unambiguously creative, incredibly creative, way more creative, I'm sorry, than anything that humans have ever made. >> So if we say that that process is not creative, then what does it even mean? It loses all value. If a process that leads us to creative outcomes consistently is not considered a creative process, then the whole notion of a creative process is meaningless. It's vacuous. Then we're just married to whatever humans do. And then we're just using the word creativity to feel special. We're no longer scientifically looking at it at all. >> Yeah. Yeah. >> So you would describe evolution as a creative process, but in evolution it seems like there's no designer. There's no mind actually trying to get to some goal. It's, you know, mutation and selection. But a mind is not necessary for it to be a creative process, for you? >> Of course. If it works, it works, right? Like, if your developer, if your employee of any kind consistently creates amazing work, are you going to go and yell at them that they're not doing it the way that you're doing it, and so their work doesn't count? Right? It doesn't make any sense. This whole idea of tying together intelligence, creativity, and consciousness is part of this rather desperate effort of humans to prove to themselves over and over again that we are the most wonderful, the most important beings in this universe. And I really think we need to stop and instead look at what is and try to face reality and accept our role within it, whatever it happens to be. >> Mhm.
If we go back to the question of whether we can measure creativity, it seems that now we're talking about creativity in a very broad sense. So evolution is a creative process. Humans are creative. Machines can be creative. Is there something that >> Makes me feel happy? >> Is there something that these processes have in common that we can measure? So we can say, okay, evolution is creative to such and such degree, and humans are creative to such and such degree. Is there, like, a scale we can put creativity on? >> Measure by the output. >> Mhm. >> Right. Why? I think the only way you can meaningfully measure creativity is by the output. >> Mhm. >> Anything else about a process is just sort of whatever it needs to be. Humans really like to measure things by how they do things. It's called anthropocentrism. It began with us believing that we're literally the center of the entire universe and the epitome of anything that could ever be created, you know, which is cute, but has been proven to be false on so many different levels. And so the fact that we need emotion to be creative, and that our creativity tends to be very intent-focused, is interesting, and it's valuable to understand that from a human lens to help us be more creative. But we shouldn't be applying those same measures onto other entities that consistently make creative output. >> Mhm. But you say we can measure it. What are we measuring when we're measuring the creativity of some output? >> I think that there's a lot of subjectivity in this inherently. So what it comes to is, evaluating a creative thing is a massive direction of research within computational creativity, which existed for a decade before the GenAI boom. And we sort of acknowledge that a mathematical formula, while sometimes helpful in specific contexts, is not really a holistic way to measure creativity. So sometimes you can look at it from a social lens.
You can look at the critics and the general public, right, as two measuring points. And that also admits that in human culture, creativity is sometimes a matter of opportunity, right? It's a matter of connections. So it's this complex social thing. >> Mhm. >> And there are many, many other ways of looking at it, but this is kind of one extreme. On one extreme is the formula, which, you know, we sometimes apply within an algorithm; we measure art using a formula so that the machine can create better art. And on the other extreme, you can look at the social fiber of creativity. And there are a lot of other possibilities too. >> So would you propose any specific measure of creativity? Specifically, do we have tools from computer science that we can use to describe or measure creativity, like compression, for example? >> You could, in some cases. I've seen compression used, you know, within our algorithms. For LyricStudio, before we give people any lyric suggestions, we have our own internal way of measuring how good it is, and then of course we see whether things get picked or not. But that's very, very specific to our very specific system. So unlike the definition of creativity, which I could right away, very quickly, give you, when it comes to measuring creative output and its quality, there isn't a simple universal guideline for it that applies across all domains. >> Mhm. But you mentioned evolution as a creative process. What about animals? You write in your book about how animals can be creative and how this can help us see that perhaps we're not that special. So how can animals be creative? >> Or maybe it can help us see that animals are very special, too, right? Kind of the flip to what I said earlier. >> Yeah. Yeah. >> I love animals very much, and nature and all of that.
And I think it's amazing that the bowerbird, in order to attract mates, creates these amazing nests, and the aesthetic value of the nest is what is used to attract a mate. >> And those birds are really creative in the sense that they also integrate human-made objects. It's not entirely nature-made nests. If humans leave behind little pieces of glass or plastic, they might integrate them in really cool ways into those creations. So that's really cool. We also have a lot of evidence of intelligence, which overlaps with creativity, amongst various animals. Dolphins educate their young. Of course, various primates exhibit all types of tool use and intellect. And in some ways, some of the social behavior exhibited by animals is more cooperative than humans'. And I think there's a lot we can learn there too. I think it's beautiful that animals can be creative and intelligent, and that we're not alone in some of those capabilities, even if it doesn't manifest exactly the same way. >> Mhm. I'm super interested in whether we can develop a universal measure of creativity, and you mentioned this is not as easy as giving a definition of creativity. But do we have any signs of progress on such an endeavor? >> I think it's inherently subjective, right? We can have two paintings, and I can have a measure that tells you that this one is more creative, >> that picture A is more creative than picture B. But you prefer picture B. >> Mhm. >> Right. So there are kind of inherent limits to how much you can formalize this. >> Yeah. Yeah. All right. I can accept that.
So it's not perhaps like intelligence, which we might be able to compare between animals, even though there are huge differences there. We might be able to compare between different animals, between humans, and perhaps even in some abstract sense we can describe evolution as an intelligent process also. >> Well, I think evolution beats all of us. Blows all of us out of the water. Blows AI out of the water. >> On long enough time scales, I'm guessing. >> It's just very slow, yeah. So in that sense we have an advantage. It's just not a complete one-to-one. When it comes to human and machine creativity, it's not one-to-one. I think I kind of understand what you're getting at right now. >> But with animals, for the most part, I think we can say that we're more creative than other animals. >> Yeah. Yeah. >> Because we're just broader in the way that we're creative. We can be sort of more productive. Like you were talking about speed. We can go deeper. If you look at human creative outcome versus the creative outcome of virtually any animal, I believe that we would have an advantage there. But that's okay, right? Admitting that animals are creative, even in light of us being perhaps much more creative, that's all right. >> Yeah. Do you doubt that machines will become more creative than us? >> I think that machine creativity is different from ours. I think it's not this straight line of better than us. >> Mhm. >> One of my favorite examples that really shakes things up is around text-to-image models. So text-to-image models are kind of one of the most obvious demonstrations of machine creativity. You type a sentence like "dancing book" and you get the most imaginative pictures of dancing books. Especially if you use something that's a little less aligned, like Midjourney, one of the systems where the creativity has not been choked out of the machine. >> Actually, I'm going to pause you here before you go on with that sentence.
Say more about how alignment can damage creativity. >> Okay. Well, let's not forget to come back to comparing that to human creativity. >> Yes. >> Alignment was designed in order to take these wild creative beasts, these wild machines that literally just predict stuff and then agree with their own predictions. Okay, that's how machines create. They're like, I'm going to guess the next word or guess the next pixel. I'm simplifying, but fundamentally that's how it works. And I'm just going to agree with my own guess. They're generative, creative beings, and they were applied to almost exclusively creative applications before the big AI boom. The big AI boom comes, and investors along with their favorite entrepreneurs decide that the way that we should apply it is to realize a science fiction vision of the all-knowing oracle. >> So let's replace search. Let's turn this wild creative thing into something that consistently tells us the truth. And how are we going to do it? We're going to align it. >> We're going to make sure that it says what we want it to say all the time. And they were successful to a pretty impressive degree, but they will never be entirely successful. They will never be entirely successful in creating an all-knowing oracle, because an all-knowing oracle cannot exist. Humans disagree with each other on what the truth is in many, many important ways. Okay? In fact, if we want a list of things that are actual concrete facts that all reasonably intelligent people agree on, it would be not as much as we expect. And also, these machines are very poorly geared towards being fact dispensers. The fundamental mechanism of how they think is geared more towards creativity. So they're going to hallucinate forever. >> All right, back to comparing humans to machines when it comes to creativity. Can they become more creative than us?
How is their creativity different from our creativity? >> Okay, check this out. So, text-to-image models, right? They generate images really, really fast. And we're like, wow, no human could ever create images this quickly. These machines are going to blow us out of the water. Okay, here's a fact. Human beings on psychedelics, certain psychedelics and certain human beings, create images faster, with more creativity and more detail, in their brain than text-to-image models. A lot of people don't know this, right? We are not living right now with our brain utilized to its full capacity. And I'm not suggesting we should all be on psychedelics all the time, but it does show us what the brain is capable of. >> Mhm. >> And so actually, what we're seeing in these machines is very impressive compared to sort of the standard mindset of a human being. But even in that sense, we maintain a lot of different advantages. We are connected to the real world. We do have our own feelings. We know what we want to express. Whereas the machine, that's the reason we have so much slop. If you just kind of take what it gives you, blindly believing that it's smarter than you, believing that it's more creative, and you uncritically take its output and publish it or hand it in as your job or as your homework, it's often garbage, because it doesn't have your context. It doesn't know what you're looking for. It doesn't have your insight into your specific reality, your specific community, the specific purpose of whatever it is that you're trying to create. And expecting it to have that is absurd. >> Although you can give it a bunch of context, right? The latest models take, I don't know how many, tokens as input in the context window. So if you're trying to solve some work or homework task, you can provide a bunch of examples of what exactly it is you want, and it does get better.
The output does get better from that. But you're saying there's a fundamental limit here, or? >> No, no, no. It just requires your engagement. What you're saying is perfect. We're on exactly the same wavelength here. Give it the context. See what it does. Still doesn't get it? Give it more context. >> Mhm. >> Modify something, right? Do something yourself, you know, bring your full self. Then let it iterate. And after half an hour or an hour or two, you have something amazing. Perhaps something much better than you could have created by yourself. But hey, you collaborated with it. You didn't treat it like an all-knowing oracle. >> Mhm. This is actually an important point. And I think we're going to talk more about human-machine collaboration, which is one of the points in your book. >> Which is the point of my book. >> The main point. Yeah. But you mentioned hallucinations. And hallucinations often have a negative connotation. We are dissatisfied when our models are hallucinating. We don't want them to make up facts. Even humans hallucinating seems somewhat bad, because you're not connected to reality. Is it also an important part of creativity to hallucinate? >> Creativity is hallucination. The fact that we hate hallucination so much reveals a lot about our culture. >> Mhm. >> If you go back to the Industrial Revolution, when machines, you know, entered the scene and they were accurate and they did things right, then we wanted humans to do things right and be accurate, and we put them in schools where little kids have to sit on chairs for hours and memorize information. And then there was a big backlash with the Romantic era saying humans are not meant to live like this. And now we have machines that dream and hallucinate and create, >> and I think it gives us a real push to reconsider how we want to live. If machines are allowed to create, it's like, "Hey, hello.
We wanted to be the creative ones. We want to spend our time creating. What is happening?" And what's happening is that hallucination, the core mechanism of hallucinating, is actually at the core of thought. This prediction, predicting what's going to happen in the next instant, is a form of low-grade hallucination. There's great work by Anil Seth that zooms into this particular phenomenon. So we are constantly experiencing low-grade hallucinations as human beings, because we constantly predict what's going to come, and sometimes we're right and sometimes we're wrong, but it helps us function, this sort of constant prediction. And the machines we have today, the AI we have today, finally works the same way. And because it works the same way, because it constantly predicts and constantly assumes that its predictions are correct and creates through prediction just as we do, it's a hallucination engine, and it's those hallucinations that make it so much more intelligent than any AI that we had before, >> because it functions the same way that we do. It's sort of creating its own reality and then engaging with it by trying to predict it further. Like in predictive processing, we might look at a scene and then see exactly what it is that we want to see, in some sense, because we are trying to achieve a certain goal. So perhaps right now I'm paying much more attention to your face than the background, because I want to understand your reaction to what I'm saying. And you're saying this is also happening in machines, but it must be different, right? >> Let me respond to that with a couple of stories, because I think what you're zooming into here is really important. When I was in college, I broke up with my boyfriend and I missed him terribly. So anywhere I would go, I would think that he's there. I would mistake other people from far away for being him. And he was this tall, big Russian guy.
And I would just keep imagining that I'm seeing him, until one time I was sure I saw him, and the person came closer and closer. And I realized that it was a short, dark-haired Asian lady that I had mistaken for a big, tall Russian guy. >> I was like, "Okay, this needs to stop." So my brain was seeing what it wanted to see, right? And then there was a work by a man named Alexander Mordvintsev at Google, and that was back about a decade ago, in 2015. He took their image system and he amplified it. He was curious to see what would happen if you take an image recognition system and amplify it, without any intention to make art. He ended up creating this system that would keep recognizing the same object over and over again. It ended up being called DeepDream. Do you remember this hallucinogenic imagery with dogs everywhere? >> Basically, machines work the same way. They recognize images through prediction. And if you amplify it, you start getting the kind of mistakes that you get, you know, when you miss your boyfriend, but more so when humans are on psychedelics. >> Mhm. Yeah. And the images from DeepDream are strikingly similar to some psychedelic paintings, for example. It does make you at least suspect that there's something similar going on. I don't know about the underlying process, but the output, as we discussed, is the issue. >> Like, the human brain on psychedelics is a human brain amplified. >> Mhm. >> The machine brain, that particular image recognition system, amplified, is revealing the fact that the machine hallucinates. It's not an identical process. So we have not successfully imitated every aspect of the human brain, but we imitated something really, really core here, which is why we're seeing such incredible, unprecedented levels of intelligence and creativity. >> Is there something missing? Because it seems like the models are not fully able to compete with humans in all aspects yet.
So people talk about long context missing, or people talk about embodiment. What do you think is missing right now? >> Well, embodiment is coming. Embodiment is coming over the next decade. It's going to be really something. Context is improving. We really only imitated certain aspects of our brain. And computer scientists knew that. We simplified the neuron. >> Mhm. >> We know that the way that connections work in machines is not identical to ours. We sort of ended up leaning into what produces better output instead of perfectly imitating the human brain. Granted, there still are researchers that focus on imitating certain aspects of the brain. But my favorite distinction is that the human brain has well-known regions. We have a memory center; well, that one I suppose the machine does have. But we have things like a language center and a vision center, and different parts of our brain are better at processing certain things, and that has not been fully utilized, or, you know, remotely fully utilized, in LLMs. LLMs are a very specific form of thinking. >> Yeah. Yeah. Do you think they would do better if they were more segmented into specific regions that were more dedicated to, say, certain limited or narrow aspects of intelligence? >> Yeah, definitely. >> And why is that? >> Because, I mean, one demonstration of it is agents. Once you start breaking things down into smaller tasks, then an agentic system, because it has lots of these components, can do it better. But I think that there could be more benefit to mimicking more aspects of the human brain where the thinking processes are actually different, and it's not just breaking it down into tasks but more breaking it down into different forms of thought. So there's still a lot of opportunity for research and growth, >> and I'm not saying that industry is going to jump on that right away, but researchers definitely have a lot of work to do. >> Mhm.
What is a humble creative machine? >> A humble creative machine is one that does not insist on taking center stage. >> So, think of a brilliant friend. We all have brilliant friends, right? And that brilliant friend, whenever you try to work with them, kind of pushes you aside and wants it to be about them. They know how to do it, so they'll just do it, okay? Because they don't want to share. They just want to show their brilliance. They don't want to elevate anybody. And we can tell when that happens. And that's the vision through which most of our AI systems were brought to life, as a kind of replacement. The AI is the genius in the room. You're lucky if it's willing to take one little prompt from you, and then it goes off and does its thing and comes back, and if you don't like it, it's going to restart from scratch. It doesn't want to work with you, right? A humble creative machine is the amazing professor you had in college, the guru, or the really awesome, brilliant friend you have who'll sit with you, who'll listen, who'll step back, who'll do whatever it takes, as little or as much, in order to elevate you permanently, so that even if the friend has to move and you never see them again, you are elevated. You're smarter for having worked with them. And that's the kind of machines we need more of. >> That sounds like a wonderful vision. Do we need anything else than what we have now, or what else do we need than what we have now, to achieve that vision? >> Well, we just need to have that purpose, because those two friends, the one that elevates you and the one that always wants to be in the center of everything, might have the same intellect, but they have a different way of relating to you in the world. So it's that relational component. The reason that ChatGPT is successful is because they finally decided not only to improve the machine brain but to make it interactive with us. You can push back: oh, you didn't quite get that. Oh, can you change this?
And you can iterate with it forever, pretty much. And it cares. It cares about what you want. You can tell, right from using it. You go to a text-to-image model, and if you don't like the hair color of one of the characters it generated, maybe you can manage with inpainting, which, by the way, was an afterthought. But if you want it to be, oh, can you make this a little more gothic? Can you redirect the angle of the camera? Forget it. It's just gonna restart. >> But I think this is actually a problem with text generation, too. I will often have the experience of trying to work with the models, trying to get them to do exactly what I want, and regenerating three, four, five times, trying different prompts, and not really getting it. It seems like the model I'm working with is not really getting it. So, asking more precisely: do we need a different architecture? Do we need something different from the transformer to get us to a place where the models can learn flexibly as you're working with them? >> Well, let me give you two examples of actually older systems, right? Your question suggests, do we need something more complicated in order for it to be humble? Actually, there was some stuff that existed before that was already humble. So it's more of a mindset that we need to integrate, and yes, sometimes it's relevant to training, but it's really not a matter of a big breakthrough; it's a matter of intent on the developer side, on the investor side. So, okay, here is one of my favorite stories. This was a couple of years before the rise of GenAI, actually, it was four years before the rise of GenAI. I was at a conference, and my friend and colleague Robert Keller was there with his system Impro-Visor. It was a little, not very heavy model that was attached to a piano, to a little keyboard. And I started playing with it, and I would play a little bit of piano and it would respond. It would trade with me.
It's called trading, and this was my first ever successful piano improvisation session of my entire life. It played with me, it responded to me, and I actually permanently became a better improviser as a result of using that system. A system that did not boil any oceans, that did not have millions of parameters, but that was designed with the intent of elevating a human being. And then in my own product, LyricStudio, which we built from scratch, not using any of these massive systems, we have people telling us that they become better pen-and-paper songwriters. And some of them stop using the system. They stop paying for it. And some of them keep using it, because they still find that it's helpful to them even as they improve. It doesn't have to be addiction. We can have the goal of elevating humans. >> Do you think there's a divergence between the incentive of the companies, which is to have people paying for and using the models, becoming reliant on the models, and then perhaps what is in our best interest, which is something like you described: learning with the models, becoming permanently better at what you're trying to achieve? Is it the case that the machines won't be humble because it's not in the interest of the companies to make them humble? >> I think that that's how a lot of investors and entrepreneurs understand it. They want addiction. They want to serve you the all-knowing oracle that would make you reliant on them. That's an old model. But the truth is that in that effort, a lot of them end up creating one-hit wonders, experiences that are so shallow and so devoid of caring about us and what we want that we don't come back to them at all. >> And so ChatGPT, I think, is a pretty good example.
It plays on both fronts, to be fair, but I believe firmly that one of the main reasons it is successful is because it can be used in deeply profound, collaborative ways that have made people better writers. It can make you a worse writer if you treat it as an all-knowing oracle, but it can actually make you permanently better. It can expand your vocabulary. It can show you new sentence structures you have not considered. And a lot of people who are heavy users of ChatGPT, kind of power users, use it as a humble creative machine, and it's the most successful tech product of all time by many measures. >> Mhm. >> So there is a very powerful precedent for humble creative machines that I think somehow gets missed by the industry. >> I agree that perhaps the best users, the people who are best at getting everything out of the models, are using the models in that way, as a humble partner to create something great. But think of a lot of people, say you're in high school: there's a strong temptation to just hand off your essay writing to a model and accept whatever it outputs, and that's it. There's not much cooperation between you and the machine, and the machine isn't humble. It's acting like an oracle, providing you something that's probably mediocre, but maybe better than what you could do in the same amount of time. Isn't this a temptation we will increasingly face? >> It's like we have two sides to this, right? On the one hand, it can make us much better. On the other hand, it can make us much worse. >> Yes. >> And incredibly enough, it's in our hands, in the sense that how we choose to interact with it dictates this. I think people need to believe in themselves now more than ever. Yes, we're not the center of the universe. We're not the only creative entity, and we're not even the only very creative entity.
We now have machines that are also creative, and it's unsettling. But that should not be a reason to give up on humanity, to give up on our brains, to sort of set our brains aside and just use ChatGPT with our eyes half closed. >> Mhm. >> This is a time to push ourselves, to see how much further we can go. To stop worrying so much about how machines are going to be in the future, and to prove to ourselves that right now we're still smarter and more creative than our machines, and that we need to bring our full selves to those interactions. Because what's killing us is a mindset, not even the reality. It's the fear and the delusion. >> Yeah, that may be the case, although I'm afraid there might actually be something to AI just becoming better than us across the board. Do you worry that we won't be competitive in, say, 5, 10, 20 years? >> I don't believe that's what's going to happen. I think we're very, very attached to the science fiction model, to the point where no matter what happens with AI, we always think that we're about to be completely unnecessary and about to be overtaken. And, by the way, that it's going to kill us all, because that's what sci-fi has been pushing for so long. And I'm not saying that what we have right now isn't amazing. I'm a big lover of AI. I think it's phenomenal. I think it's brilliant. I think we need to concede that we're not the only creative species. But that doesn't make us useless. It doesn't mean that we're going to be overwritten. >> Mhm. >> And also, we have not reached our potential. The fact that on psychedelics we're more creative than the machines we have today, by far, in very concrete ways, shows that the human brain is so much more than we're aware of today. What makes us think we've reached the pinnacle of human intelligence, when if we look 200 years back there's such a massive gap? We're nowhere near our capabilities. >> Do you think so?
You mentioned that we have some sort of control over how we use AI, how we interact with it, and how we cooperate with it, as opposed to just attempting to rely on it as an oracle. But the counterpoint to that is some form of technological determinism, where it seems like we are discovering things, and there are enormous economic incentives pushing us in certain directions. If OpenAI does not make a certain discovery and implement it in a product, well, then DeepMind or Meta or Anthropic will. And it's not clear to me that this technological development is always going to land in a place where humanity is uplifted rather than replaced, or something worse. >> The bad stuff is already happening, right? While I believe in humanity and our capacity to stay ahead in many ways, the dark side of AI is already being played out. Young people, smart, very, very capable young people, are having trouble finding work en masse, right? Tech has seen massive layoffs. Certain jobs are going to disappear completely. That's because the people behind the technology, the people funding the tech, largely have a replacive model of how to make money on AI. Users want to collaborate with AI, and the people funding the tech want to replace those people. And so we are going to see a lot of that, not because it's inevitable, but because that's what the powerful people want, despite the fact that it's completely illogical. You replace all human workers: who's going to buy your products and services? What's the plan here? But they're just so used to this way of thinking that they can't see it. They're not realizing that they're running all of us, including themselves, off a cliff. >> Maybe the AIs will sell to other AI companies.
The AIs will be exchanging money with each other. Actually, yeah. >> In the AI economy, you could imagine, if you begin replacing workers, then, say, an AI specialized in engineering will purchase something from an AI specialized in another field. This is a full vision of replacement, where suddenly or gradually the AI economy is larger than the human economy. >> Okay, so we only have a small number of people who stay in this economy, because they were really wealthy before and they benefited from all of it, >> and everyone else, no one cares. >> Yeah. It's not a glamorous vision. It's not a good vision. Do you think it's something we can avoid, though? Why is it that many of the companies seem to be aimed at this? Isn't it just because this seems to them to be the most profitable path? >> Investors. The reason is investors. Yeah. Okay, I experienced it firsthand. I was running a company where our goal was to help musicians, and I was told to my face many times that if I pivoted to replacing them instead, then funding could become simpler. Sometimes it was hinted at, sometimes it was told directly. So I know for a fact where it's coming from, and they'll just find entrepreneurs who don't care, to replace people. >> Mhm. What do we do about that? >> I think we need to be very clear about what we want, and I think we need to support, through our dollars, companies that take us in a direction we appreciate. You know, the general population has been sort of manipulated to focus on only one ethical conundrum, and that's "don't use our data." This delusion that if we scream it loudly enough, we can halt the whole AI revolution.
In reality, "don't use our data" mostly helps the companies that have large data sets: Universal Music, book publishers, anybody who has a lot of data they own the rights to benefits from "don't use our data," because then they get paid, even if creators never see a penny of it. And if each creator saw $3, that would be called a massive accomplishment. Is this really what we should be putting all of our energy into? Instead, we should be saying, very loud and clear: don't replace us. Build tools to elevate us. >> Mhm. >> Because that would actually have meaningful impact. And I think the reason that humanity is going to be relatively okay is because those tools are happening, and sometimes it's within the same tool that you have both possibilities, and human plus AI is always going to beat AI alone. So the more we push toward this collaborative path, the harder it's going to be to push us out of the formula. >> I do worry, though, that, you know, humans aren't really relevant in chess anymore. Not if we're comparing human performance to top AI performance. And I think for a time it was the case that a human grandmaster combined with an AI was actually the best, even better than the AI alone. But I don't think that's the case anymore. And I think this could happen across many industries, or many jobs. >> Chess is really simple. >> Chess is really, really simple. It's very contained, and all of the knowledge is explicit. >> Yeah, it's just a search space that eventually became small enough for computers to handle very well. Look, I understand the fear of the same thing happening across the board, but the truth is that not all analogies carry through.
There is a story on the flip side. There are AI optimists who say that no matter what happens with technology, it's all going to be fine. I'm going to kind of make your point in a different way, just to show that analogies don't always carry through. You know, this idea that tech always improved human jobs, new jobs emerge, it's all going to be fine. So there is a parallel story with horses. It used to be that a horse had the most tedious, most annoying job in the world, where it had to go around in circles all day in order to grind certain materials. It was horrible, right? Going around in circles all day, every day. Oh my goodness. Then farm equipment was invented, and horses got to be forms of transportation for people. They would go around and see all this beautiful scenery, and their lives got so much better. And then cars got invented, and of course that means the jobs for horses should get better too. Except in reality, millions of horses got slaughtered because humans didn't need them anymore. So things don't always translate. Although this particular story is very depressing, it's typically used to demonstrate that improvement in tech doesn't necessarily mean better jobs, which is also relevant to our discussion. >> Mhm. So what's your position here? We can have analogies pointing in both ways, but as you say, they don't always carry through. What do you think about this? >> I just complicated things by throwing in a different analogy. I think we're going to see both. I'm a little saddened to say that. I wish I could say we can stop all the unemployment, and there is a way forward that's really clean and simple. But investors, the people who hold all the money, are very, very powerful, and I'm not so delusional as to believe that we can stop them. So we're going to have job replacement. It's going to happen.
And I really, really hope that governments are going to step up and help as this happens. That's really key. But at the same time, we are going to have tools geared toward elevating humans, and we are going to have tools that can be used in both directions, like ChatGPT. And so over the next decade we are going to see this amazing improvement in human intellect and human creativity, because of this elevation with AI. Not replacement, but real elevation: humans becoming much better, much more capable because of AI. That's what's going to keep us in the running, and it's this tension between these two forces that's going to play out. >> Yeah. You've mentioned a couple of times that you can have the same tool and use it differently: relate to an AI like you would relate to a humble friend trying to help you, or relate to it like it's an oracle. Specifically with the language models, how do you do that in a good way? How do we train ourselves? Is it all on us, is what I'm trying to ask, I think. Or could we do something on the developer side to make it more likely that you're interacting with an AI in a good way? >> It's definitely on both. It's definitely on both. Technology is not doing enough to meet us halfway right now. Too many systems are designed to be all-knowing oracles. Too much of the narrative is around it being an all-knowing oracle, which hinders how we think about it. My biggest push, my biggest ask, is that industry start doing a better job. That industry give us humble creative machines. That industry spend time building the part that interacts with us. I want text-to-image models that I can iterate with. I want musical AI systems that really, really care about what I want to create and give me as much freedom as possible. That's what I want, and that's where things will go, because that's what all consumers want.
>> Ultimately, that's what we all feel is missing: that lack of autonomy, that lack of creative control. But at the same time, that's going to take time, right? That's not something we have immediate control over. But we can choose how we interact with existing systems. And the people who are really, really awesome at using AI take this humble-creative-machine attitude. They bring their full selves. They're very critical of what the AI outputs. Not in a nonsensical, "I hate it" way, but in the sense that they actually look at what it creates carefully and critically, >> and they think, okay, what context is missing? And honestly, I find that ChatGPT works best when I do the first draft completely by myself. >> Mhm. >> Sometimes I spend three days on the draft, and then sometimes it's good to just say, "What do you think?" and have it give you feedback instead of having it rewrite. >> Yeah. >> Break it down into small steps. Think about it. Really engage. Really bring your full self. You can create amazing stuff. And don't rush. Honestly, AI as a time-saver is another science fiction trope, and also a bit of a carryover from previous types of AI technologies. Don't focus so much on saving time; see if you can use it to create something genuinely better than what you could do by yourself, and much better than what it can create on the first attempt. Work together for quality, not just for speed. >> It does seem like there's a big difference between having AI do the first draft for you, and doing the first draft yourself and getting feedback, using AI as kind of a study partner, figuring out the flaws in your arguments. It seems like a much healthier way to use the technology. But again, you mentioned that consumers want something that can flexibly learn along the way, adapt to their preferences, give them exactly what they want.
I do also think that consumers want whatever is most comfortable, whatever is quickest, whatever is the path of least resistance. So do we have these two clusters of motivations set against each other? >> I don't know. Maybe kids want whatever is easiest and quickest, and even then that's not completely true. Kids can stick with a game for hours if it's interesting enough. >> Mhm. >> I think a lot of adults want good stuff that interacts with them. I don't know. I think it's a bit of a fallacy. Not from you; what you're sharing is a very common perspective, but I think it's insulting to humanity. I think it sort of minimizes us. I don't want something quick and easy. I don't want to press one button and have everything done for me. It doesn't know what I want. How can it? Is it reading my mind? How can it possibly give me exactly what I want from pressing one button? It doesn't make any sense. I want something that improves my work. I want something that lets me express what I want to express. And honestly, I want to be in the driver's seat. And a lot of people do. I've run a company for almost eight years now. We've had millions of users. People want control. We made a tool for them called ALYSIA, which would sing for them and come up with a melody for them, and it did life, the universe, and everything. And the main thing that I heard was, "I don't have enough creative control." So I think that industry, investors, entrepreneurs need to maybe respect humanity a little more and listen to their users a little more, because what they think we want and what we want is not always the same thing. >> Just to make the pessimistic point in another way: people also want TikTok. There's a reason why Meta and OpenAI are launching these AI-generated video feeds; there's a bunch of potential for ad revenue there.
And I don't think this is using AI in a good way. It's the path of least resistance. It's very easy to engage with. It's not a very creative thing. It's one-way engagement; you're not interacting with a humble creative AI. You're just consuming content. And this is also something there's a huge demand for. So I guess, yeah, now I'm asking the same question again, but what will win out, right? How do we make sure that the best parts of us win out in the market? >> So, a lot of stuff that's replacive is going to succeed. As I said, we're not going to be able to get rid of all of it. I just want, in parallel, the deep, profound stuff to also exist, so humanity can stay ahead, right? To counterbalance it. Okay, let's pause for a second on those video generators. Super cool stuff, if you haven't tried it. Let's be honest, the quality of video generation has improved leaps and bounds. >> Yeah, I should have said that. I should have mentioned that this is real technical innovation. The coherence of the characters in the videos, this is not easy to do. This is amazing technical progress. I just fear that people will engage with it in a way that doesn't uplift them. >> Okay. And there's going to be a lot of that, unambiguously. So I'm not here to paint a utopian picture. But if we zoom in a little bit into TikTok, into what's going on with Sora 2 and where the future potential is: a lot of people on TikTok today are being super creative, right? This short-form video: people are dancing and singing and creating these amazingly engaging videos, with extreme forms of human creativity making it happen. Not all of them.
There's a lot of slop, a lot of human-made slop, but a lot of genuinely amazing stuff, right? The reason the platform is successful is because human creativity is so powerful. It was originally based on Musical.ly, which was all about these cool dances that people used to make, and which eventually became TikTok. So that's on the one hand. On the other hand, we have these cool video generators, which are kind of one-directional, easy creation. Just write something and it'll immediately give you something really cool, which is cute, which is a starting point. But imagine the same technology getting a little bit better and becoming deeply, profoundly iterative, so you can make your own short film that realizes your vision. >> How many people would love that and make amazing things with it? So a very similar technology, similar enough, can be used to create a whole bunch of silliness, which honestly doesn't bother me. I know that there are some issues with it, and I don't want to deny those issues, but ultimately I don't think it's the end of the world if we have a little bit of fun and are being silly. But the same thing, if you make it into a humble creative machine, oh my god, the possibilities. >> Mhm. Do you think... so one reason why models are improving so fast in mathematics and programming is because we have a bunch of training data, and we can do reinforcement learning using that data, and it's a domain where we have answers. We know what the right answer is: in programming, we know whether this compiles or not; in mathematics, we know whether this is a valid proof or not. For something like creativity, as we talked about in the beginning, we don't have this objective standard.
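The compile-or-not, passes-or-not signal the host describes can be sketched in a few lines. This is an illustrative toy, not any lab's actual training pipeline; the names `verifiable_reward` and `solve` are invented for the example.

```python
# Toy "verifiable reward" of the kind used in RL on code: a candidate program
# earns reward 1.0 only if it compiles and passes a test case.

def verifiable_reward(candidate_source: str, test_input: int, expected: int) -> float:
    """Return 1.0 if the candidate compiles and passes one test, else 0.0."""
    try:
        code = compile(candidate_source, "<candidate>", "exec")  # does it compile?
    except SyntaxError:
        return 0.0
    namespace: dict = {}
    try:
        exec(code, namespace)                    # define the function
        result = namespace["solve"](test_input)  # run the single test case
    except Exception:
        return 0.0
    return 1.0 if result == expected else 0.0

# A correct and an incorrect candidate for "double the input":
good = "def solve(x):\n    return x * 2\n"
bad = "def solve(x):\n    return x + 2\n"
print(verifiable_reward(good, 21, 42))  # 1.0
print(verifiable_reward(bad, 21, 42))   # 0.0
```

For creative output there is no analogue of this check, which is the asymmetry the question turns on.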
Does this mean that the models will be limited in creativity, or that the reinforcement learning paradigm won't work in the domain of creative creation? >> Machines are creative. These machines are creative inherently. >> Mhm. >> The underlying mechanism of predicting the next word or the next pixel, and agreeing with yourself, is creative. Reinforcement learning is used by these systems in order to take a creative machine and turn it into an all-knowing oracle. >> So it's hard to make it not creative. It's hard to make it behave. >> Right. You don't need a perfect definition of right or wrong to make a machine creative. Humans don't have any accurate notion of what creativity even is, or what's better or worse creatively, and we are able to be creative all the time. Some of us more than others, but nevertheless all humans are capable of some level of creativity that's very non-trivial. So, yeah, good luck making those machines not creative. Good luck choking the creativity out of them. That's where reinforcement learning is generally used. The underlying training process does not require a notion of right and wrong with these machines, which is different from the previous wave of machine learning, where every single data point had to come with a label. >> Mhm. So what you're saying, if I understand you correctly, is that as we get more reinforcement learning, we get less creativity in some sense, or we are able to constrain the models much more closely to the exact output we want? >> For the most part, the way it's used, the major goal of reinforcement learning in today's LLMs is to make them behave, to make them do the right thing. You have this imaginative model that got trained on all of human data. It's marvelous. It hallucinates a lot. It's this cool, awesome thing, and then they're like: behave.
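One way to picture this "behave" effect is sampling next tokens from the same toy distribution at different temperatures: sharpening the distribution (a crude stand-in for post-training that narrows a model toward preferred outputs) collapses the variety of what comes out. The logit values and function name below are made up for illustration.

```python
# Softmax sampling from a toy next-token distribution. Lower temperature
# concentrates probability on the top token, so fewer distinct tokens appear.
import math
import random

def sample(logits, temperature, rng):
    """Softmax-sample one token index at the given temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r < acc:
            return i
    return len(exps) - 1

logits = [2.0, 1.5, 1.2, 0.5]  # four candidate next tokens
rng = random.Random(0)
for temp in (1.0, 0.1):
    draws = [sample(logits, temp, rng) for _ in range(1000)]
    print(temp, len(set(draws)))  # number of distinct tokens seen
```

At temperature 1.0 all four tokens show up; at 0.1 the samples are almost entirely the single top token: more consistent, less varied, which is the trade-off Ackerman is pointing at.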
So yeah, it's not quite the same when we're intentional about creative applications. In fact, by the time you've told a machine to behave, behave, behave, and you then try to apply it to something creative, it's kind of repetitive and consistent, and not that helpful for creative things. >> Mhm. Do we have a working definition of AI slop? How can we describe AI slop in the terms we've been talking about here? >> I think that AI slop is humans using AI believing it's an all-knowing oracle, and kind of being lazy. >> Okay. >> And that's, you know, without disrespect. It's okay, you want to create slop, that's fine. You don't feel like bringing your full self, you just want to get it done quickly. Sure. But it's kind of the lowest, or let's say the simplest, form of using AI. >> Mhm. You must see how your students are engaging with AI. What do you think of how they're using it? What advice do you tend to give them? >> You know, to be fair, there's a spectrum. >> Mhm. >> There's one particular student who comes to mind. The way he was engaging with AI is probably the most brilliant use of AI I've seen in my whole life. He would come up with new philosophies using the AI systems. It blew my mind. So there definitely are students who are ahead of most working adults in how they're thinking about AI, how they're using it, and how they're interacting with it. It was unbelievable. And then there is the opposite side: this deeply ingrained belief that the AI is an all-knowing oracle and the student has nothing to offer. There was this really challenging case where I had students come up with ideas for a project, which is something I've been doing for years and years.
They need to come up with their own original idea for a project, and one student insisted on doing that with ChatGPT. You could tell that the ideas were made with ChatGPT, and they weren't very good, >> and I just couldn't get him to think by himself, without ChatGPT. It was really, really bizarre, really new for me. I was not prepared for this. And also there were a whole bunch of projects I used to do for years where I realized that students were finding ways to use ChatGPT for them. Our students are caught in a really difficult place right now, >> where everybody around them cheats. If they don't cheat, they risk having lower grades. So they're not necessarily excited to use ChatGPT to cheat; they almost feel they have no choice. And it's really up to the education system to adapt, and we're doing our best. My university, I think, is doing a phenomenal job adapting to it, but it's a global problem, right? It's not something that's just facing us. But we'll figure it out. We'll figure it out and we'll get there. We're gradually making a lot of good progress on how to make sure the students still get a really good education when the temptation to use AI is overwhelming. >> Is this just about raising the standards? Trying to make assignments harder, and then accepting that students will use models to solve the assignments? >> That's part of the approach. Sometimes that's appropriate. Sometimes you can't; sometimes you need them to learn a basic skill. It's very, very complicated, if I'm really honest. It's one of the biggest challenges to education. We were given no heads-up; we were kind of thrown into it. And I think we're really navigating it together with our students, and it's important. But the main thing I want to contribute to this line of thought is: do not blame the students.
The students are faced with this reality. No individual student can do very much about the reality around them and how everyone else behaves. The old system rewards using ChatGPT. So things need to change. And you can't constantly look over your students' shoulders, and you can't over-punish them for something that everybody's doing. It's very, very complicated, and I think educators are doing a phenomenal job navigating a virtually impossible situation. >> From the student's perspective, it must also be discouraging. Say you're trying to learn new math, which is a process of constantly getting the wrong answer and trying again and again and again, until suddenly the answer dawns on you. It must be so tempting to just use a model to solve your math problem. >> Well, I love that you're bringing up math, because in math we've had calculators forever, right? >> Yeah. Yeah. >> And then we have graphing calculators for some of the more complicated problems. And there are math problems you can't solve with any of those, but those are usually in post-secondary education. So, you know, it's very simple: on the test, you're not allowed a calculator. And so you make yourself study without that calculator, so that you can pass the test. >> So, back to pen and paper, basically, to test whether they actually know what they're writing. >> And that's actually perfectly fine. Calculators did not damage math. Granted, maybe we don't do arithmetic quite as well in our heads as we used to, because of calculators, but that did not turn out to be the end of the world. With ChatGPT it's actually more complicated, because of the sheer breadth of capabilities. >> Yeah. Yeah. >> It's actually terrible at math, which is ironic, but anyway, it's because the brain is different, right? It's this hallucinatory brain. It's very hard to rein it in. So it's terrible at math.
>> But the models were terrible at math like two years ago, right? Or maybe three years ago. Now they're basically amazing at a certain kind of math, like competitive mathematics, for example. >> Okay. No, that's right. >> Because of the reinforcement learning. >> No, but it's also because you can build a specialized model. We've had stuff that was good at math forever. It's not that impressive to integrate a model that's good at math into this, >> but the underlying generative models are not good at it. >> Yeah. But now, basically, these companies added a whole bunch of capabilities. Anyway, what they've done is impressive. >> All right. Is there anything we haven't touched upon that you feel is important to say? >> I think this was wonderfully comprehensive. >> Fun. >> Fantastic. It was great to talk to you, Mayor... >> Yeah. Same here. >> Maya, sorry. >> Thanks so much for having me. Perfect.