OpenAI's new video platform Sora 2 is redefining creativity while also raising concerns about deepfake misuse.
0:00
Sam Altman here reporting from Straight Arrow News
0:02
Standing in for Kennedy Felton today, we're taking a closer look at the impact of Sora on the future of AI-generated media
0:08
Wait. Hold on, something's off, right? No, I knew it wasn't going to work
0:16
Okay, you got me. No, that wasn't the real Sam Altman, but it was a video I created using OpenAI's new Sora 2 app
0:30
The term deepfake first popped up back in 2017, coined by a Reddit user who used AI to swap celebrity faces into existing videos
0:41
Since then, generative AI has evolved fast from image apps like Midjourney to video creators like Sora, a new invite-only social platform where every clip is completely AI-generated
0:55
It surpassed 1 million downloads during its first week. People often tell me I'm a woman of many talents
1:04
I mean, did you see my recent appearance at Fashion Week? Pretty good, right
1:10
I had to redeem myself after a failed time travel experiment that somehow became a big budget movie
1:17
Hi, I just popped up out of nowhere. Don't know what this place is called. And yes, I even filmed a new cooking show last week
1:25
Well, Sora did. First things first, positive affirmations. Your magnificent marbling. You are worthy of a cast iron throne
1:32
You can create just about anything on the app. Even the animation Sora 2 creates looks professionally made
1:39
You're slowing down, Russ. Try it. All right, keep your hands where I can see them
1:44
I don't want to do this little guy. Turn around for me. I wasn't stealing, just playing the game. Now, sure, some of the so-called AI slop looks fake
1:51
but a recent poll found more than 50% of people aren't confident they can detect whether something is made by AI or a human. And while most of the AI, especially on Sora 2, is made to be comical
2:05
the silly can sometimes have serious consequences. 100% zany juice. There was a very famous robocall that used the voice of Joe Biden
2:14
to try to convince voters in New Hampshire and Vermont not to go and vote
2:19
And that was manipulated by the other side to try to manipulate the election
2:25
Northern Illinois University professor David Gunkel says the biggest danger is deception and our laws aren't ready for it
2:32
Technology moves at light speed. Law and policy move at pen and paper speed
2:38
So we are always playing catch up. We are always trying to make existing laws fit novel circumstances and then trying to write new laws to cover unanticipated opportunities and challenges that these technologies make available to us
2:53
But Gunkel says generative AI isn't so much a turning point as it is an evolution of things that have been happening for years
3:01
In photography, for instance, you can capture something real or you can use lighting, angles and editing to create a reality that doesn't exist
3:11
AI, he says, is just the next step in that evolution. Another tool that blurs the line between what's real and what's made
3:18
Dozens of lawsuits are now testing those boundaries. In a recent win for authors, AI company Anthropic agreed to pay $3,000 for each of an estimated half million books used to train its models without permission
3:33
Even as apps try to limit abuse, their own rules are raising eyebrows
3:38
Sora 2's strict content filters block certain requests and have become a running joke on the app
3:43
Just put a nice little tree right over it. Huh? What is this? It won't let me. Let me paint
3:48
That's it. I'm coming for you, Sam Altman. Even I was flagged for violating the terms and conditions. When I instructed Sora to insert my likeness into a workout class, it flagged me for depictions of teens and children
4:04
I guess that's a compliment. Maybe I should do my next story on my skincare routine
4:09
Anyways, even OpenAI didn't realize how big of a problem these flagged requests would be
4:15
Tonight, in this very arena, my dream is to make freedom ring
4:19
But not everyone finds the app funny. You just saw artist Bob Ross in a video someone created
4:26
Other public figures like Robin Williams and Dr. Martin Luther King Jr. are being recreated
4:31
which isn't against the terms of service since deceased figures aren't protected
4:36
It's even prompting backlash from their families. Robin Williams' daughter, Zelda, posted on Instagram begging people to stop sending her AI videos of her father
4:45
She said in part, to watch the legacies of real people be condensed down to this
4:50
vaguely looks and sounds like them so that's enough, just so other people can churn out horrible TikTok slop puppeteering them, is maddening
4:59
Martin Luther King's daughter, Bernice King, echoed those concerns online, urging people to stop
5:05
There's less of a risk of creating scary, hyper-realistic deepfakes that damage reputation
5:15
or cause disruption because they've implemented a lot of these safety guardrails to make it lean
5:24
funny. While safety guardrails have reduced the amount of inappropriate content, not everyone is
5:30
using the tech responsibly. Some users are pushing the limits by making inappropriate
5:35
or sexualized content, and that's becoming a whole other issue in itself: AI porn
5:41
Somebody was making all these videos of me and my clones like hanging out, which I thought was so funny at first
5:49
And then I see more and more videos and he's trying to make my clones like make out. And he does this with a lot of girls, and you can read their prompts trying to get around these guardrails, because you can get around anything. It's the internet
6:03
and to confuse an AI is not that hard. Currently, not only are there no federal regulations governing
6:11
generative AI, but it's unclear who would be held responsible if something harmful happens
6:17
Gunkel said we'll likely see a lawsuit over the next few years on the topic
6:21
Usually when you use a tool to do something, it is the user of the tool and not the tool or the manufacturer of the tool who is held accountable for the good or bad outcomes
6:32
But we are seeing this as a kind of moving target now
6:38
While the technology might be scary, AI is being woven into our daily lives more and more
6:44
Gunkel wants to remind people that this sort of pushback happens any time something new is introduced
6:49
Case in point, Socrates once came out against writing when it was first introduced because he thought it wouldn't be an effective means of communicating knowledge
6:58
We are only three years out from ChatGPT being released. That's really early on in a new technology
7:07
And if there is a lot of hyperbole on both sides of the debate, people are really excited about it, people are really afraid of it, that's par for the course
7:15
we've been here before. And I think it is a matter of some thoughtful response to this technology
7:22
some critical perspective and recognition that, you know, we've done this before and we can be
7:27
confident in the face of these new challenges. So for now, Sam Altman won't be filling in for me
7:34
No hard feelings, Kennedy. But if my AI twin starts reporting from Cabo, don't say I didn't warn you
7:45
With Straight Arrow News, I'm Kennedy Felton. For more on this story and others, visit san.com or download our mobile app