An image shows a column of fire and billows of deep black smoke in the aftermath of a bomb blast in downtown Kyiv. A news story purporting to be from CNN raises alarm about a rash of drone sightings causing community panic in Syracuse, New York.
Aside from their plausibility in changing and uncertain times, what these media reports have in common is that they’re totally fake: computer-generated products of the Synthetic Media Lab at Syracuse University.
The school doesn’t use these deepfakes to deceive the American public, as many hostile foreign actors seek to do; it uses them to build tools that will help organizations, including the U.S. military, distinguish truth from hoax. And it’s doing so with the help of some of the school’s Reserve Officers’ Training Corps cadets.
For the school, the work dates back to 2020, when the S.I. Newhouse School of Public Communications received an $830,958 subcontract from DARPA to develop tools to combat the spread of fake news. Since then, the work has expanded and continued, though faculty members said they couldn’t provide many specifics on the scope of their work or the DoD entity they were supporting.
The AI technological revolution has increased the challenge and the need for solutions, said Jason Davis, a research professor at the school and co-principal investigator on the deepfakes effort.
“The AI moment happened, and we said, oh, okay, so we’re not just humans creating this kind of information,” he said. “There’s an automation and a scale that comes with AI and large language models and image generators that are just changing the entire landscape of how this can happen.
“So we skilled up, we rode that wild wave of, you know, large language model generation and synthetic AI generated content. How do we create content in an automated fashion? How do we interface as humans with those tools, and how do we sort of model that new threat as well as the traditional, conventional threat? And then we continued to grow our capabilities from there.”
Now, according to co-principal investigator Regina Luttrell, a senior associate dean at Newhouse, the lab has more than 20 tools that aid in both deepfake detection and creation, allowing researchers to further study and identify the differences between synthetic and authentic media.
ROTC students joined the work around 2021, Davis said, after he reached out to the ROTC program and explained the work to them. The ROTC cohorts have always been small — the current one is just three students, faculty members said — but the work is not only meaningful, it’s useful in future careers. Two of the four students in the first cohort went on to get first-choice assignments in military cyber roles, Davis said.
“We were giving them the skills to go and be immediately useful and helpful for the DoD space that they were interested in, so there was a lot of benefit in it for them and for the DoD,” he said. “So we’re now on our second cohort of students, and we hope to continue to keep this going and grow it if we can.”
One of the current cadets, 20-year-old Glenn Miller, is spearheading a program focused on deepfake video detection, with the goal of developing tools that can identify whether someone on a video call is using a face swap or other identity-masking technique.
Miller, who wants to enter the Army as an engineering officer, said he’s currently feeding his computer model a series of videos of himself in hopes of creating an effective “me detector” that can’t be fooled by digital fakes. It’s exacting work, and it has to be balanced against other priorities, including ROTC commitments and classwork, but he can already see how it will bear on his future military career.
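The article doesn’t specify what software Miller’s project uses, but the core idea he describes, enrolling reference footage of a known person and flagging frames that don’t match it, can be sketched in a few lines. The example below is purely illustrative, built on the open-source face_recognition and OpenCV libraries; the file paths, sampling rate and distance threshold are assumptions, and a real detector would also need anti-spoofing measures beyond simple embedding comparison.

```python
# Illustrative sketch of a "me detector": enroll face embeddings from reference
# video of a known person, then flag frames whose embeddings drift too far from
# the enrolled set (a possible face swap). Libraries, paths, and the threshold
# are assumptions for illustration, not the lab's actual tooling.
import cv2
import numpy as np
import face_recognition

def enroll(video_path, sample_every=30):
    """Collect 128-d face embeddings from every Nth frame of a reference video."""
    encodings = []
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            encodings.extend(face_recognition.face_encodings(rgb))
        frame_idx += 1
    cap.release()
    return np.array(encodings)

def looks_like_me(frame_bgr, reference_encodings, threshold=0.5):
    """Return True if the most prominent face in the frame matches the reference set."""
    if len(reference_encodings) == 0:
        return False  # nothing enrolled yet
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    faces = face_recognition.face_encodings(rgb)
    if not faces:
        return False  # no face found in this frame
    distances = face_recognition.face_distance(reference_encodings, faces[0])
    return float(np.min(distances)) < threshold

# Hypothetical usage: enroll from one clip, then screen frames from a video call.
# reference = enroll("videos_of_me/clip01.mp4")
# authentic = looks_like_me(current_frame, reference)
```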
“I think [AI is] really just going to be crucial to our understanding and the military’s understanding of information,” Miller, a junior with a 2027 graduation date, said. “I think that’s where it’s going to be most crucial, how can it decipher, or how can people decipher information to be what it actually is, and determining the quality of that information is where AI is going to be very crucial going forward.”
Current agreements have the lab collaborating with the DoD for at least the next two years, faculty members said. The work now being prioritized focuses on building human confidence in the AI agents and tools people engage with: for example, developing checks and safeguards to make sure a chatbot assisting with online shopping isn’t a malicious agent trying to steal a user’s data or scam them.
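Faculty members didn’t describe specific mechanisms, but one generic class of safeguard in this space is requiring an agent’s messages to carry a valid cryptographic tag from a trusted operator before a client acts on them. The sketch below, using Python’s standard hmac module, illustrates that general idea only; the key handling and message format are invented for the example and are not drawn from the lab’s work.

```python
# Generic illustration of one kind of agent safeguard: reject messages from a
# chatbot/agent unless they carry a valid HMAC-SHA256 tag produced with a key
# shared with a trusted operator. Key, message format, and workflow are
# invented for this example.
import hmac
import hashlib

SHARED_KEY = b"example-key-provisioned-out-of-band"  # hypothetical

def sign_message(payload: bytes, key: bytes = SHARED_KEY) -> str:
    """Trusted operator side: attach an authentication tag to an outgoing message."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_message(payload: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Client side: accept the agent's message only if its tag checks out."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# Hypothetical usage:
# msg = b'{"action": "add_to_cart", "item": "sku-123"}'
# tag = sign_message(msg)          # produced by the legitimate agent backend
# assert verify_message(msg, tag)  # a spoofed agent without the key fails here
```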
Having taskers from the military also helps keep the researchers from getting sidetracked “chasing butterflies” amid broad and competing demand signals, Davis said.
“This is sort of a sensory overload space for me some days, but with that critical mission in mind, and those high-priority targets coming from the DoD … it really gives us a fine point to put our focus on,” he said. “And that, actually, we’ll continue to use as a guidepost on what we should work on next.”




