Las Vegas authorities revealed on Tuesday that a decorated U.S. Army Special Forces soldier used ChatGPT to plan the Tesla Cybertruck bombing outside the Trump International Hotel on New Year’s Day. The incident marks a troubling first at the intersection of artificial intelligence and criminal activity.
Master Sgt. Matthew Livelsberger, the accused bomber, reportedly consulted the AI platform for details on explosive quantities, where to purchase fireworks, and how to acquire an untraceable phone, according to Kevin McMahill, sheriff of the Las Vegas Metropolitan Police Department. The sheriff described the use of AI in this incident as a “game changer.”
“We knew AI would change the game at some point,” McMahill said. “This is the first case on U.S. soil where ChatGPT was used to aid in constructing a device.”
OpenAI, the developer of ChatGPT, responded to the allegations by stating its commitment to ethical AI use. “Our models are designed to refuse harmful instructions and minimize harmful content,” a company spokesperson said, adding that the AI platform only provided publicly available information and issued warnings against harmful activities. OpenAI is cooperating with law enforcement as part of the ongoing investigation.
During a press conference, law enforcement officials presented new evidence, including video footage of Livelsberger pouring racing fuel onto the Cybertruck before the explosion. “You’ll see a trail of liquid falling from the back of the vehicle,” said Assistant Sheriff Dori Koren. Authorities believe the blast was likely triggered by the muzzle flash of a firearm Livelsberger used to take his own life.
A journal entry titled “Surveillance” found on Livelsberger’s phone detailed his activities leading up to the bombing. The entry outlined his firearm purchases, the rental of the Tesla Cybertruck, and even alternative plans to target Arizona’s Grand Canyon glass skywalk. Investigators are still piecing together why Livelsberger chose Las Vegas as the final target.
Law enforcement has also uncovered a six-page document they are reviewing with assistance from the Pentagon. Some of its content may be classified, according to officials. Investigators are further analyzing data retrieved from Livelsberger’s laptop, mobile phone, and smartwatch.
The FBI’s investigation has indicated that Livelsberger’s actions were likely driven by personal struggles, including post-traumatic stress disorder (PTSD), family issues, and unresolved grievances. An Army spokesperson confirmed that Livelsberger had received counseling through the military’s Preservation of the Force and Family program. Despite these challenges, Livelsberger had no prior criminal record and was not on the radar of law enforcement agencies.
“He was a decorated soldier with no known animosity toward President-elect Donald Trump,” McMahill emphasized, dismissing any political motives in the attack.
The incident has raised concerns about the misuse of AI technologies in criminal activities. Sheriff McMahill described ChatGPT’s role as a pivotal moment for law enforcement, highlighting the need for vigilance as AI becomes more accessible. “This case underscores the importance of staying ahead of potential threats in the digital age,” he said.
OpenAI reiterated its efforts to minimize harm through its technology but acknowledged the challenges of regulating how its tools are used. “We remain committed to working with law enforcement to ensure AI is used responsibly,” the company stated.
As the investigation unfolds, authorities are focusing on recovering additional information to piece together Livelsberger’s motives and planning process. The bombing has become a critical case study for law enforcement on the intersection of technology, mental health, and public safety.
Livelsberger’s case has not only highlighted the potential risks of AI misuse but also sparked broader discussions about the importance of mental health support for veterans and the need for stronger safeguards in emerging technologies.