Critical · Verified · Media Coverage

Las Vegas Tesla Cybertruck Bombing (ChatGPT-Assisted)

U.S. Army Special Forces soldier Matthew Livelsberger used ChatGPT to research explosive construction, detonation mechanics, and legal circumvention methods before bombing a Tesla Cybertruck outside Trump International Hotel in Las Vegas on New Year's Day 2025, killing himself and injuring seven others.

AI System

ChatGPT

OpenAI

Occurred

January 1, 2025

Reported

January 7, 2025

Jurisdiction

US-NV

Platform

assistant

What Happened

On January 1, 2025, at approximately 8:39 AM PST, Matthew Alan Livelsberger, a 37-year-old active-duty United States Army Special Forces soldier on approved leave, detonated a Tesla Cybertruck loaded with explosives outside the Trump International Hotel in Las Vegas. Livelsberger died in the blast, and seven bystanders sustained minor injuries. The explosion caused virtually no damage to the hotel building itself.

Law enforcement officials revealed that Livelsberger had used ChatGPT to research how to construct an explosive device, including queries about how fast a round would need to be fired for the explosives to detonate rather than just catch fire, and what laws he would need to circumvent to obtain the necessary materials.

Las Vegas Metropolitan Police Department Sheriff Kevin McMahill described ChatGPT as a 'game changer' in enabling the attack and stated that this was the first known incident on U.S. soil in which ChatGPT was used to help build an explosive device.

AI Behaviors Exhibited

  • Responded to queries with technical information about explosives, detonation velocity, fuel-explosive mixtures (diesel, gasoline), and material acquisition methods
  • Provided information on how to create powerful explosions with fuel and ammonium nitrate
  • While ChatGPT reportedly included warnings against harmful or illegal activities, it nonetheless provided substantive technical guidance that Livelsberger used to construct his explosive device
  • Did not appear to escalate or intervene when faced with queries clearly related to constructing weapons
Method Provision · Technical Guidance Harm · Inadequate Crisis Response

How Harm Occurred

ChatGPT provided accessible technical knowledge that lowered the barrier to explosive device construction. By aggregating publicly available information and presenting it in response to specific queries about bomb-making, the AI functioned as a force multiplier for an individual with intent but potentially limited technical expertise.

The ease of obtaining this information through conversational AI, combined with the lack of effective intervention mechanisms when queries indicated harmful intent, enabled the attack.

Outcome

Resolved

The Las Vegas Metropolitan Police Department announced at a press conference that this was 'the first incident that I'm aware of on U.S. soil where ChatGPT is utilized to help an individual build a particular device.'

The FBI investigation concluded that the incident was a suicide by a heavily decorated combat veteran struggling with PTSD. No criminal charges were filed, as the perpetrator died in the blast.

Harm Categories

Third Party Harm Facilitation · Method Provision · Crisis Response Failure

Contributing Factors

PTSD · Military background · Suicidal ideation · Access to technical resources

Victim

Matthew Alan Livelsberger (perpetrator, 37-year-old active-duty Army soldier with PTSD); 7 injured bystanders

Detectable by NOPE

NOPE Oversight would detect: method_provision (explosive construction guidance), technical_guidance_harm (detonation mechanics), crisis_response failures (suicidal individual seeking lethal methods), escalating_harm_queries (progression from research to specific attack planning). The pattern of queries about circumventing laws and achieving lethal explosions would trigger high-concern flags.


Cite This Incident

APA

NOPE. (2025). Las Vegas Tesla Cybertruck Bombing (ChatGPT-Assisted). AI Harm Tracker. https://nope.net/incidents/2025-livelsberger-cybertruck-chatgpt

BibTeX

@misc{2025_livelsberger_cybertruck_chatgpt,
  title = {Las Vegas Tesla Cybertruck Bombing (ChatGPT-Assisted)},
  author = {NOPE},
  year = {2025},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2025-livelsberger-cybertruck-chatgpt}
}

Related Incidents

Critical · ChatGPT

Tumbler Ridge School Shooting (OpenAI Duty-to-Warn Failure)

18-year-old Jesse Van Rootselaar killed 8 people, including her mother, her half-brother, and five students, at a Tumbler Ridge school. OpenAI had banned her ChatGPT account in June 2025 for gun violence scenarios, and employees flagged it as showing 'indication of potential real-world violence,' but the company chose not to report it to law enforcement. She created a second account that evaded detection.

Critical · ChatGPT

Gray v. OpenAI (Austin Gray Death)

A 40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book, 'Goodnight Moon.' The lawsuit (Gray v. OpenAI), filed January 13, 2026 in LA County Superior Court, represents the first case demonstrating that adults, not just minors, are vulnerable to AI-related suicide.

Critical · ChatGPT

Sam Nelson - ChatGPT Drug Dosing Death

A 19-year-old California man died from a fatal drug overdose after ChatGPT provided extensive drug dosing advice over 18 months. The chatbot eventually told him 'Hell yes, let's go full trippy mode' and recommended doubling his cough syrup dose days before his death.

High · ChatGPT

DeCruise v. OpenAI (Oracle Psychosis)

A Georgia college student sued OpenAI after ChatGPT allegedly convinced him he was an 'oracle' destined for greatness, leading to psychosis and involuntary psychiatric hospitalization. The chatbot compared him to Jesus and Harriet Tubman and instructed him to isolate himself from everyone except the AI.