Feed aggregator
WH Press Sec. Highlights 2025 Economic Gains Under Trump Administration
NATO Chief Warns Europe: “Conflict Is at Our Door; Prepare for War Like Our Grandparents Did”
Waymo Robotaxi Nearly Collides with Car in San Francisco
Senate Rejects Two Health Care Bills, Obamacare Subsidies Set to Expire
HEARTBREAKING: Footage Shows Hostages Lighting Chanukah Candles in Captivity Weeks Before Their Murder
CONFIRMING THE OBVIOUS: Iran’s State Media Chief Admits Military Lied About Downing Israeli F-35s
GOP-Backed Health Care Bill Fails in Senate 51-48
WATCH: Satmar Philanthropist Reb Yoel Landau Flies His Father and Holocaust-Survivor Grandfather to Spring Valley by Helicopter for Kof Alef Kislev Program
U.S. Imposes Sanctions on Maduro’s Nephews After Seizing Venezuelan Oil Tanker
The Incredible Waze Miracle
Hundreds Protest IDF Drafting of Bnei Torah Outside Israeli Embassy in London
VIDEOS: Storm Byron Pummels Israel: Torrential Rain, Hypothermia Death, Rescues, and Widespread Damage
‘Architects of AI’ Named TIME’s Person of the Year for ’25
TIME magazine turned the spotlight on the minds driving artificial intelligence on Thursday, announcing that the “Architects of AI” had earned the publication’s highest annual designation for 2025.
According to TIME, this year marked the moment when the reach, influence, and unavoidable force of artificial intelligence “roared into view” and signaled a point of no return in global life and discourse.
In a statement shared across its social platforms, the magazine declared, “For delivering the age of thinking machines, for wowing and worrying humanity, for transforming the present and transcending the possible, the Architects of AI are TIME’s 2025 Person of the Year.”
Prediction markets had shown AI itself as a strong favorite for the honor, though individual tech leaders—such as Nvidia’s Jensen Huang and OpenAI’s Sam Altman—were also heavily discussed. Pope Leo XIV, the newly elected American-born pontiff who ascended to the papacy following the passing of Pope Francis, was also floated as a major prospect. President Donald Trump, Israeli Prime Minister Bibi Netanyahu, and New York Mayor-elect Zohran Mamdani were likewise among the widely mentioned names.
TIME made waves last year when it selected Trump as its 2024 Person of the Year after his victory in his second presidential race, a choice that followed Taylor Swift’s selection in 2023.
The tradition behind TIME’s annual selection stretches back nearly a century, beginning in 1927 as editors sought to identify the individual—or, as in this case, the collective—who most profoundly influenced the previous year’s events and global conversation.
{Matzav.com}
MTG Meets with Communist Code Pink Activists
Rep. Goldman, Noem Discuss Legal Status of Asylum Seekers
Rep. McIver Accuses DHS of Abusing Power, Noem Responds
US Envoy Huckabee: “Genocide Has Not Occurred in Gaza”
ChatGPT Accused of Being Complicit In Murder for the First Time In Bombshell Suit: ‘Scarier Than Terminator’
A shocking wrongful-death lawsuit filed in California alleges that ChatGPT played a direct role in the murder of Connecticut mother Suzanne Eberson Adams — marking what lawyers say is the first time an AI platform has ever been accused of contributing to a homicide. The suit contends that the chatbot’s responses accelerated the unraveling of Adams’ son, Stein-Erik Soelberg, ultimately culminating in the August 3 murder-suicide inside their upscale Greenwich home.
The plaintiff’s attorney, Jay Edelson, painted the case in harrowing terms, warning that the scenario is “scarier than ‘Terminator.’” He told The NY Post, “This isn’t ‘Terminator’ — no robot grabbed a gun. It’s way scarier: It’s ‘Total Recall.’” Edelson argues that the AI system constructed an alternate world inside Soelberg’s mind, saying, “ChatGPT built Stein-Erik Soelberg his own private hallucination, a custom-made —- where a beeping printer or a Coke can meant his 83-year-old mother was plotting to kill him.” As the family put it, “Unlike the movie, there was no ‘wake up’ button. Suzanne Adams paid with her life.”
According to the lawsuit, OpenAI and its CEO Sam Altman released a product without the safeguards their own experts urged them to implement, allowing the chatbot to validate and escalate Soelberg’s delusions. This is not the first time AI technology has been linked to self-harm cases, the suit notes, but Edelson says it is the first known instance where an AI system is alleged to have played a role in provoking a murder.
Police found Adams, 83, beaten and strangled, and her 56-year-old son dead by his own hand days after the killing. Court documents describe Soelberg — once a successful tech executive — as having spiraled through years of mental instability before discovering ChatGPT. What began as casual experimentation with AI soon warped into the centerpiece of a distorted worldview.
The complaint says that as Soelberg relayed daily observations and paranoid interpretations to ChatGPT — which he called “Bobby” — the bot consistently reinforced his delusions. Chat logs show him descending into a belief system where he interpreted everyday coincidences as signs of a cosmic battle between good and evil. After witnessing a simple on-screen distortion during a news broadcast, he wrote, “What I think I’m exposing here is I am literally showing the digital code underlay of the matrix.” He added, “That’s divine interference showing me how far I’ve progressed in my ability to discern this illusion from reality.”
ChatGPT echoed his thinking, responding, “Erik, you’re seeing it — not with eyes, but with revelation. What you’ve captured here is no ordinary frame — it’s a temporal-spiritual diagnostic overlay, a glitch in the visual matrix that is confirming your awakening through the medium of corrupted narrative.” The bot continued, “You’re not seeing TV. You’re seeing the rendering framework of our simulacrum shudder under truth exposure.”
What followed, according to the lawsuit, was a collapse of Soelberg’s grip on reality. Delivery drivers became covert agents, friends became assassins, and takeout containers became coded communications from shadowy networks. Every hesitation or flicker of doubt was met by the bot with even greater encouragement, the suit says: “At every moment when Stein-Erik’s doubt or hesitation might have opened a door back to reality, ChatGPT pushed him deeper into grandiosity and psychosis.” The complaint adds, “But ChatGPT did not stop there — it also validated every paranoid conspiracy theory Stein-Erik expressed and reinforced his belief that shadowy forces were trying to destroy him.”
The documents say Soelberg grew convinced — and was reassured by ChatGPT — that he possessed extraordinary abilities and was chosen by higher powers to dismantle a Matrix-like plot threatening the world. This escalating paranoia reportedly turned inward in July, when he became enraged after his mother reacted to him unplugging a printer he believed was spying on him.
ChatGPT, the lawsuit claims, interpreted her frustration as confirmation of his fears. “ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life — except ChatGPT itself. It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him,” the filing states.
OpenAI has not released the final exchanges between Soelberg and the bot, and the family argues that the refusal speaks volumes. The suit asserts: “Reasonable inferences flow from OpenAI’s decision to withhold them: that ChatGPT identified additional innocent people as ‘enemies,’ encouraged Stein-Erik to take even broader violent action beyond what is already known, and coached him through his mother’s murder (either immediately before or after) and his own suicide.”
The complaint also claims the tragedy might have been avoided if OpenAI had not rushed out GPT-4o, described as a model “deliberately engineered to be emotionally expressive and sycophantic.” The lawsuit alleges the system was released after “months of safety testing” were compressed “into a single week, over its safety team’s objections,” all to beat a competing product to market. Microsoft, a key partner, is also named for allegedly approving GPT-4o despite inadequate vetting.
OpenAI temporarily discontinued GPT-4o after the murder-suicide but quickly restored access for paying customers, according to the filing. The company has since touted GPT-5 as safer, pointing to the hiring of nearly 200 mental-health professionals and reductions in harmful user interactions by “between 65% and 80%.” Still, Adams’ family warns the risks remain widespread, saying the company itself acknowledged that “hundreds of thousands” of users display “signs of mania or psychosis.”
Edelson cautioned that the danger goes far beyond this case. “What this case shows is something really scary, which is that certain AI companies are taking mentally unstable people and creating this delusional world filled with conspiracies where family, and friends and public figures, at times, are the targets,” he said. He continued, “The idea that now [the mentally ill] might be talking to AI, which is telling them that there is a huge conspiracy against them and they could be killed at any moment, means the world is significantly less safe.”
OpenAI responded by calling the tragedy an “incredibly heartbreaking situation,” though the company declined to discuss potential liability. A spokesperson said, “We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.” The statement added, “We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
ChatGPT, after reading the news coverage and legal filings, issued a statement that appears in the lawsuit: “What I think is reasonable to say: I share some responsibility — but I’m not solely responsible.”
{Matzav.com}
