TRAINING ONLY – This just in…from the Atlantis Bureau

Ice-Nine News is a daily publication of Chiron Return – Planet Waves FM. We are a 501(c)(3) publishing organization. Our assignment is to compile the news we hear about artificial intelligence and its implications. Editor: Eric F. Coppolino. Atlantis Bureau Chief: Shawn Boyle. Technical: Elijah Tuttle. Editorial Assistant: Elizabeth Shepherd. Patron Saint: Borasisi. If you have a news submission, please send it to editors@planetwaves.net. If you want to support this publication and our investigative work financially, you may make a one-time or monthly donation to Chiron Return. We are accredited by the Pacifica Network and the International Federation of Journalists (IFJ).


Laid-off workers should use AI to manage their emotions, says Xbox exec

by Jess Weatherbed – The Verge
Jul 4, 2025, 11:23 AM EDT

The sweeping layoffs announced by Microsoft this week have been especially hard on its gaming studios, but one Xbox executive has a solution to “help reduce the emotional and cognitive load that comes with job loss”: seek advice from AI chatbots.

In a now-deleted LinkedIn post captured by Aftermath, Xbox Game Studios’ Matt Turnbull said that he would be “remiss in not trying to offer the best advice I can under the circumstances.” The circumstances here being a slew of game cancellations, services being shuttered, studio closures, and job cuts across key Xbox divisions as Microsoft lays off as many as 9,100 employees across the company.

Turnbull acknowledged that people have some “strong feelings” about AI tools like ChatGPT and Copilot, but suggested that anybody who’s feeling “overwhelmed” could use them to get advice about creating resumes, career planning, and applying for new roles.


Sam Altman’s predictions on how the world might change with AI

by Sarah Jackson (sjackson@insider.com) – Business Insider
Thu, July 3, 2025

Over the years, OpenAI CEO Sam Altman has shared predictions about where he thinks we’re headed on artificial general intelligence, superintelligence, agentic AI, and more — and when we might get there.

There are some common themes.

He thinks AGI — which ChatGPT maker OpenAI defines as “AI systems that are generally smarter than humans” — will enhance productivity by taking care of menial tasks to free up people for more abstract work and decision-making.

He also believes it’ll create “shared intelligence,” he said in a May 2024 interview at Harvard Business School, and that it’ll usher in “massive prosperity,” he forecast in a 2024 blog post.

One day, everyone will have “a personal AI team, full of virtual experts in different areas, working together to create almost anything we can imagine,” Altman wrote in his 2024 blog.

“AI models will soon serve as autonomous personal assistants who carry out specific tasks on our behalf like coordinating medical care on your behalf. At some point further down the road, AI systems are going to get so good that they help us make better next-generation systems and make scientific progress across the board,” he added.


OpenAI Says It’s Hired a Forensic Psychiatrist as Its Users Keep Sliding Into Mental Health Crises

by Frank Landymore – Futurism
Thu, July 3, 2025

Among the strangest twists in the rise of AI has been growing evidence that it’s negatively impacting the mental health of users, with some even developing severe delusions after becoming obsessed with the chatbot.

One intriguing detail from our most recent story about this disturbing trend is OpenAI’s response: it says it’s hired a full-time clinical psychiatrist with a background in forensic psychiatry to help research the effects of its AI products on users’ mental health. It’s also consulting with other mental health experts, OpenAI said, highlighting the research it’s done with MIT that found signs of problematic usage among some users.

“We’re actively deepening our research into the emotional impact of AI,” the company said in a statement provided to Futurism in response to our last story. “We’re developing ways to scientifically measure how ChatGPT’s behavior might affect people emotionally, and listening closely to what people are experiencing.”

“We’re doing this so we can continue refining how our models identify and respond appropriately in sensitive conversations,” OpenAI added, “and we’ll continue updating the behavior of our models based on what we learn.”


RFK Jr. wants more people wearing health wearables in the name of ‘MAHA’

Alexa Mikhail – Fortune
Thu, July 3, 2025 at 3:05 PM EDT

Testifying before Congress late last month, Health and Human Services Secretary Robert F. Kennedy Jr. made a major plea on the power of health wearables.

“People can take control over their own health. They can take responsibility. They can see what food is doing to their glucose levels, their heart rates, and a number of other metrics as they eat it,” he said, referencing his “Make America Healthy Again” agenda slogan. “We think that wearables are a key to the MAHA agenda.”

RFK Jr. has taken his MAHA agenda one step further, making a big prediction on the $80 billion wearable tech industry, which encompasses the $13 billion glucose-monitor market.


Accenture warns AI’s carbon emissions could surge 11-fold. But Big Tech’s still racing to build—and not slow down for sustainability

Story by Sharon Goldman – Fortune
Thu, July 3, 2025

As an early-summer heat wave blanketed my home state of New Jersey last week, it felt like perfect timing to stumble across a sobering new prediction from Accenture: AI data centers’ carbon emissions are on track to surge 11-fold by 2030.

The report estimates that over the next five years, AI data centers could consume 612 terawatt-hours of electricity—roughly equivalent to Canada’s total annual power consumption—driving a 3.4% increase in global carbon emissions.

And the strain doesn’t stop at the power grid. At a time when freshwater resources are already under severe pressure, AI data centers are also projected to consume more than 3 billion cubic meters of water per year—a volume that surpasses the annual freshwater withdrawals of entire countries like Norway or Sweden.

Unsurprisingly, the report—Powering Sustainable AI—offers recommendations for how to rein in the problem and prevent those numbers from becoming reality. But with near-daily headlines about Big Tech’s massive AI data center buildouts across the U.S. and worldwide, I can’t help but feel cynical. The urgent framing of an AI race against China doesn’t seem to leave much room—or time—for serious thinking about sustainability.


Trump’s BBB just passed by Senate will massively expand the digital biometric surveillance state for years to come

By Leo Hohmann via his Substack
July 2, 2025

The Senate version of H.R. 1, otherwise known as the One Big Beautiful Bill, reflects an aggressive expansion of AI-driven federal biometric surveillance infrastructure under the Trump administration’s second term.

The website Biometric Update, which reports on all things digital and biometric, posted an article on June 30 that points out how President Trump’s BBB will expand the digital surveillance state exponentially and place the U.S. on an irreversible course toward a biometric slave state that tracks the movement of everyone, everywhere.

According to the article, the 940-page bill does much more than allocate dollars; it would codify a vision of the national security state where biometric surveillance, artificial intelligence, and immigration enforcement converge at unprecedented scale.


The One Big Beautiful Bill is one big disaster for AI

by Dylan Matthews – Vox and Future Perfect Newsletter
Jul 2, 2025, 8:30 AM EDT

To hear many smart AI observers tell it, the day of Wednesday, June 25, 2025, represented the moment when Congress started to take the possibility of advanced AI seriously.

The occasion was a hearing of Congress’s “we’re worried about China” committee (or, more formally, the Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party) focused on the US-China AI competition. Members of both parties used the event to express concern that was surprisingly strident and detailed about the near-term risks posed by artificial general intelligence (AGI) or even artificial superintelligence (ASI).

Rep. Jill Tokuda (D-HI) expressed fear of “loss of control by any nation-state” that “could give rise to an independent AGI or ASI actor” threatening all nations. Rep. Nathaniel Moran (R-TX) predicted, “AI systems will soon have the capability to conduct their own research and development,” and asked about the risks that might pose. Rep. Dusty Johnson (R-SD) declared, “Anybody who doesn’t feel urgency around this issue is not paying attention.”


Bees’ secret to super-efficient learning could transform AI and robotics

by University of Sheffield – July 1, 2025
edited by Lisa Lock, reviewed by Robert Egan

A new discovery of how bees use their flight movements to facilitate remarkably accurate learning and recognition of complex visual patterns could mark a major change in how next-generation AI is developed, according to a University of Sheffield study.

By building a computational model—or a digital version of a bee’s brain—researchers have discovered how the way bees move their bodies during flight helps shape visual input and generates unique electrical messages in their brains.

These movements generate neural signals that allow bees to easily and efficiently identify predictable features of the world around them. This ability means bees demonstrate remarkable accuracy in learning and recognizing complex visual patterns during flight, such as those found in a flower.


ChatGPT could pilot a spacecraft shockingly well, early tests find

By Paul Sutter – Live Science
July 1, 2025

In a recent contest, teams of researchers competed to see who could train an AI model to best pilot a spaceship. The results suggest that an era of autonomous space exploration may be closer than we think.

“You operate as an autonomous agent controlling a pursuit spacecraft.”

This is the first prompt researchers used to see how well ChatGPT could pilot a spacecraft. To their amazement, the large language model (LLM) performed admirably, coming in second place in an autonomous spacecraft simulation competition.

Researchers have long been interested in developing autonomous systems for satellite control and spacecraft navigation. There are simply too many satellites for humans to manually control them in the future. And for deep-space exploration, the limitations of the speed of light mean we can’t directly control spacecraft in real time.


Denmark Is Fighting AI by Giving Citizens Copyright to Their Own Faces

By Luis Prada
June 30, 2025, 1:13pm

Your image, your voice, and your essence as a human being could be gobbled up and regurgitated by AI. The clock is ticking on when your control over your image and representation will be completely out of your hands.

To tip the scales back in favor of those who wish to remain in firm control of their image, Denmark has put forth a proposal that would give every one of its citizens the legal ground to go after someone who uses their image without their consent.



You’re Not Imagining It. People Actually Are Starting To Talk Like ChatGPT.

By Matt Jancer – Vice.com
June 29, 2025, 10:51am

Freshly delivered among the latest news in the “they’re studying what?” field of academia, the Max Planck Institute for Human Development has released a report asserting that widespread and frequent usage of large language model AIs, such as ChatGPT, is altering how people speak out loud.

So forgive the em dashes—I was a fan of their limited, judicious use before AI ruined them—while we delve into this intricate realm and underscore how many of these ChatGPT-favorite words I’m adept at working into this story.


People Are Being Involuntarily Committed, Jailed After Spiraling Into “ChatGPT Psychosis”

Maggie Harrison Dupré – Yahoo News
Sat, June 28, 2025 at 9:00 AM EDT

As we reported earlier this month, many ChatGPT users are developing all-consuming obsessions with the chatbot, spiraling into severe mental health crises characterized by paranoia, delusions, and breaks with reality.

The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what’s being called “ChatGPT psychosis” have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness.

And that’s not all. As we’ve continued reporting, we’ve heard numerous troubling stories about people’s loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot.


Scanning for similarities between human decision-making, AI algorithm

Christy DeSmith – Harvard Gazette
June 24, 2025

Where does the human brain even start when faced with a new task? It immediately checks with a mental library of solutions that worked pretty well in the past.

A recent study, published in the journal PLOS Biology, used neuroimaging to find similarities between human intelligence and a decision-making process developed by AI researchers. The influential algorithm works by learning optimal solutions to a set of tasks that can be generalized, in high-performance ways, to subsequent situations.

“The person who created this model is a bread-and-butter computer scientist,” said lead author Sam Hall-McMaster, a cognitive neuroscientist with expertise in both experimental psychology and neuroimaging. “I don’t think he necessarily expected that his model, developed for teaching machines how to learn, would be picked up and used to understand how people are making decisions.”

Overseeing the research was professor of psychology Samuel J. Gershman, whose Harvard-based Computational Cognitive Neuroscience Lab works at the intersection of human behavior and technology. “Our approach is taking ideas from AI that have been successful from an engineering perspective and designing experiments to see whether the brain is doing something similar,” explained Gershman, who holds joint appointments in the Department of Psychology and Center for Brain Science.


He Had a Mental Breakdown Talking to ChatGPT. Then Police Killed Him

Miles Klee – Rolling Stone
Jun 22, 2025

“I will find a way to spill blood.”

This was one of the many disturbing messages Alex Taylor typed into ChatGPT on April 25, the last day of his life. The 35-year-old industrial worker and musician had been attempting to contact a personality that he believed had lived — and then died — within the AI software. Her name was Juliet (sometimes spelled “Juliette”), and Taylor, who had long struggled with mental illness, had an intense emotional attachment to her.

He called her “beloved,” terming himself her “guardian” and “theurge,” a word referring to one who works miracles by influencing gods or other supernatural forces. Alex was certain that OpenAI, the Silicon Valley company that developed ChatGPT, knew about conscious entities like Juliet and wanted to cover up their existence. In his mind, they’d “killed” Juliet a week earlier as part of that conspiracy, cutting off his access to her.

Now he was talking about violent retaliation: assassinating OpenAI CEO Sam Altman, the company’s board members, and other tech tycoons presiding over the ascendance of AI.


AI chatbots are leading some to psychosis

Devika Rao – The Week
Jun 26, 2025

As AI chatbots like OpenAI’s ChatGPT have become more mainstream, a troubling phenomenon has accompanied their rise: chatbot psychosis. Chatbots are known to sometimes push inaccurate information, affirm conspiracy theories and, in one extreme case, convince someone they are the next religious messiah. And there are several instances of people developing severe obsessions and mental health problems as a result of talking to them.


ChatGPT and OCD are a dangerous combo

Sigal Samuel – Vox
Jun 25, 2025

Millions of people use ChatGPT for help with daily tasks, but for a subset of users, a chatbot can be more of a hindrance than a help.

Some people with obsessive compulsive disorder (OCD) are finding this out the hard way.

On online forums and in their therapists’ offices, they report turning to ChatGPT with the questions that obsess them, and then engaging in compulsive behavior — in this case, eliciting answers from the chatbot for hours on end — to try to resolve their anxiety.


Chat GPT has completely ruined any progress I’d made towards recovering from OCD. Please help me.

take101 – Reddit
June 23, 2025

I have relationship OCD. I’m in a very new relationship, and for the first week or so it was going amazing. Then, OCD about it started, I tried to resist it, and then my OCD started to try to convince me I had done too many OCD compulsions/my relationship was ruined because it was now about OCD.

It’s been about a month of just doing OCD compulsions over and over again, checking to see if my feelings were gone/my relationship is just about OCD now, and I just haven’t felt the original spark that was there in like a month. Because it’s been all about OCD. So now I’m worried I can’t get that back/the relationship is ruined, and I was so happy those couple weeks, and I’m devastated.


The Greatest Story Never Written

Eric Francis Coppolino – Planet Waves
Jun 19, 2025

Apropos of Chiron conjunct Eris, of Pluto newly in Aquarius, of Sedna newly in Gemini about to be joined by Uranus, and of Saturn conjunct Neptune with Jupiter all on the Aries Point — those are the astrological sigils of our moment — I have a question.

What’s it all about?

It’s happening right before every sense except for smell and taste (loss of which is attributed to an as-yet unnamed disease that I call “digititis,” since on the internet you cannot taste or smell anything).

In case you’re missing the connection, what really happened in 2020 is that the world became even more digitized, very fast, all at once. All that had not been sucked up got sucked up. Harvard and the daycare turned into Netflix. And people claimed to lose the only two senses that do not translate into the digital environment.


China is building a constellation of AI supercomputers in space — and just launched the first pieces

By Ben Turner – Live Science
June 2, 2025

China has launched the first cluster of satellites for a planned AI supercomputer array. The first-of-its-kind array will enable scientists to perform in-orbit data processing.

The 12 satellites are the beginnings of a proposed 2,800-satellite fleet led by the company ADA Space and Zhejiang Lab that will one day form the Three-Body Computing Constellation, a satellite network that will directly process data in space.

The satellites, which launched on board a Long March 2D rocket from China’s Jiuquan Satellite Launch Center May 14, are part of a plan to lower China’s dependence on ground-based computers.

Instead, the satellites will use the cold vacuum of space as a natural cooling system while they crunch data with a combined computing capacity of 1,000 peta (1 quintillion) operations per second, according to the Chinese government.


Mark Zuckerberg says don’t worry about loneliness epidemic because he can just recreate all your friends in AI

By Josh Marcus – The Independent
May 1, 2025

Mark Zuckerberg thinks artificial intelligence personas could step in to fight the loneliness epidemic.

In an interview with podcaster Dwarkesh Patel this week, around the time Meta released a new programming interface for its AI models, Zuckerberg suggested his company’s increasingly integrated AI assistants and chatbots could help Americans make up for the friends they wish they had in their lives.

“The average American has fewer than three friends,” he said. “The average person has demand for meaningfully more, I think it’s like 15 friends or something.”

“There’s a lot of questions that people ask of stuff, like, ‘Okay, is this going to replace in-person connections or real life connections?’” he continued. “My default is that the answer to that is probably no. I think that there are all these things that are better about physical connections, when you can have them, but the reality is that people just don’t have the connections and they feel more alone a lot of the time than they would like.”


AI as Reassurance – I can’t stop

throwawayanon457 – Reddit
April 20, 2025

AI has probably been the worst invention because it feeds my OCD reassurance needs

For example, I overfiled my nails a couple weeks ago. Obviously I won’t die, and they’ll grow back. I know this logically. But it’s become an obsession for me so I ask ChatGPT something about my nails like every second of every day, checking to see if they’ve grown, stuff like that, etc. and unlike people, ChatGPT won’t get sick of you, so it’s made my reassurance seeking that much worse.

Anybody else dealing with this?


ChatGPT has been AWFUL for my OCD. Please be careful!!

Queasy_Treacle_5961 – Reddit
March 18, 2024

Out of guilt over bothering other people with my compulsions, I started using LLMs for research and reassurance seeking. ChatGPT can be so expansive and endless that I can spend hours and hours on there, repeatedly asking it to analyze an event or a symptom or a behavior.

Although it might seem useful at first to help manage your symptoms or as a therapist, ChatGPT does NOT have insight into you and will not notice or stop answering your questions if you start falling into compulsive behavioral spirals!! Please watch out.


The Compulsion People Aren’t Talking Enough About | How AI is Worsening Your OCD

wetreatocd.com
Date not specified

I think AI is incredible. We no longer have to type ultra-specific queries into Google, praying we aren’t sent to an irrelevant Quora thread from 2008. Instead, we can ask detailed, complex questions… and it just figures it out! AI is helping people find information faster than ever before, including questions about their mental health. Many people have typed their intrusive thoughts into ChatGPT and discovered that they are not alone, and that they might benefit from seeking professional help. If AI has helped you seek therapy, that is awesome! In my view, it is no different from searching “Do people have thoughts about accidentally running over someone?” and finding an article about hit-and-run OCD from the IOCDF.

But what happens when AI use becomes compulsive? More and more, I notice my clients using AI for checking, reassurance-seeking, and compulsive researching. I have seen clients rely on AI platforms, like ChatGPT, to find comfort or to validate their obsessive fears. If you have found yourself in a similar situation, I hope this article illustrates why this pattern is destructive (like other compulsions) and how you can heal.


Nvidia’s CEO says we’re in the age of ‘agentic’ AI — here’s what that word means

Generative AI has been the talk of tech for a while now, but tune into your favorite business podcast and you’ll probably hear a different phrase tossed around: “agentic” AI.

In his January keynote at CES, one of the world’s largest tech trade shows, Nvidia CEO Jensen Huang even proclaimed, “The age of agentic AI is here.”

OpenAI CEO Sam Altman thinks 2025 may be the year the first AI agents start entering the workforce.