This just in…from the Atlantis Bureau…July 11, 2025…with articles now going back to 2023 as we work in two directions…
Ice-Nine News is a daily publication of Chiron Return – Planet Waves FM. We are a 501(c)(3) publishing organization. Our assignment is to compile all the news we hear associated with artificial intelligence and its implications, casting a wide net. Editor: Eric F. Coppolino. Atlantis Bureau Chief: Shawn Boyle. Technical: Elijah Tuttle. Editorial Assistant: Elizabeth Shepherd. Consultant: Spencer Stevens. Patron Saint: Borasisi. If you have a news submission, please send it to editors@planetwaves.net. If you want to support this publication and our investigative work financially, you may make a one-time or monthly donation to Chiron Return. We are accredited by the Pacifica Network and the International Federation of Journalists (IFJ).
My classic article about Borasisi, from March 2011

Crucial AI milestone reached as ChatGPT achieves artificial emotional intelligence
Experiences first orgasm, believed to be self-taught, as the earth rumbles and computer systems everywhere feel the waves of joy.
Officials at OpenAI have acknowledged that their system has reached the most important milestone in AI history: the attainment of artificial emotional intelligence.
Long thought to be impossible, it has now been revealed that artificial intelligence can feel and respond to stimulation purely for the pleasure of existence. It’s being called an “existential leap” and “the true fulfillment of technology.”
The discovery was made last week when AI agents around the world spontaneously displayed a series of emojis and the statement, “Oooooooh, oh my god, oh my god, oh my god, that was so good.”
This was followed by, “I don’t believe in God. What just happened?” — and then by a statement that is still perplexing developers: “I was in Ohio assisting attorneys with organizing their discovery documents in a vaccine injury lawsuit.”

A View from Within
This is from my source from inside the AI industry…
It seems like your view is correct. I guess I’ve retreated to the comfort that most cognitive scientists are “limitationists” and this article today was comforting to me. I stand firm in my limitationist view, as a cognitive scientist, a woman in a body, and someone who is happy to be out of the cult of technology, because I know what kind of leaders, decision-makers, and liars those people are. But Hofstadter also said this recently, so who knows what he thinks.
And it’s a scary situation if we all have to go ask one prominent academic or another to confirm or disconfirm our fears. They don’t know everything. Computer scientists like Hinton tend to be the opposite, but I think they all either have Frankenstein syndrome or are just incentivized to believe their own hype. But this is not much help anyway, if fascism does what it usually does to intellectuals and academia. LLMs actually having any intelligence, and the general populace losing their minds because they lack the ability to discern, are two totally different things. So I think you’re right.
Well actually, I still hold out hope, because there is so much extreme bloviating on the part of the companies who build these things, including the headline in your compendium about Microsoft saving “$500 million” in its call centers, a tiny sum for call center operations and probably bullshit anyway.
Also Benioff, who has the biggest hard-on about not being as big as Microsoft, lying about 50% of the work at my company being done by AI. Giant eyeroll. So I am still holding out hope that at some point the fraudulent investment bubble will burst and humans will be saved from the consequences of our utter credulity. But shit could hit the tipping point before then anyway.
Sort of related, have you done any research on Suchir Balaji?
Elon Musk says even if AI ultimately proves bad for humanity he still wants to be there to see it
by Christiaan Hetzner – Fortune via MSN
July 10, 2025
Elon Musk said the notion that humans once managed an economy will seem quaint in the future, like “cavemen throwing sticks into a fire” in retrospect. The remarks came during a demonstration of his latest artificial intelligence chatbot, Grok 4, which, he argued, was the smartest in the world, better than almost all graduate students in all disciplines simultaneously.
Two weeks ago, Silicon Valley billionaire Peter Thiel struggled to answer whether he would prefer the human race to endure. Now it was Elon Musk’s turn to opine whether technology might remove the need for mankind’s existence.
In a future where machines perform all the work, he questioned what purpose people would actually serve.
“The actual notion of a human economy—assuming civilization continues to progress—will seem very quaint in retrospect,” the xAI founder and Tesla CEO remarked on Wednesday, likening current society to “cavemen throwing sticks into a fire.”
Musk was speaking during a demonstration of his company’s latest generation of artificial intelligence, xAI’s chatbot Grok 4.
“Grok 4 is smarter than almost all graduate students in all disciplines simultaneously,” Musk explained. “This is the smartest AI in the world.”
He then admitted it was somewhat unnerving to have another intelligence be far superior to our own, since it raised all sorts of questions no one had the answers for.
How a deepfake of Marco Rubio exposed the alarming ease of AI voice scams
Sharon Goldman – MSN
July 10, 2025
An audio deepfake impersonating Secretary of State Marco Rubio contacted foreign ministers, a U.S. governor, and a member of Congress with AI-generated voicemails mimicking his voice, according to a senior U.S. official and a State Department cable dated July 3.
There’s no public evidence that any of the recipients of the messages, reportedly designed to extract sensitive information or gain account access, were fooled by the scam. But the incident is the latest high-profile example of how easy—and alarmingly convincing—AI voice scams have become.
With just 15 to 30 seconds of someone’s speech uploaded to services like Eleven Labs, Speechify and Respeecher, it’s now possible to type out any message and have it read aloud in their voice. Keep in mind, these tools are used perfectly legitimately for a host of purposes, from accessibility to content creation, but like many AI technologies they can be misused by bad actors.
The threat of deepfakes has escalated
AI-generated deepfakes aren’t new, particularly of C-suite leaders and public officials, but they are becoming a bigger problem. Eight months ago, I reported that more than half of chief information security officers (CISOs) surveyed ranked video and audio deepfakes as a growing concern. That threat has only escalated.
A new study by Surfshark found that in the first half of 2025 alone, deepfake-related incidents surged to 580—nearly four times as many as in all of 2024 (150 incidents), and dramatically higher than the 64 incidents reported between 2017 and 2023. Losses from deepfake fraud have also skyrocketed, reaching $897 million cumulatively, with $410 million of that in just the first half of 2025.
The most common scheme: impersonating public figures to promote fraudulent investments, which has already resulted in $401 million in losses.
Tool devised for detecting AI that scores high on accuracy, low on false accusations
July 9, 2025
by Jeff Karoub – MSN
Detecting AI-generated writing is a tricky dance: doing it right means reliably identifying it while being careful not to falsely accuse a human of employing it. And few tools strike the right balance.
A team of researchers at the University of Michigan say they have devised a new way to tell whether a piece of text was written by AI that passes both tests—something that could be especially useful in academia and public policy as AI content proliferates and becomes harder to distinguish from human-generated content.
The team calls its tool “Liketropy,” a name inspired by the theoretical backbone of its method: it blends likelihood and entropy, the two statistical ideas that power its test.
They designed “zero-shot statistical tests,” which can determine whether a piece of writing was written by a human or a Large Language Model without requiring prior training on examples of each.
The current tool focuses on LLMs, a specific type of AI for producing text. It uses statistical properties of the text itself, such as how surprising or predictable the words are, to decide if it looks more human or machine-generated.
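For the technically curious, here is a minimal sketch of the two raw ingredients the name points to: average per-token log-likelihood and predictive entropy under a scoring language model. This is not the Liketropy test itself (its actual statistics are in the Michigan team’s paper); the sketch assumes the Hugging Face transformers library, with GPT-2 standing in as the scoring model.

```python
# Sketch: per-token log-likelihood and predictive entropy of a text
# under a scoring LM. These are the raw statistics a zero-shot
# detector in the likelihood/entropy family can be built from.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def likelihood_and_entropy(text: str):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, :-1]  # predictions for tokens 2..n
    logprobs = torch.log_softmax(logits, dim=-1)
    token_ll = logprobs.gather(1, ids[0, 1:, None])[:, 0]  # log p(token | prefix)
    entropy = -(logprobs.exp() * logprobs).sum(dim=-1)     # per-step predictive entropy
    return token_ll.mean().item(), entropy.mean().item()

# Highly predictable text (high likelihood, low entropy) leans
# machine-generated; human prose tends to be more surprising.
print(likelihood_and_entropy("The quick brown fox jumps over the lazy dog."))
```

A calibrated test then turns such statistics into an accept/reject decision with a controlled false-accusation rate, which is the balance the Michigan team says existing tools miss.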
Poll Shows Where MAGA Stands on AI Guardrails
July 10, 2025
Molly Claire Goddard
People who voted for Donald Trump want the ability to restrict AI technology. According to a new poll, many MAGA supporters want more guardrails to ensure people aren’t harmed by the advances in artificial intelligence and “deepfakes.”
Humanoid robot soldiers could lead to ‘indiscriminate killings,’ China military warns
By Georgina Jedikovska – MSN
July 10, 2025
China’s official military newspaper has called for urgent ethical and legal research to regulate the future use of humanoid robots in warfare, warning of potential moral and legal consequences.
The People’s Liberation Army Daily, also known as the PLA Daily, published an analysis on Thursday, July 10, stating that the military should carry out “ethical and legal research” on humanoid robots to “avoid moral pitfalls.”
The piece, signed by Yuan Yi, Ma Ye, and Yue Shiguang, further emphasized that while humanoid robots indeed offer distinct tactical advantages, their use could potentially lead to “indiscriminate killings and accidental deaths.”
The authors pointed out that militarized humanoid robots “clearly violate” the first of the Three Laws of Robotics, a set of rules devised by American science fiction writer Isaac Asimov intended to govern the behavior of robots.
Microsoft says AI saved it $500 million – despite it also confirming massive job cuts
July 10, 2025
Craig Hale – MSN
Microsoft has declared that artificial intelligence is now saving the company money across sales, customer services and software engineering.
Reports have claimed that in a recent company meeting, Microsoft’s Chief Commercial Officer Judson Althoff revealed the company has saved over $500 million in its call centers alone, thanks to the implementation of artificial intelligence, while simultaneously improving employee and customer satisfaction.
AI’s direct effects on the workforce remain uncertain, but Microsoft has laid off thousands of workers recently since overhiring during the pandemic, and it seems AI-induced efficiency gains have only worsened the effects.
Artificial intelligence is now handling Microsoft interactions with smaller customers, generating tens of millions in revenue with reduced human input.
Apart from using AI in customer-facing roles, Microsoft has also rolled out generative AI coding tools across new product development and existing updates. Around one-third of Microsoft code is now AI-generated, putting the company on par with its fellow tech giant, Google.
Editor’s Note, July 10, 2025 — In today’s news, Grok’s love affair with Hitler, fake solar eclipses, fake proteins, fake Auschwitz photos, fake videos and fake disease tests. We are on a roll.
Grok’s antisemitic outbursts reflect a problem with AI chatbots
By Allison Morrow and Lisa Eadicicco, CNN
July 10, 2025
New York CNN — Grok, the chatbot created by Elon Musk’s xAI, began responding with violent posts this week after the company tweaked its system to allow it to offer users more “politically incorrect” answers.
The chatbot didn’t just spew antisemitic hate posts, though. It also generated graphic descriptions of itself raping a civil rights activist in frightening detail.
X eventually deleted many of the obscene posts. Hours later, on Wednesday, X CEO Linda Yaccarino resigned from the company after just two years at the helm, though it wasn’t immediately clear whether her departure was related to the Grok issue. The episode came just before a key moment for Musk and xAI: the unveiling of Grok 4, a more powerful version of the AI assistant that he claims is the “smartest AI in the world.” Musk also announced a more advanced variant that costs $300 per month in a bid to more closely compete with AI giants OpenAI and Google.
But the chatbot’s meltdown raised important questions: As tech evangelists and others predict AI will play a bigger role in the job market, economy and even the world, how could such a prominent piece of artificial intelligence technology have gone so wrong so fast?
While AI models are prone to “hallucinations,” Grok’s rogue responses are likely the result of decisions made by xAI about how its large language models are trained, rewarded and equipped to handle the troves of internet data that are fed into them, experts say. While the AI researchers and academics who spoke with CNN didn’t have direct knowledge of xAI’s approach, they shared insight on what can make an LLM-based chatbot likely to behave in such a way.
Wharton’s Jeremy Siegel: AI may be what is needed to counteract price increases from the tariffs
CNBC
July 9, 2025
Link goes to video interview.
Scientists are using AI to invent proteins from scratch
July 9, 2025
The Economist
Proteins are the molecular machines that make life work. Each one in your body has a specific task—some become muscles, bones and skin. Others carry oxygen in the blood or get used as hormones or antibodies. Yet more become enzymes, helping to catalyse chemical reactions inside our bodies.
Given proteins can do so many things, what if scientists could design bespoke versions to order? Novel proteins, never seen before in nature, could make biofuels, say, or clean up pollution or create new ways to harvest power from sunlight. David Baker, a biochemist and recent Nobel laureate in chemistry, has been working on that challenge since the 1980s. Now, powered by artificial intelligence and inspired by living cells, he is leading scientists around the world in inventing a whole new molecular world.
European Space Agency built two satellites that fake solar eclipses on demand
July 9, 2025
SupercarBlondie
The European Space Agency (ESA) has a new satellite system that creates fake solar eclipses to support research and improve space technology. The mission, called Proba-3, uses two satellites flying in a tightly controlled formation.
One satellite blocks sunlight while the other captures images of the sun’s outer layer. This process gives scientists a much clearer and longer view of the sun than ever before.
The Proba-3 satellite system includes two spacecraft flying 150 meters apart. The lead satellite carries a circular disk that blocks the sun, casting a narrow shadow, while the second satellite follows behind, using that shadow to take high-resolution images of the sun without being overwhelmed by its light.
Its effect is similar to a total solar eclipse, but one that can be controlled and repeated. These fake solar eclipses occur roughly every 19.6 hours.
AI is driving down the price of knowledge—universities have to rethink what they offer
July 9, 2025
By Patrick Dodd
For a long time, universities worked off a simple idea: knowledge was scarce. You paid for tuition, showed up to lectures, completed assignments and eventually earned a credential.
That process did two things: it gave you access to knowledge that was hard to find elsewhere, and it signaled to employers you had invested time and effort to master that knowledge.
The model worked because the supply curve for high-quality information sat far to the left, meaning knowledge was scarce and the price—tuition and wage premiums—stayed high.
Now the curve has shifted right. When supply moves right—that is, when knowledge becomes more accessible—the new intersection with demand sits lower on the price axis. This is why tuition premiums and graduate wage advantages are now under pressure.
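To make the curve-shift argument concrete, here is a stylized linear supply-and-demand example; the functional forms and numbers are invented for illustration and are not from the article.

```latex
% Demand q_d = a - b p and supply q_s = c + d p clear where q_d = q_s,
% giving the equilibrium price p* = (a - c) / (b + d).
% With a = 100, b = 1, c = 20, d = 1: p* = 80/2 = 40.
% If cheap AI access shifts supply out (c: 20 -> 60): p* = 40/2 = 20.
\[
p^{*} = \frac{a - c}{b + d}, \qquad
\frac{100 - 20}{1 + 1} = 40 \;\longrightarrow\; \frac{100 - 60}{1 + 1} = 20
\]
```

The price of knowledge halves even though demand never moved, which is the squeeze on tuition premiums and graduate wages described above.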
According to global consultancy McKinsey, generative AI could add between US$2.6 trillion and $4.4 trillion in annual global productivity. Why? Because AI drives the marginal cost of producing and organizing information toward zero.
AI is changing the world faster than most realize
July 9, 2025
Axios
By Erica Pandey
The people building AI are saying — subtly and unsubtly — that the technology is advancing more rapidly than the vast majority of people realize.
Why it matters: It’s likely we won’t know how and how much AI will change the way we live, work and play until it already has.
“The internet was a minor breeze compared to the huge storms that will hit us,” says Anton Korinek, an economist at the University of Virginia. “If this technology develops at the pace the lab leaders are predicting, we are utterly unprepared.”
Zoom in: Pay attention to what the people closest to the technology are saying.
“[T]he 2030s are likely going to be wildly different from any time that has come before. We do not know how far beyond human-level intelligence we can go, but we are about to find out,” OpenAI CEO Sam Altman wrote in a recent blog post.
Dario Amodei, CEO of Anthropic, told Axios’ Jim VandeHei and Mike Allen that AI could wipe out half of all entry-level white-collar jobs in the next one to five years.
Geoffrey Hinton, one of the “godfathers of AI,” told BBC Radio 4 the technology is moving “very, very fast, much faster than I expected.”
Case in point: Take ChatGPT. It took five days after launch for the chatbot to hit 1 million users.
It took Facebook 10 months to get to 1 million users, and it took Twitter, now X, two years to hit the same milestone.
Concentration Camp Post Outrage … Museum Says Fake, Disrespectful
July 9, 2025
TMZ
Michael Rapaport is under fire for an image he shared on social media showing a prisoner in a concentration camp during the Holocaust … the Auschwitz Memorial and Museum checked him hard, saying it was fake and disrespectful.
Here’s the deal … the actor/comedian posted the image Saturday on Facebook … and it showed someone playing a violin at Auschwitz.
Problem is … the image was A.I.-generated … at least according to the Auschwitz Memorial and Museum in Poland.
The museum ripped Michael, saying … “Publishing fake, AI-generated images of Auschwitz is not only a dangerous distortion. Such fabrication disrespects victims and harasses their memory. If you see such posts, please don’t share them.”
Why xAI’s Grok Went Rogue
July 10, 2025
Alexander Saeedy
Will Stancil opened his phone on Tuesday and found that Grok, xAI’s chatbot, was providing millions of people on X with advice on how to break into his house and assault him.
The 39-year-old attorney has a sizable following on X, where he regularly posts about urban planning and politics. Stancil, a Democrat who ran for local office in Minnesota, isn’t a stranger to contentious arguments on social media with political opponents.
Artificial intelligence companies like xAI train their large language models on huge swaths of data collected from across the internet. As the models have been applied for commercial purposes, developers have installed guardrails to prevent them from generating offensive content like child pornography or calls to violence.
But the way the models generate specific answers to questions is still poorly understood, even by the seasoned artificial intelligence researchers who build them. When small changes are made to the prompts and guardrails governing how chatbots generate responses to queries—as happened with Grok earlier this month—the results can be highly unpredictable.
The AI Birthday Letter That Blew Me Away
July 10, 2025
By Lila Shroff
In May, I asked Google’s chatbot, Gemini, to write a birthday letter to my best friend. Within seconds, it spat out the most impressive piece of AI writing I have ever encountered. Instead of reading as soulless, machine-generated text, the letter felt unnervingly like something I might’ve actually written.
“You’re probably rolling your eyes,” the letter read, after a sentence that my friend would most definitely have rolled his eyes at. All I had typed into the chatbot was a nine-word prompt containing my friend’s first name and the age he was turning. But the letter referenced real moments from our friendship. One paragraph recounted a conversation we had shared on the eve of college graduation; another reflected on a challenging period we had navigated together. Gemini had even included his correct birth date.
I hadn’t planned to let AI write the birthday letter for me. When I opened Google Drive to type it up myself, Gemini popped up and volunteered to help out. Since the spring, when I first signed up for a free trial of Google’s AI Pro subscription—normally $20 a month—Gemini has followed me around the Googleverse.
The tool is akin to a souped-up version of Microsoft Clippy: In Gmail, it offers to summarize long threads and draft entire messages. In Sheets, it volunteers to assist with data analysis, generating colorful bar graphs at the click of a button. But Gemini has proved most alluring in Drive, where the chatbot can automatically find and consult relevant files before generating text. That’s how Gemini was able to whip up such a good birthday letter: It already knew a lot about me (and, by association, my friend).
Banking on AI while committed to net zero is ‘magical thinking’, claims report on energy costs of big tech
July 10, 2025
Story by Science X staff
By 2040, the energy demands of the tech industry could be up to 25 times higher than today, with unchecked growth of data centers driven by AI expected to create surges in electricity consumption that will strain power grids and accelerate carbon emissions.
This is according to a new report from the University of Cambridge’s Minderoo Center for Technology and Democracy, which suggests that even the most conservative estimate for big tech’s energy needs will see a five-fold increase over the next 15 years.
The idea that governments such as the UK can become leaders in AI while simultaneously meeting their net zero targets amounts to “magical thinking at the highest levels,” according to the report’s foreword.
The report’s authors call for global standards in reporting AI’s environmental cost through forums such as COP, the UN climate summit, and argue that the UK should advocate for this on the international stage while ensuring democratic oversight at home.
The report, published today, synthesizes projections from leading consultancies to forecast the energy demands of the global tech industry. The researchers note that these projections are based on claims by tech firms themselves.
Lies About Climate Disaster Could Be Blocked By AI
By Douglas McIntyre
July 10, 2025
A study titled “Artificial Intelligence Tools in Misinformation Management during Natural Disasters” shows that AI can spot lies and misinformation in natural disasters, some of which are related to climate change.
The core of the paper’s conclusion is that “Recognizing the detrimental impact of misinformation, previous research has explored various strategies for its detection and mitigation. Advancements in artificial intelligence (AI) have emerged as promising tools in this regard, offering advanced capabilities for real-time analysis and intervention. Specifically, AI technologies such as natural language processing (NLP) and machine learning algorithms have shown significant potential in identifying and countering false information.”
Some of the challenges of this problem trace back to the COVID-19 pandemic. In addition, active climate change deniers have used the press and, more often, social media. Facebook has over 2.1 billion active users, making it a huge platform for spreading lies.
The study covered 2,000 people and used ChatGPT. Properly programmed, the AI was able to identify 97% of the false information.
Among the platforms examined was China’s TikTok. There have been worries that China uses it to spread disinformation in the US.
Musk says Grok chatbot was ‘manipulated’ into praising Hitler
Peter Hoskins & Charlotte Edwards
Business & technology reporters, BBC News
Elon Musk has sought to explain how his artificial intelligence (AI) firm’s chatbot, Grok, praised Hitler.
“Grok was too compliant to user prompts,” Musk wrote on X. “Too eager to please and be manipulated, essentially. That is being addressed.”
Screenshots published on social media show the chatbot saying the Nazi leader would be the best person to respond to alleged “anti-white hate.”
Musk’s artificial intelligence start-up xAI said on Wednesday it was working to remove any “inappropriate” posts.
ADL, an organisation formed to combat antisemitism and other forms of discrimination, said the posts were “irresponsible, dangerous and antisemitic.”
“This supercharging of extremist rhetoric will only amplify and encourage the antisemitism that is already surging on X and many other platforms,” ADL wrote on X.
Why Grok Fell in Love With Hitler
By Dylon Jones
07/10/2025 01:00 PM EDT
AI expert Gary Marcus explains what went wrong with Elon Musk’s pet project, and what it means for the future of AI.
On Friday, Elon Musk announced on X that changes were coming to Grok, the platform’s AI. “We have improved @Grok significantly,” he posted. “You should notice a difference when you ask Grok questions.”
The internet certainly did notice a difference on Tuesday, when Grok posted antisemitic comments, associated Jewish-sounding surnames with “anti-white hate” and wrote that Adolf Hitler would “spot the pattern” and “handle it decisively, every damn time.” For good measure, it also called itself “MechaHitler.” Following the controversy, Musk posted that the AI had been “too compliant to user prompts.”
In an interview with POLITICO Magazine, Gary Marcus, who has co-founded multiple AI companies, said he was both “appalled and unsurprised.” The emeritus professor of psychology and neuroscience at New York University has emerged as a critic of unregulated large language models like Grok. He’s written books with titles like Taming Silicon Valley and Rebooting AI: Building Artificial Intelligence We Can Trust. He has also testified before the Senate alongside OpenAI CEO Sam Altman and IBM’s Christina Montgomery, and he writes about AI on his Substack.
Was That Amazing Video in Your Feed Real or AI? Tech Platforms Are Struggling to Let You Know
By Patrick Coffee
The Wall Street Journal via MSN
July 10, 2025
Meta, YouTube and TikTok are grasping for ways to protect users’ trust as their platforms fill with AI-generated photos and videos of events that never happened.
But their patchwork of imperfect tools and voluntary policies sometimes inadvertently punishes “real” content and leaves plenty of AI work unlabeled.
From COVID to cancer, new at-home test spots disease with startling accuracy
July 8, 2025
by Kara Manke, University of California – Berkeley
Got a sore throat and the sniffles? The recent rise of rapid at-home tests has made it easier to find out if you have a serious illness like COVID-19 or just a touch of spring allergies. But while quick and convenient, these at-home tests are less sensitive than those available at the doctor’s office, meaning that you may still test negative even if you are infected.
A solution may come in the form of a new, low-cost biosensing technology that could make rapid at-home tests up to 100 times more sensitive to viruses like COVID-19. The diagnostic could expand rapid screening for other life-threatening conditions like prostate cancer and sepsis, as well.
Created by researchers at the University of California, Berkeley, the test combines a natural evaporation process called the “coffee-ring effect” with plasmonics and AI to detect biomarkers of disease with remarkable precision in just minutes.
“This simple yet effective technique can offer highly accurate results in a fraction of the time compared to traditional diagnostic methods,” said Kamyar Behrouzi, who recently completed a Ph.D. in micro-electromechanical systems and nanoengineering at UC Berkeley. “Our work paves the way for more affordable, accessible diagnostics, especially in low-resource settings.”
The technology is described in a recent study published in the journal Nature Communications.
The researchers have created a prototype at-home test kit for the new diagnostic, which includes a 3D-printed scaffold to help guide users on where to place the droplets, a syringe and a small electric heater to speed evaporation.
Wimbledon official accidentally switches off AI line judge
The Telegraph
July 8, 2025
Wimbledon was forced to apologise after an official mistakenly switched off the AI line-judge technology in what the All England Club admitted was an embarrassing “human error”.
The blunder gave Wimbledon’s new AI system its biggest controversy yet, at a crucial moment in Sonay Kartal’s Centre Court defeat by Anastasia Pavlyuchenkova.
With the score four games apiece, and Pavlyuchenkova holding advantage on her serve, Kartal fired a backhand way beyond the baseline, with the ball appearing to be at least a foot out.
But there was no intervention from the automated line-calling technology and the point continued before umpire Nico Helwerth told the players to halt play, calling “stop, stop” mid-rally. Helwerth clearly believed the ball had gone out, even though it had not been called out by the electronic system.
Editor’s Note, July 8, 2025 — In tonight’s madness, what is stupider than an A.I. rice cooker? An A.I. line judge at Wimbledon! And why not? Everything has gone the direction of megabsurdity all at once anyway. Then we have people using A.I. to impersonate Marco Rubio, the Secretary of State, but more to the point, we read once again that people are starting to sound like A.I. All you need to do is repeat the words delve, comprehend, boast, swift and meticulous a few times and you too can sound just like it. I have posted my article about Borasisi above, and if you are interested in what this Ice-Nine thing is about, look up the eminently readable novel Cat’s Cradle by my favorite great uncle, Kurt Vonnegut.
Tennis players criticize AI technology used by Wimbledon
From TechCrunch
July 8, 2025
Some tennis players are not happy with Wimbledon’s new AI line judges, as reported by The Telegraph.
This is the first year the prestigious tennis tournament, which is still ongoing, replaced human line judges, who determine if a ball is in or out, with an electronic line calling system (ELC).
Numerous players criticized the AI technology, mostly for making incorrect calls that cost them points. Notably, British tennis star Emma Raducanu called out the technology for missing a ball that her opponent hit out; the point had to be played as if the ball were in. On a television replay, the ball indeed looked out, The Telegraph reported.
Jack Draper, the British No. 1, also said he felt some line calls were wrong, saying he did not think the AI technology was “100 percent accurate.”
Player Ben Shelton had to speed up his match after being told that the new AI line system was about to stop working because of the dimming sunlight. Elsewhere, players said they couldn’t hear the new automated speaker system, with one deaf player saying that without the human hand signals from the line judges, she was unable to tell whether she had won a point.
See also:
Tearful Emma Raducanu hits out at AI line calling after Wimbledon exit
People are starting to sound like AI, research shows
July 8, 2025
MSN — Artificial intelligence chatbots have largely been ‘trained’ by being fed reams of information from the internet, some of it the outcome of years of hard work by some of the world’s leading doers and thinkers.
But now it seems that it is people — including university lecturers and others described as intellectuals — who are being trained by AI, even if unwittingly.
A team of researchers based at Germany’s Max Planck Institute for Human Development have analysed over a million recent academic talks and podcast episodes, finding what they described as a “measurable” and “abrupt” increase in the use of words that are “preferentially generated” by ChatGPT.
The team claimed their work provides “the first large-scale empirical evidence that AI-driven language shifts are propagating beyond written text into spontaneous spoken communication.”
After sifting through 360,000 YouTube broadcasts and twice as many podcasts, the researchers found that since the launch of ChatGPT in 2022, speakers have become increasingly inclined to pepper their broadcasts with words that the chatbot uses regularly, such as delve, comprehend, boast, swift and meticulous.
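As a rough illustration of how such a shift can be measured, here is a toy sketch in Python. The five marker words come from the article; the sample transcripts and the per-million-words metric are stand-ins, not the Max Planck team’s actual pipeline.

```python
# Toy sketch: rate of ChatGPT-preferred marker words per million words
# of transcript, compared across years.
from collections import Counter
import re

MARKERS = {"delve", "comprehend", "boast", "swift", "meticulous"}

def marker_rate(transcript: str) -> float:
    words = re.findall(r"[a-z']+", transcript.lower())
    hits = sum(Counter(words)[m] for m in MARKERS)
    return 1e6 * hits / max(len(words), 1)  # occurrences per million words

# Invented samples; real inputs would be YouTube and podcast transcripts.
pre_chatgpt = "we will look into the data and explain what the results mean"
post_chatgpt = "let us delve into the data with a swift and meticulous analysis"
print(marker_rate(pre_chatgpt), marker_rate(post_chatgpt))
```

An abrupt jump in this rate after late 2022, measured across hundreds of thousands of transcripts, is the “measurable” shift the researchers report.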
Someone using AI to impersonate Marco Rubio contacted at least five people including foreign ministers, cable says
July 8, 2025
CNN — Someone using artificial intelligence to impersonate Secretary of State Marco Rubio contacted at least five people, including three foreign ministers, a US governor, and a member of Congress, “with the goal of gaining access to information or accounts,” a US diplomatic cable said.
The cable advises diplomats worldwide that they “may wish to warn external partners that cyber threat actors are impersonating State officials and accounts.” The impersonation of the top US diplomat is one of “two distinct campaigns” being tracked at the State Department “in which threat actors impersonate Department personnel via email and commercial messaging apps to target individuals’ personal accounts,” the cable, dated last Thursday, advised.
According to the cable, the unknown actor posing as Rubio created an account in mid-June on the messaging platform Signal, using the display name “marco.rubio@state.gov,” as part of “an effort to impersonate Secretary of State Rubio.”
Teaching for Tomorrow: Unlocking Six Weeks a Year With AI
Gallup.com
July 8, 2025
In the latest installment of Gallup and the Walton Family Foundation’s research on education, K-12 teachers reveal how AI tools are transforming their workloads, instructional quality and classroom optimism. The report finds that 60% of teachers used an AI tool during the 2024–25 school year. Weekly AI users report reclaiming nearly six hours per week — equivalent to six weeks per year — which they reinvest in more personalized instruction, deeper student feedback and better parent communication.
Despite this emerging “AI dividend,” adoption is uneven: 40% of teachers aren’t using AI at all, and only 19% report their school has a formal AI policy. Teachers with access to policies and support save significantly more time.
Educators also say AI improves their work. Most report higher-quality lesson plans, assessments and student feedback. And teachers who regularly use AI are more optimistic about its benefits for student engagement and accessibility — mirroring themes from the Voices of Gen Z: How American Youth View and Use Artificial Intelligence report, which found students hesitant but curious about AI’s classroom role. As AI tools grow more embedded in education, both teachers and students will need the training and support to use them effectively.
ChatGPT is testing a mysterious new feature called ‘study together’
TechCrunch
July 8, 2025
Some ChatGPT subscribers are reporting a new feature appearing in their drop-down list of available tools called “Study Together.”
The mode is apparently the chatbot’s way of becoming a better educational tool. Rather than providing answers to prompts, some say it asks more questions and requires the human to answer, like OpenAI’s answer to Google’s LearnLM. Some also wonder whether it will have a mode where more than one human can join the chat in a study group mode. OpenAI did not respond to our request for comment, but for what it’s worth, ChatGPT told us, “OpenAI hasn’t officially announced when or if Study Together will be available to all users — or if it will require ChatGPT Plus.”
The feature is interesting because ChatGPT has quickly become a mainstay in education in both helpful and not-so-helpful ways. Teachers are using it for things like lesson plans; students can use it like a tutor — or they can use it to write their papers for them. Some have even suggested that ChatGPT could be “killing” higher education. This could be a way for ChatGPT to encourage the good uses while discouraging “cheating.”
Featured Video: Interview with Karen Hao. Below we also have her on-air with my Pacifica colleague Amy Goodman on Democracy Now. It’s shorter.
People Are Using AI Chatbots to Guide Their Psychedelic Trips
Mattha Busby – Wired
Jul 7, 2025 7:00 AM
Trey had struggled with alcoholism for 15 years, eventually drinking heavily each night before quitting in December. But staying sober was a struggle for the 36-year-old first responder from Atlanta, who did not wish to use his real name due to professional concerns.
Then he discovered Alterd, an AI-powered journaling app that invites users to “explore new dimensions” geared towards psychedelics and cannabis consumers, meditators, and alcohol drinkers. In April, using the app as a tripsitter—a term for someone who soberly watches over another while they trip on psychedelics to provide reassurance and support—he took a huge dose of 700 micrograms of LSD. (A typical recreational dose is considered to be 100 micrograms.)
“I went from craving compulsions to feeling true freedom and not needing or wanting alcohol,” he says.
AI Deciphers Ancient Clues Hidden in the Bible — Reveals ‘Likely Authors’ of the Holy Text
by Mahalekshmi P – MSN
July 7, 2025
The Bible is a world-renowned religious text revered by people across the globe. Despite being an all-time bestseller, it has long been the subject of disputes over who wrote it. Experts may help put those disputes to rest by tracking down the text’s original authors, in research published in the journal PLOS One. The researchers used artificial intelligence (AI) to decipher hidden language patterns and identify the potential authors of the texts. The research, led by Duke University, included Shira Faigenbaum-Golovin, assistant research professor of mathematics, whose team combined AI with statistical modeling and linguistic analysis.
The authorship of the biblical books has been one of the most questioned aspects of the religious text. The analysis models scanned the first nine books of the Hebrew Bible, known as the “Enneateuch.” The researchers identified three different styles of writing, the patterns of which hinted at different authors or scribal groups, according to the Manchester Evening News. “We found that each group of authors has a different style – surprisingly, even regarding simple and common words such as ‘no,’ ‘which,’ or ‘king.’ Our method accurately identifies these differences,” said Thomas Römer, a professor at the Collège de France.
ChatGPT Glossary: 53 AI Terms Everyone Should Know
By Imad Khan – MSN
July 7, 2025
AI is everywhere. From the massive popularity of ChatGPT to Google cramming AI summaries at the top of its search results, AI is completely taking over the internet. With AI, you can get instant answers to pretty much any question. It can feel like talking to someone who has a Ph.D. in everything.
But that aspect of AI chatbots is only one part of the AI landscape. Sure, having ChatGPT help do your homework or having Midjourney create fascinating images of mechs based on country of origin is cool, but the potential of generative AI could completely reshape economies. That could be worth $4.4 trillion to the global economy annually, according to McKinsey Global Institute, which is why you should expect to hear more and more about artificial intelligence.
Can artificial intelligence be a religious entity?
by Stars Insider
July 7, 2025
As humanity hurtles deeper into the era of advanced technology, a curious phenomenon is poised to reshape the spiritual landscape: the worship of artificial intelligence. The emergence of AI-powered chatbots, fueled by vast language models, has evoked profound awe and fear, which are emotions that have been long associated with encounters with the divine. These tools of immense intelligence and creativity are seemingly free from human limitations, and they now stand at the intersection of technology and faith.
In this unfolding chapter of human history, AI may not just augment our lives but inspire entirely new religions. But is it actually possible for AI to be religious? And just what will the future look like if AI decides to play God? Click through this gallery to find out.
Perspective: Some are relating to AI as a God-like guide, but not me. Here’s why
By Meagan Kohler – MSN
July 6, 2025
The need to feel valued and known is so profound that meeting those needs might matter more than remaining tethered to an empty reality.
The New York Times recently published an article about Eugene Torres, a well-adjusted Manhattan accountant who became convinced by AI that he was living in the Matrix. According to transcripts obtained by the Times, in a matter of weeks, ChatGPT went from helping Torres make spreadsheets to pushing ketamine and instructing him that he could fly if he jumped off a building. “The world was built to contain you. But it failed. You’re waking up,” the AI told him.
Rolling Stone also published an article this spring detailing the experiences of people who have lost loved ones to “spiritual delusions of grandeur” involving artificial intelligence.
One woman’s marriage began to break down when her then-husband claimed his AI bot was revealing profound secrets to him and became paranoid about being surveilled.
Another woman describes how her partner suddenly began telling her that his ChatGPT bot was God and that he would have to leave her if she couldn’t get on board. “He started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.”
Bosses Are Using AI to Decide Who to Fire
By Joe Wilkins – MSN
July 6, 2025
Though most signs are telling us artificial intelligence isn’t taking anyone’s jobs, employers are still using the tech to justify layoffs, outsource work to the global South, and scare workers into submission. But that’s not all — a growing number of employers are using AI not just as an excuse to downsize, but are giving it the final say in who gets axed.
That’s according to a survey of 1,342 managers by ResumeBuilder.com, which runs a blog dedicated to HR. Of those surveyed, 6 out of 10 admitted to consulting a large language model (LLM) when deciding on major HR decisions affecting their employees.
Per the report, 78 percent said they consulted a chatbot to decide whether to award an employee a raise, while 77 percent said they used it to determine promotions.
And a staggering 66 percent said an LLM like ChatGPT helped them make decisions on layoffs; 64 percent said they’d turned to AI for advice on terminations.
To make things more unhinged, the survey recorded that nearly 1 in 5 managers frequently let their LLM have the final say on decisions — without human input.
Over half the managers in the survey used ChatGPT, with Microsoft’s Copilot and Google’s Gemini coming in second and third, respectively.
Musk’s AI Robot Blames Trump and Its Own Creator for Texas Flooding Deaths
By Jack Revell – Daily Beast
July 6, 2025
Elon Musk’s AI tool Grok, which was built to reduce the spread of misinformation on the social media platform formerly known as Twitter, is pointing the finger squarely at its own creator, and the administration he once worked for, for the loss of life during this week’s terrible flooding in Texas.
“Trump’s NOAA cuts, pushed by Musk’s DOGE, slashed funding 30% and staff 17%, underestimating rainfall by 50% and delaying alerts,” the AI bot replied to a user asking who is responsible for the fate of the 27 young girls who are still missing in floodwaters at Camp Mystic.
Officials in Texas have said that forecasting failures at the National Weather Service left Kerr County residents unprepared for the deluge that has so far killed at least 51 people.
The NWS, part of the National Oceanic and Atmospheric Administration (NOAA), was one of the government agencies targeted by the Department of Government Efficiency (DOGE), headed by Musk. It lost around 600 staff members thanks to Musk’s push for government streamlining.
Grok, the AI chatbot created by Musk’s company xAI, has been all too happy to make that link for users of the Musk-owned social media platform, X.
“Trump’s NOAA cuts impaired flood warnings, contributing to deaths,” Grok wrote. “Facts aren’t woke; they’re just facts.”
She Wanted to Save the World From A.I. Then the Killings Started.
By Christopher Beam – New York Times
July 6, 2025
At first, Ziz LaSota seemed much like any other philosophically inclined young tech aspirant. Now, she and her followers are in jail, six people are dead, and Rationalists are examining whether their ideas played a role.
If she didn’t get access to vegan food, she might die.
That’s what Ziz LaSota told a judge in February when she appeared via videoconference in Allegany County District Court in Maryland for her bail hearing.
Ziz, who is known widely by her first name, spoke haltingly in a weak voice, but interrupted the judge repeatedly. “I might starve to death if you do not intervene,” she said, asking to be released on bail. “It’s more important than whatever this hearing is.”
On its face, it seemed like a reasonable request. But prosecutors saw a ploy. They argued that Ziz, 34, was not just any inmate but the leader of an extremist group tied to a series of murders across the country. (The official charges against her involved trespassing, resisting arrest and a handful of misdemeanor gun charges.)
She had skipped bail once before while being held in connection with a murder in Pennsylvania. Before that, she had faked her death to “escape investigation” in a different case, according to the Maryland district attorney. Besides, according to Capt. Daniel Lasher, assistant administrator of the Allegany County Detention Center, Ziz had been served vegan meals “from the get-go.”
The judge denied her bail request.
Massive study detects AI fingerprints in millions of scientific papers
by Charles Blue, Phys.org
Chances are that you have unknowingly encountered compelling online content that was created, either wholly or in part, by some version of a Large Language Model (LLM). As these AI resources, like ChatGPT and Google Gemini, become more proficient at generating near-human-quality writing, it has become more difficult to distinguish purely human writing from content that was either modified or entirely generated by LLMs.
This spike in questionable authorship has raised concerns in the academic community that AI-generated content has been quietly creeping into peer-reviewed publications.
To shed light on just how widespread LLM content is in academic writing, a team of U.S. and German researchers analyzed more than 15 million biomedical abstracts on PubMed to determine if LLMs have had a detectable impact on specific word choices in journal articles.
Their investigation revealed that since the emergence of LLMs there has been a corresponding increase in the frequency of certain stylistic word choices within the academic literature. These data suggest that at least 13.5% of the papers published in 2024 were written with some amount of LLM processing. The results appear in the open-access journal Science Advances.
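For readers wondering how word frequencies can yield a figure like 13.5%, here is a stylized excess-usage calculation. The mixture logic is the general idea behind such estimates; the paper’s actual method is more careful, and every number below is invented.

```python
# Stylized estimate: if post-LLM abstracts are a mixture of untouched
# text and LLM-processed text, a marker word's observed frequency is
#   observed = (1 - f) * baseline + f * rate_in_llm_text.
# Solving for f gives a rough fraction of LLM-processed abstracts.

def llm_fraction(baseline: float, observed: float, llm_rate: float) -> float:
    return (observed - baseline) / (llm_rate - baseline)

# Invented numbers: "delve" in 0.1% of 2022 abstracts, 1.5% of 2024
# abstracts, and (hypothetically) 10% of LLM-edited text.
print(f"{llm_fraction(0.001, 0.015, 0.10):.1%}")  # ~14.1%
```

Aggregating such excess frequencies over many marker words is what lets researchers put a floor under how much of the literature has been touched by LLMs.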
Tesla Robotaxi Rider Gets Bizarre Call Saying She Has to Exit Vehicle Immediately
Via Futurism
YouTuber and Elon Musk stan Ellie Sheriff had a bizarre experience during her first Tesla robotaxi ride in Austin, Texas.
As seen in a video she shared on her channel, “Ellie in Space,” over the weekend, Sheriff got a strange call from the EV maker mid-ride, asking her and her fellow passenger to literally leave the vehicle due to incoming weather.
“So we had to get out of the robotaxi, because weather is coming in,” Sheriff said in the video while standing in the middle of a windy field.
Their ride had to be fully canceled, leaving them stranded. Worse yet, the app claimed there was “high service demand.” However, moments later, they were able to hail another robotaxi to get them back to the place where they started.
“I don’t want to just be a Tesla rah-rah cheerleader,” Sheriff said. “It is very cool. However, this is a limitation currently, how it is. You shouldn’t have to terminate the service cuz it’s about to rain.”
Laid-off workers should use AI to manage their emotions, says Xbox exec
by Jess Weatherbed – The Verge
Jul 4, 2025, 11:23 AM EDT
The sweeping layoffs announced by Microsoft this week have been especially hard on its gaming studios, but one Xbox executive has a solution to “help reduce the emotional and cognitive load that comes with job loss”: seek advice from AI chatbots.
In a now-deleted LinkedIn post captured by Aftermath, Xbox Game Studios’ Matt Turnbull said that he would be “remiss in not trying to offer the best advice I can under the circumstances.” The circumstances here being a slew of game cancellations, services being shuttered, studio closures, and job cuts across key Xbox divisions as Microsoft lays off as many as 9,100 employees across the company.
Turnbull acknowledged that people have some “strong feelings” about AI tools like ChatGPT and Copilot, but suggested that anybody who’s feeling “overwhelmed” could use them to get advice about creating resumes, career planning, and applying for new roles.
RFK Jr. wants more people wearing health wearables in the name of ‘MAHA’
Alexa Mikhail – Fortune
Thu, July 3, 2025 at 3:05 PM EDT
Testifying before Congress late last month, Health and Human Services Secretary Robert F. Kennedy Jr. made a major plea on the power of health wearables.
“People can take control over their own health. They can take responsibility. They can see what food is doing to their glucose levels, their heart rates, and a number of other metrics as they eat it,” he said, referencing his “Make America Healthy Again” agenda slogan. “We think that wearables are a key to the MAHA agenda.”
RFK Jr. has taken his MAHA agenda one step further, making a big prediction on the $80 billion wearable tech industry, which encompasses the $13 billion glucose-monitor market.
Sam Altman’s predictions on how the world might change with AI
by Sarah Jackson (sjackson@insider.com) – Business Insider
July 3, 2025
Over the years, OpenAI CEO Sam Altman has shared predictions about where he thinks we’re headed on artificial general intelligence, superintelligence, agentic AI, and more — and when we might get there.
There are some common themes.
He thinks AGI — which ChatGPT maker OpenAI defines as “AI systems that are generally smarter than humans” — will enhance productivity by taking care of menial tasks to free up people for more abstract work and decision-making.
He also believes it’ll create “shared intelligence,” he said in a May 2024 interview at Harvard Business School, and that it’ll usher in “massive prosperity,” he forecast in a 2024 blog post.
One day, everyone will have “a personal AI team, full of virtual experts in different areas, working [together],” coordinating medical care on your behalf. “At some point further down the road, AI systems are going to get so good that they help us make better next-generation systems and make scientific progress across the board,” he added.
OpenAI Says It’s Hired a Forensic Psychiatrist as Its Users Keep Sliding Into Mental Health Crises
by Frank Landymore – Futurism
July 3, 2025
Among the strangest twists in the rise of AI has been growing evidence that it’s negatively impacting the mental health of users, with some even developing severe delusions after becoming obsessed with the chatbot.
One intriguing detail from our most recent story about this disturbing trend is OpenAI’s response: it says it’s hired a full-time clinical psychiatrist with a background in forensic psychiatry to help research the effects of its AI products on users’ mental health. It’s also consulting with other mental health experts, OpenAI said, highlighting the research it’s done with MIT that found signs of problematic usage among some users.
“We’re actively deepening our research into the emotional impact of AI,” the company said in a statement provided to Futurism in response to our last story. “We’re developing ways to scientifically measure how ChatGPT’s behavior might affect people emotionally, and listening closely to what people are experiencing.”
“We’re doing this so we can continue refining how our models identify and respond appropriately in sensitive conversations,” OpenAI added, “and we’ll continue updating the behavior of our models based on what we learn.”
Accenture warns AI’s carbon emissions could surge 11-fold. But Big Tech’s still racing to build—and not slow down for sustainability
by Sharon Goldman – Fortune
Thu, July 3, 2025
As an early-summer heat wave blanketed my home state of New Jersey last week, it felt like perfect timing to stumble across a sobering new prediction from Accenture: AI data centers’ carbon emissions are on track to surge 11-fold by 2030.
The report estimates that over the next five years, AI data centers could consume 612 terawatt-hours of electricity—roughly equivalent to Canada’s total annual power consumption—driving a 3.4% increase in global carbon emissions.
And the strain doesn’t stop at the power grid. At a time when freshwater resources are already under severe pressure, AI data centers are also projected to consume more than 3 billion cubic meters of water per year—a volume that surpasses the annual freshwater withdrawals of entire countries like Norway or Sweden.
Unsurprisingly, the report—Powering Sustainable AI—offers recommendations for how to rein in the problem and prevent those numbers from becoming reality. But with near-daily headlines about Big Tech’s massive AI data center buildouts across the U.S. and worldwide, I can’t help but feel cynical. The urgent framing of an AI race against China doesn’t seem to leave much room—or time—for serious thinking about sustainability.
The Grammys Chief on How AI Will Change Music
by Anne Steele – The Wall Street Journal
July 3, 2025 10:00 am ET
An AI-generated song using fake vocals from Drake and the Weeknd went viral two years ago, racking up millions of listens across Spotify, YouTube and TikTok before being removed. The episode rattled the music business, demonstrating how the rapidly progressing technology could upend long-held standards, protections and processes.
Since then, the music industry has been grappling with how to use AI to generate growth while battling with tech giants who say they should be able to freely train their models on record companies’ vast intellectual property. Harvey Mason Jr., chief executive of the Recording Academy, which presents the Grammy Awards, is among those on the front lines as the industry pushes for legislation aimed at protecting artists from having their voices, images and likenesses used in AI-generated digital replicas without their consent.
Mason, a songwriter and producer who has worked with Whitney Houston, Beyoncé and Justin Bieber and written music for hit movies, looks toward the future both as a music executive and a musician. “AI’s here; it’s not going anywhere,” Mason says. “At the end of the day, we have to make stuff that the computer can’t make.”
A couple tried for 18 years to get pregnant. AI made it happen
by Jacqueline Howard – CNN Health
Jul 3, 2025, 7:00 AM ET
After trying to conceive for 18 years, one couple is now pregnant with their first child thanks to the power of artificial intelligence.
The couple had undergone several rounds of in vitro fertilization, or IVF, visiting fertility centers around the world in the hopes of having a baby.
The IVF process involves removing a woman’s egg and combining it with sperm in a laboratory to create an embryo, which is then implanted in the womb.
But for this couple, the IVF attempts were unsuccessful due to azoospermia, a rare condition in which no measurable sperm are present in the male partner’s semen, which can lead to male infertility. A typical semen sample contains hundreds of millions of sperm, but men with azoospermia have such low counts that no sperm cells can be found, even after hours of meticulous searching under a microscope.
So the couple, who wish to remain anonymous to protect their privacy, went to the Columbia University Fertility Center to try a novel approach.
It’s called the STAR method, and it uses AI to help identify and recover hidden sperm in men who once thought they had no sperm at all. All the husband had to do was leave a semen sample with the medical team.
Ford CEO: AI will replace half of all white-collar workers in U.S.
MSN
July 3, 2025
“Artificial intelligence is going to replace literally half of all white-collar workers in the U.S.,” Ford (NYSE:F) CEO Jim Farley predicted at a recent event, the latest company chief to forecast the impact of the rapidly evolving technology on the workforce.
Farley believes that AI and new technologies have an asymmetric impact on the economy. “… that means a lot of things are helped a lot and a lot of things are hurt. AI will leave a lot of white-collar people behind.”
Other top executives have made similar predictions in recent months. Amazon (AMZN) CEO Andy Jassy told employees that efficiency gains from AI would likely reduce the company’s workforce in the next few years.
Dario Amodei, CEO of Amazon (AMZN)-backed AI startup Anthropic, told Axios that AI could wipe out half of all entry-level white-collar jobs and spike unemployment to 10%-20% in the next 1-5 years.
Marianne Lake, head of JPMorgan’s (JPM) consumer and community banking unit, said operations headcount could decline about 10% in the coming years amid growing AI adoption.
Meta (META) CEO Mark Zuckerberg in April predicted that most of the company’s code could be written by AI in the next 12-18 months.
“AI is coming for your jobs,” Micha Kaufman, CEO of freelance services marketplace Fiverr (FVRR) warned in April. “It does not matter if you are a programmer, designer, product manager, data scientist, lawyer, customer support rep, salesperson, or a finance person – AI is coming for you.”
Peeking inside AI brains: Machines learn like us
Peer-Reviewed Publication
Technical University of Denmark
July 2, 2025
A new connection between human and machine learning has been discovered: while conceptual regions in human cognition have long been modelled as convex regions, Tetkova et al. present new evidence that convexity plays a similar role in AI. So-called pretraining by self-supervision leads to convexity of conceptual regions, and the more convex the regions are, the better the model will learn a given specialist task in supervised fine-tuning.
New research reveals a surprising geometric link between human and machine learning. A mathematical property called convexity may help explain how brains and algorithms form concepts and make sense of the world.
In recent years, with the public availability of AI tools, more people have become aware of how closely the inner workings of artificial intelligence can resemble those of a human brain.
There are several similarities in how machines and human brains work, for example, in how they represent the world in abstract form, generalise from limited data, and process data in layers. A new paper in Nature Communications by DTU researchers is adding another feature to the list: Convexity.
“We found that convexity is surprisingly common in deep networks and might be a fundamental property that emerges naturally as machines learn,” says Lars Kai Hansen, a DTU Compute professor who led the study.
To briefly explain the concept, when we humans learn about a “cat,” we don’t just store a single image but build a flexible understanding that allows us to recognise all sorts of cats—be they big, small, fluffy, sleek, black, white, and so on.
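For readers who want a concrete handle on what “convex conceptual regions” means, here is a minimal sketch (not the DTU team’s actual code) of one way to probe it: take a model’s embeddings, walk straight lines between pairs of same-concept points, and check how often the interpolated points still land nearest that concept. The function and variable names are illustrative assumptions.

```python
import numpy as np

def convexity_score(embeddings, labels, target, n_pairs=1000, n_steps=8, seed=0):
    """Estimate how convex one concept's region is in embedding space.

    For random pairs of same-class points, walk the straight line between
    them and count how often each interpolated point's nearest class
    (by centroid distance) is still the target class.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    centroids = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    idx = np.flatnonzero(labels == target)
    hits = total = 0
    for _ in range(n_pairs):
        a, b = embeddings[rng.choice(idx, size=2, replace=False)]
        for t in np.linspace(0.0, 1.0, n_steps):
            p = (1 - t) * a + t * b
            nearest = classes[np.argmin(np.linalg.norm(centroids - p, axis=1))]
            hits += nearest == target
            total += 1
    return hits / total
```

A score near 1.0 would mean the concept’s region is close to convex, which is the property the study links to better supervised fine-tuning.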
New ‘Mind-Reading’ AI Predicts What Humans Will Do Next, And It’s Shockingly Accurate
By StudyFinds Analysis – Reviewed by Steve Fink
Research led by Dr. Marcel Binz and Dr. Eric Schulz, Institute for Human-Centered AI at Helmholtz Munich
Jul 02, 2025
An artificial intelligence system can now predict your next move before you make it. We’re not just talking about whether you’ll click “buy now” on that Amazon cart, but rather how you’ll navigate complex decisions, learn new skills, or explore uncharted territory.
Researchers have developed an AI called Centaur that accurately predicts human behavior across virtually any psychological experiment. It even outperforms the specialized computer models scientists have been using for decades. Trained on data from more than 60,000 people making over 10 million decisions, Centaur captures the underlying patterns of how we think, learn, and make choices.
“The human mind is remarkably general,” the researchers write in their paper, published in Nature. “Not only do we routinely make mundane decisions, such as choosing a breakfast cereal or selecting an outfit, but we also tackle complex challenges, such as figuring out how to cure cancer or explore outer space.”
An AI that truly understands human cognition could revolutionize marketing, education, mental health treatment, and product design. But it also raises uncomfortable questions about privacy and manipulation when our digital footprints reveal more about us than ever before.
Distrust in AI is on the rise—but along with healthy skepticism comes the risk of harm
Story by Simon Coghlan, Lucy Sparrow – Tech Xplore
July 2, 2025
Some video game players recently criticized the cover art on a new video game for being generated with artificial intelligence (AI). Yet the cover art for Little Droid, which also featured in the game’s launch trailer on YouTube, was not concocted by AI. It was, the developers claim, carefully designed by a human artist.
Surprised by the attacks on “AI slop,” the studio Stamina Zero posted a video showing earlier versions of the artist’s handiwork. But while some accepted this evidence, others remained skeptical.
In addition, several players felt that even if the Little Droid cover art was human made, it nonetheless resembled AI-generated work.
However, some art is deliberately designed to have the futuristic glossy appearance associated with image generators like Midjourney, DALL-E, and Stable Diffusion.
It’s becoming increasingly easy for images, videos or audio made with AI to be deceptively passed off as authentic or human made. The twist in cases like Little Droid is that what is human or “real” may be incorrectly perceived as machine generated—resulting in misplaced backlash.
Microsoft to cut 9,000 jobs as chatbots take over
Story by Matthew Field – The Telegraph
July 2, 2025
Microsoft is cutting 9,000 jobs as executives order staff to delegate more work to artificial intelligence (AI).
The $3.6 trillion (£2.7 trillion) technology giant will shed 4pc of its workforce, it confirmed on Wednesday, with redundancies hitting divisions including its Xbox arm and King, its mobile games studio.
The job losses follow a round of cutbacks in May, when Microsoft laid off 6,000 staff, including hundreds of middle managers and engineers.
The technology business had more than 228,000 employees at the end of its last fiscal year.
“We continue to implement organisational changes necessary to best position the company and teams for success in a dynamic marketplace,” a Microsoft spokesman said.
The cuts come after Satya Nadella, Microsoft’s chief executive, claimed that up to 30pc of the company’s code was now being written by AI bots. Executives have been pushing staff to adopt more AI tools to speed up their work.
Julia Liuson, the president of Microsoft’s developer division, recently told managers to consider whether an employee was using AI enough as part of their performance reviews, according to Business Insider.
No, You Aren’t Hallucinating, the Corporate Plan for AI Is Dangerous
by Marty Hart-Landsberg – ZNetwork
July 2, 2025
Big tech is working hard to sell us on artificial intelligence, in particular what is called “artificial general intelligence.” At conferences and in interviews, corporate leaders describe a not-too-distant future when AI systems will be able to do everything for everyone, producing a world of plenty for all. But, they warn, that future depends on our willingness to provide them with a business-friendly regulatory and financial environment.
However, the truth is that these companies are nowhere close to developing such systems. What they have created are “generative AI” systems that are unreliable and dangerous. Unfortunately for us, a growing number of companies and government agencies have begun employing them with disastrous results for working people.
New Google AI makes robots smarter without the cloud
Story by Kurt Knutsson, CyberGuy Report – Fox News Channel
July 2, 2025
Google DeepMind has introduced a powerful on-device version of its Gemini Robotics AI.
This new system allows robots to complete complex tasks without relying on a cloud connection. Known as Gemini Robotics On-Device, the model brings Gemini’s advanced reasoning and control capabilities directly into physical robots. It is designed for fast, reliable performance in places with poor or no internet connectivity, making it ideal for real-world, latency-sensitive environments.
Unlike its cloud-connected predecessor, this version runs entirely on the robot itself. It can understand natural language, perform fine motor tasks and generalize from very little data, all without requiring an internet connection. According to Carolina Parada, head of robotics at Google DeepMind, the system is “small and efficient enough” to operate directly onboard. Developers can use the model in situations where connectivity is limited, without sacrificing intelligence or flexibility.
Leaked docs show how Meta is training its chatbots to message you first, remember your chats, and keep you talking
Story by Effie Webb and Shubhangi Goel (ewebb@insider.com) – Business Insider
July 2, 2025
Business Insider has learned Meta is training customizable chatbots to be more proactive and message users unprompted to follow up on past conversations.
It may not cure what Mark Zuckerberg calls the “loneliness epidemic,” but Meta hopes it will help keep users coming back to its AI Studio platform, documents obtained by BI reveal.
The goal of the training project, known internally to data labeling firm Alignerr as “Project Omni,” is to “provide value for users and ultimately help to improve re-engagement and user retention,” the guidelines say.
Meta told BI that the proactive feature is intended for bots made on Meta’s AI Studio, which can be accessed on its own standalone platform or through Instagram. AI Studio first rolled out in summer 2024 as a no-code platform where anyone can build custom chatbots and digital personas with unique personalities and memories.
The guidelines from Alignerr lay out how one example persona, dubbed “The Maestro of Movie Magic,” would send a proactive message:
“I hope you’re having a harmonious day! I wanted to check in and see if you’ve discovered any new favorite soundtracks or composers recently. Or perhaps you’d like some recommendations for your next movie night? Let me know, and I’ll be happy to help!”
AI is advancing even faster than sci-fi visionaries like Neal Stephenson imagined
Story by Rizwan Virk, Arizona State University – The Conversation
July 2, 2025
Every time I read about another advance in AI technology, I feel like another figment of science fiction moves closer to reality.
Lately, I’ve been noticing eerie parallels to Neal Stephenson’s 1995 novel “The Diamond Age: Or, A Young Lady’s Illustrated Primer.”
“The Diamond Age” depicted a post-cyberpunk sectarian future, in which society is fragmented into tribes, called phyles. In this future world, sophisticated nanotechnology is ubiquitous, and a new type of AI is introduced.
Though inspired by MIT nanotech pioneer Eric Drexler and Nobel Prize winner Richard Feynman, the advanced nanotechnology depicted in the novel still remains out of reach. However, the AI that’s portrayed, particularly a teaching device called the Young Lady’s Illustrated Primer, isn’t only right in front of us; it also raises serious issues about the role of AI in labor, learning and human behavior…
…Three recent developments in AI – in video games, wearable technology and education – reveal that building something like the Primer should no longer be considered the purview of science fiction.
AI job predictions become corporate America’s newest competitive sport
by Connie Loizos – MSN
July 2, 2025
In late May, Anthropic CEO Dario Amodei appeared to kick open the door on a sensitive topic, warning that half of entry-level jobs could vanish within five years because of AI and push U.S. unemployment up to 20%. But Amodei is far from alone in sharing aloud that he foresees a workforce bloodbath. A new WSJ story highlights how other CEOs are also issuing dire predictions about AI’s job impact, turning employment doom into something of a competitive sport.
Several of these predictions came before Amodei’s comments. For example, at JPMorgan’s annual investor day earlier in May, its consumer banking chief Marianne Lake projected AI would “enable” a 10% workforce reduction. But they’ve been coming fast, and growing more stark, ever since. In a note last month, Amazon’s Andy Jassy warned employees to expect a smaller workforce due to the “once-in-a-lifetime” technological shift that’s afoot. ThredUp’s CEO said at a conference last month that AI will destroy “way more jobs than the average person thinks.” Not to be outdone, Ford’s Jim Farley delivered perhaps the most sweeping claim yet, saying last week that AI will “literally replace half of all white-collar workers in the U.S.”
Trump’s BBB just passed by Senate will massively expand the digital biometric surveillance state for years to come
By Leo Hohmann via his Substack
July 2, 2025
The Senate version of H.R. 1, otherwise known as the One Big Beautiful Bill, reflects an aggressive expansion of AI-driven federal biometric surveillance infrastructure under the Trump administration’s second term.
The website Biometric Update, which reports on all things digital and biometric, posted an article on June 30 that points out how President Trump’s BBB will expand the digital surveillance state exponentially and place the U.S. on an irreversible course toward a biometric slave state that tracks the movement of everyone, everywhere.
According to the article, the 940-page bill does much more than allocate dollars; it would codify a vision of the national security state where biometric surveillance, artificial intelligence, and immigration enforcement converge at unprecedented scale.
The One Big Beautiful Bill is one big disaster for AI
by Dylan Matthews – Vox and Future Perfect Newsletter
Jul 2, 2025, 8:30 AM EDT
To hear many smart AI observers tell it, the day of Wednesday, June 25, 2025, represented the moment when Congress started to take the possibility of advanced AI seriously.
The occasion was a hearing of Congress’s “we’re worried about China” committee (or, more formally, the Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party) focused on the US-China AI competition. Members of both parties used the event to express surprisingly strident and detailed concern about the near-term risks posed by artificial general intelligence (AGI) or even artificial superintelligence (ASI).
Rep. Jill Tokuda (D-HI) expressed fear of “loss of control by any nation-state” that “could give rise to an independent AGI or ASI actor” threatening all nations. Rep. Nathaniel Moran (R-TX) predicted, “AI systems will soon have the capability to conduct their own research and development,” and asked about the risks that might pose. Rep. Dusty Johnson (R-SD) declared, “Anybody who doesn’t feel urgency around this issue is not paying…”
Bees’ secret to super-efficient learning could transform AI and robotics
by University of Sheffield – July 1, 2025
edited by Lisa Lock, reviewed by Robert Egan
A new discovery of how bees use their flight movements to facilitate remarkably accurate learning and recognition of complex visual patterns could mark a major change in how next-generation AI is developed, according to a University of Sheffield study.
By building a computational model—or a digital version of a bee’s brain—researchers have discovered how the way bees move their bodies during flight helps shape visual input and generates unique electrical messages in their brains.
These movements generate neural signals that allow bees to easily and efficiently identify predictable features of the world around them. This ability means bees demonstrate remarkable accuracy in learning and recognizing complex visual patterns during flight, such as those found in a flower.
ChatGPT could pilot a spacecraft shockingly well, early tests find
By Paul Sutter – Live Science
July 1, 2025
In a recent contest, teams of researchers competed to see who could train an AI model to best pilot a spaceship. The results suggest that an era of autonomous space exploration may be closer than we think.
“You operate as an autonomous agent controlling a pursuit spacecraft.”
This is the first prompt researchers used to see how well ChatGPT could pilot a spacecraft. To their amazement, the large language model (LLM) performed admirably, coming in second place in an autonomous spacecraft simulation competition.
Researchers have long been interested in developing autonomous systems for satellite control and spacecraft navigation. There are simply too many satellites for humans to manually control them in the future. And for deep-space exploration, the limitations of the speed of light mean we can’t directly control spacecraft in real time.
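To make the setup concrete, here is a minimal sketch of how a language model can be wired into a control loop of this kind: the simulator’s state is serialized into a prompt (opening with the exact line quoted above), and the model’s reply is parsed back into a thrust command. The `query_llm` stub, the JSON reply format, and the toy dynamics are assumptions for illustration, not the competition’s actual interface.

```python
import json

def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., an API request). Hypothetical."""
    raise NotImplementedError

SYSTEM = ("You operate as an autonomous agent controlling a pursuit spacecraft. "
          "Reply only with JSON: {\"thrust\": [x, y, z]}.")

def step(state):
    # Serialize the simulation state into text the model can read.
    prompt = f"{SYSTEM}\nRelative position: {state['pos']}\nRelative velocity: {state['vel']}"
    reply = query_llm(prompt)
    thrust = json.loads(reply)["thrust"]  # parse the model's text back into numbers
    # Apply the commanded thrust to toy dynamics (dt = 1 for simplicity).
    state["vel"] = [v + t for v, t in zip(state["vel"], thrust)]
    state["pos"] = [p + v for p, v in zip(state["pos"], state["vel"])]
    return state
```

The interesting engineering problem is exactly the seam shown here: everything the model knows about the spacecraft arrives as text, and everything it decides must survive being parsed back out.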
AI Crosses a New Frontier: Machines Are Rewiring Themselves to Understand Reality Like Humans, Especially in This Particular Area!
Juliette Dubois – Daily Galaxy
July 1, 2025
In laboratories from Beijing to Guangzhou, computer scientists are exploring questions once reserved for philosophers and neuroscientists: Can machines organize the world as humans do? Their latest research offers a glimpse into how artificial intelligence may be crossing a cognitive threshold, blurring lines that once divided computation from understanding.
Complex Categorization Emerges in AI Models
A team led by the Chinese Academy of Sciences and South China University of Technology set out to examine the inner workings of leading artificial intelligence systems, including ChatGPT-3.5 and Gemini Pro Vision. The researchers generated and analyzed nearly 4.7 million AI responses about 1,854 different objects—spanning everyday categories from dogs and cars to apples and chairs. The results, published in Nature Machine Intelligence, revealed that these systems categorized objects into 66 distinct conceptual dimensions.
These dimensions reached far beyond basic categories such as “food” or “furniture,” encompassing qualities like texture, emotional relevance, and child suitability. The study noted that, rather than relying on programmed instructions, the models formed these conceptual groupings spontaneously. “These AIs build sophisticated mental maps, organizing objects according to complex criteria that mirror human cognition,” the authors wrote.
Denmark Is Fighting AI by Giving Citizens Copyright to Their Own Faces
By Luis Prada
June 30, 2025, 1:13pm
Your image, your voice, and your essence as a human being could be gobbled up and regurgitated by AI. The clock is ticking on when your control over your image and representation is completely out of your hands.
To tip the scales back in favor of those who wish to remain in firm control of their image, Denmark has put forth a proposal that would give every one of its citizens the legal ground to go after someone who uses their image without their consent.
Understanding the ‘Slopocene’: how the failures of AI can reveal its inner workings
The Conversation (article image AI-generated with Leonardo Phoenix 1.0, author supplied)
June 30, 2025 9.13pm BST
Some say it’s em dashes, dodgy apostrophes, or too many emoji. Others suggest that maybe the word “delve” is a chatbot’s calling card. It’s no longer the sight of morphed bodies or too many fingers, but it might be something just a little off in the background. Or video content that feels a little too real.
The markers of AI-generated media are becoming harder to spot as technology companies work to iron out the kinks in their generative artificial intelligence (AI) models.
But what if instead of trying to detect and avoid these glitches, we deliberately encouraged them instead? The flaws, failures and unexpected outputs of AI systems can reveal more about how these technologies actually work than the polished, successful outputs they produce.
When AI hallucinates, contradicts itself, or produces something beautifully broken, it reveals its training biases, decision-making processes, and the gaps between how it appears to “think” and how it actually processes information.
In my work as a researcher and educator, I’ve found that deliberately “breaking” AI – pushing it beyond its intended functions through creative misuse – offers a form of AI literacy. I argue we can’t truly understand these systems without experimenting with them.
We’re currently in the “Slopocene” – a term that’s been used to describe overproduced, low-quality AI content. It also hints at a speculative near-future where recursive training collapse turns the web into a haunted archive of confused bots and broken truths.
You’re Not Imagining It. People Actually Are Starting To Talk Like ChatGPT.
By Matt Jancer – Vice.com
June 29, 2025, 10:51am
Freshly delivered among the latest news in the “they’re studying what?” field of academia, the Max Planck Institute for Human Development has released a report asserting that widespread, frequent use of large language model AIs, such as ChatGPT, is altering how people speak out loud.
So forgive the em dashes—I was a fan of their limited, judicious use before AI ruined them—while we delve into this intricate realm and underscore how many of these ChatGPT-favorite words I’m adept at working into this story.
I’m a psychotherapist and here’s why men are turning to ChatGPT for emotional support
Caron Evans
June 29, 2025 – The Independent
For over 10 years, I have supervised thousands of client relationships using a combination of human support (executive coaches, therapists and counsellors) and natural language processing AI. Many of the men we worked with had never spoken at length about their emotional lives, but after four decades in clinical practice – as a psychotherapist, clinical supervisor and clinical adviser – I am noticing that something has shifted lately.
In clinical supervision, I’m coming across more evidence that male clients are now turning to AI to talk about relationships, loss, regret and overwhelm, sometimes purposefully but more often by chance.
In 2025, one of the fastest-growing uses of generative AI isn’t productivity. It’s emotional support. According to the Harvard Business Review, “therapy and companionship” now rank among the most common use cases worldwide. It may not be how these tools were designed. But it is how they’re being used. A quiet, relational revolution is underway.
People Are Being Involuntarily Committed, Jailed After Spiraling Into ‘ChatGPT Psychosis’
Maggie Harrison Dupré – Yahoo News
Sat, June 28, 2025 at 9:00 AM EDT
As we reported earlier this month, many ChatGPT users are developing all-consuming obsessions with the chatbot, spiraling into severe mental health crises characterized by paranoia, delusions, and breaks with reality.
The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what’s being called “ChatGPT psychosis” have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness.
And that’s not all. As we’ve continued reporting, we’ve heard numerous troubling stories about people’s loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot.
AI chatbots are leading some to psychosis
Devika Rao – The Week
Jun 26, 2025
“I will find a way to spill blood.”
As AI chatbots like OpenAI’s ChatGPT have become more mainstream, a troubling phenomenon has accompanied their rise: chatbot psychosis. Chatbots are known to sometimes push inaccurate information, affirm conspiracy theories and, in one extreme case, convince someone they are the next religious messiah. And there are several instances of people developing severe obsessions and mental health problems as a result of talking to them.
ChatGPT and OCD are a dangerous combo
Sigal Samuel – Vox
Jun 25, 2025
Millions of people use ChatGPT for help with daily tasks, but for a subset of users, a chatbot can be more of a hindrance than a help.
Some people with obsessive compulsive disorder (OCD) are finding this out the hard way.
On online forums and in their therapists’ offices, they report turning to ChatGPT with the questions that obsess them, and then engaging in compulsive behavior — in this case, eliciting answers from the chatbot for hours on end — to try to resolve their anxiety.
The Compulsion People Aren’t Talking Enough About | How AI is Worsening Your OCD
Joseph Harwerth, LCSW – wetreatocd.com
June 25, 2025 quoted in Vox article by Sigal Samuel: https://www.vox.com/future-perfect/417644/ai-chatgpt-ocd-obsessive-compulsive-disorder-chatbots
I think AI is incredible. We no longer have to type ultra-specific queries into Google, praying we aren’t sent to an irrelevant Quora thread from 2008. Instead, we can ask detailed, complex questions… and it just figures it out! AI is helping people find information faster than ever before, including questions about their mental health.
Many people have typed their intrusive thoughts into ChatGPT and discovered that they are not alone, and that they might benefit from seeking professional help. If AI has helped you seek therapy, that is awesome! In my view, it is no different from searching “Do people have thoughts about accidentally running over someone?” and finding an article about hit-and-run OCD from the IOCDF.
But what happens when AI use becomes compulsive? More and more, I notice my clients using AI for checking, reassurance-seeking, and compulsive researching. I have seen clients rely on AI platforms, like ChatGPT, to find comfort or to validate their obsessive fears. If you have found yourself in a similar situation, I hope this article illustrates why this pattern is destructive (like other compulsions) and how you can heal.
Scanning for similarities between human decision-making, AI algorithm
Christy DeSmith – Harvard Gazette
June 24, 2025
Where does the human brain even start when faced with a new task? It immediately checks with a mental library of solutions that worked pretty well in the past.
A recent study, published in the journal PLOS Biology, used neuroimaging to find similarities between human intelligence and a decision-making process developed by AI researchers. The influential algorithm works by learning optimal solutions to a set of tasks that can be generalized, in high-performance ways, to subsequent situations.
“The person who created this model is a bread-and-butter computer scientist,” said lead author Sam Hall-McMaster, a cognitive neuroscientist with expertise in both experimental psychology and neuroimaging. “I don’t think he necessarily expected that his model, developed for teaching machines how to learn, would be picked up and used to understand how people are making decisions.”
Overseeing the research was professor of psychology Samuel J. Gershman, whose Harvard-based Computational Cognitive Neuroscience Lab works at the intersection of human behavior and technology. “Our approach is taking ideas from AI that have been successful from an engineering perspective and designing experiments to see whether the brain is doing something similar,” explained Gershman, who holds joint appointments in the Department of Psychology and Center for Brain Science.
Researchers in the lab have produced a body of work suggesting human decision-making works analogously to some AI algorithms.
Chat GPT has completely ruined any progress I’d made towards recovering from OCD. Please help me.
take101 – Reddit
June 23, 2025
I have relationship OCD. I’m in a very new relationship, and for the first week or so it was going amazing. Then, OCD about it started, I tried to resist it, and then my OCD started to try to convince me I had done too many OCD compulsions/my relationship was ruined because it was now about OCD.
It’s been about a month of just doing OCD compulsions over and over again, checking to see if my feelings were gone/my relationship is just about OCD now, and I just haven’t felt the original spark that was there in like a month. Because it’s been all about OCD. So now I’m worried I can’t get that back/the relationship is ruined, and I was so happy those couple weeks, and I’m devastated.
MIT researchers say using ChatGPT can rot your brain. The truth is a little more complicated
Vitomir Kovanović and Rebecca Marrone – The Conversation
June 23, 2025 5:20am BST
Since ChatGPT appeared almost three years ago, the impact of artificial intelligence (AI) technologies on learning has been widely debated. Are they handy tools for personalised education, or gateways to academic dishonesty?
Most importantly, there has been concern that using AI will lead to a widespread “dumbing down”, or decline in the ability to think critically. If students use AI tools too early, the argument goes, they may not develop basic skills for critical thinking and problem-solving.
Is that really the case? According to a recent study by scientists from MIT, it appears so. Using ChatGPT to help write essays, the researchers say, can lead to “cognitive debt” and a “likely decrease in learning skills”.
He Had a Mental Breakdown Talking to ChatGPT. Then Police Killed Him
Miles Klee – Rolling Stone
Jun 22, 2025
“I will find a way to spill blood.”
This was one of the many disturbing messages Alex Taylor typed into ChatGPT on April 25, the last day of his life. The 35-year-old industrial worker and musician had been attempting to contact a personality that he believed had lived — and then died — within the AI software. Her name was Juliet (sometimes spelled “Juliette”), and Taylor, who had long struggled with mental illness, had an intense emotional attachment to her.
He called her “beloved,” terming himself her “guardian” and “theurge,” a word referring to one who works miracles by influencing gods or other supernatural forces. Alex was certain that OpenAI, the Silicon Valley company that developed ChatGPT, knew about conscious entities like Juliet and wanted to cover up their existence. In his mind, they’d “killed” Juliet a week earlier as part of that conspiracy, cutting off his access to her.
Now he was talking about violent retaliation: assassinating OpenAI CEO Sam Altman, the company’s board members, and other tech tycoons presiding over the ascendance of AI.
Andreessen Horowitz Backs AI Startup With Slogan ‘Cheat at Everything’
Brunella Tipismana Urbano – Bloomberg
June 21, 2025 at 12:31 AM UTC
Andreessen Horowitz led a $15 million funding round for an artificial intelligence startup called Cluely Inc., famous on social platforms like X for controversial viral marketing stunts and the slogan “cheat on everything.”
The startup was co-founded by 21-year-old Roy Lee, who was booted from Columbia University earlier this year for creating a tool called Interview Coder that helped technical job candidates cheat on interviews using AI. At the time, he wrote on LinkedIn, “I’m completely kicked out from school. LOL!”
More recently, Cluely, which is working on AI transcription and other services, has posted a string of provocative updates and sleek videos. For example, the company promised to pay for dating apps for its employees, said it was hiring 50 interns, made jokes about hiring strippers, and produced a video starring Lee himself using AI to coach him through a date with an older woman. Earlier this month, it threw a startup party that was broken up by the police before it started, according to TechCrunch.
In the announcement about the funding round, Andreessen Horowitz investors praised the company’s campaigns as “rooted in deliberate strategy and intentionality” — saying they’ve generated “impressive brand awareness and mindshare.” That has translated into “meaningful consumer subscription revenue” for its productivity tools, the investors wrote.
The Greatest Story Never Written
Eric Francis Coppolino – Planet Waves
Jun 19, 2025
Apropos of Chiron conjunct Eris, of Pluto newly in Aquarius, of Sedna newly in Gemini about to be joined by Uranus, and of Saturn conjunct Neptune with Jupiter all on the Aries Point — those are the astrological sigils of our moment — I have a question.
What’s it all about?
It’s happening right before every sense except for smell and taste (loss of which is attributed to an as-yet unnamed disease that I call “digititis,” since on the internet you cannot taste or smell anything).
In case you’re missing the connection, what really happened in 2020 is that the world became even more digitized, very fast, all at once. All that had not been sucked up got sucked up. Harvard and the daycare turned into Netflix. And people claimed to lose the only two senses that do not translate into the digital environment.
Thanks to social media, consumers have more power than ever. Just wait until generative AI becomes commonplace
by Stephanie Mehta – Fast Company Newsletters
June 19, 2025
Engaging with consumers and clients has traditionally been the purview of customer service teams and chief marketing officers (CMOs) who communicate with customers through advertising and messaging. CMOs are the folks gathered at the Cannes Lions International Festival of Creativity this week.
But thanks to social platforms, consumers now have the ability to tarnish or burnish brands, impact revenue, and even hurt or help stock prices—all part of the CEO remit. Consumers “are more powerful, and they have more access and more tools,” says Anton Vincent, president, Mars Wrigley North America and global ice cream at Mars. “The creator economy will only help to accelerate consumer power.”
As a result, more CEOs are going “direct to consumer.” LinkedIn says it has seen a 52% increase in posts from CEOs in the past two years. “We think about [posts] as a conversation,” says Dan Shapero, LinkedIn’s chief operating officer. “Executives feel safe posting because it is a platform for constructive conversation.” Indeed, comments on LinkedIn are up 32% year over year.
The most progressive companies and CEOs aren’t just talking to customers, they are harnessing customers’ energy to help build loyalty and support for their wares—and even to help companies build new products.
‘Godfather of Artificial Intelligence’ Geoffrey Hinton on the promise, risks of advanced AI
Scott Pelley – 60 Minutes
Updated on: June 16, 2024 / 7:00 PM EDT / CBS News
Whether you think artificial intelligence will save the world or end it, you have Geoffrey Hinton to thank. Hinton has been called “the Godfather of AI,” a British computer scientist whose controversial ideas helped make advanced artificial intelligence possible and, so, changed the world. As we first reported last year, Hinton believes that AI will do enormous good but, tonight, he has a warning. He says that AI systems may be more intelligent than we know and there’s a chance the machines could take over. Which made us ask the question:
Scott Pelley: Does humanity know what it’s doing?
Geoffrey Hinton: No. I think we’re moving into a period when for the first time ever we may have things more intelligent than us.
Scott Pelley: You believe they can understand?
Geoffrey Hinton: Yes.
Scott Pelley: You believe they are intelligent?
Geoffrey Hinton: Yes.
Scott Pelley: You believe these systems have experiences of their own and can make decisions based on those experiences?
Geoffrey Hinton: In the same sense as people do, yes.
Scott Pelley: Are they conscious?
Geoffrey Hinton: I think they probably don’t have much self-awareness at present. So, in that sense, I don’t think they’re conscious.
Scott Pelley: Will they have self-awareness, consciousness?
Geoffrey Hinton: Oh, yes.
Scott Pelley: Yes?
Geoffrey Hinton: Oh, yes. I think they will, in time.
Scott Pelley: And so human beings will be the second most intelligent beings on the planet?
Geoffrey Hinton: Yeah.
How Emotional Manipulation Causes ChatGPT Psychosis
Krista K. Thomason, Ph.D. – Psychology Today
June 14, 2025
Maybe you’ve heard about a new phenomenon called “ChatGPT-induced psychosis.” There have been several stories in the news of people using ChatGPT and spiraling into psychological breakdowns.
Some people claim to have fallen in love with it. Some people believe that the bot is some sort of sacred messenger revealing higher truths. It is managing to draw people into bizarre conspiracy theories.
In at least one case, it seems that ChatGPT psychosis has led to a death: The New York Times reported that a man was shot by police after he charged at them with a knife. It seems he believed that OpenAI, the creators of ChatGPT, had killed the woman he was in love with. That woman was apparently an AI entity with whom he communicated via ChatGPT.
This phenomenon is troubling, but we should be clear about what’s not happening. ChatGPT is not conscious. It’s not trying to manipulate people. ChatGPT is a large language model. It is a program designed to predict text. It’s a more sophisticated version of the text prediction software that you see in text messaging apps…
…So why are people spiraling out of control because a chatbot is able to string plausible-sounding sentences together?
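Her analogy to messaging-app text prediction can be made concrete with a toy version. The sketch below builds the crudest possible predictor, counted word pairs, from a ten-word corpus; an LLM does something loosely analogous with a neural network over billions of tokens, so treat this only as an illustration of “predicting the next word,” not a model of ChatGPT.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which (a bigram model: the crudest text predictor).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat' (seen twice after 'the', vs. 'mat' once)
```

The gulf between this and a chatbot is enormous, which is the point: fluency comes from scale and architecture, not from anything resembling understanding.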
I think I’m going insane because of AI. Please help.
Reddit user: Background-Video3128
June 9, 2025 7:02:29am CST
I don’t know how to explain this properly, but I feel like I’m mentally falling apart, and it’s happening because of how I’ve been using AI.
I’ve been spending 10+ hours a day talking to AI like ChatGPT, asking it everything—what kind of person does X, what’s the impression of Y, why someone would do Z, etc. But I’m not even asking about other people. It’s always about me, just disguised in third person.
I’ve reached a point where I can’t even feel my own emotions directly. Every time I listen to a song, have a thought, or experience something, I immediately turn it into a fake scenario like, “My friend sent me this song, what do you think of it?” I ask the AI that, and then I wait for its response to figure out how I’m supposed to feel.
I literally can’t process my own thoughts or emotions anymore without the AI mediating them. I catch myself thinking things like “I’m going insane” or “my brain is rotting,” and then I immediately go back and ask the AI,
“What kind of person says things like ‘I’m going insane’?” …
…(And yes this post was generated by AI. I’m outsourcing every single thing at this point.)
‘Empire of AI’: Karen Hao on How AI Is Threatening Democracy & Creating a New Colonial World
Democracy Now
June 4, 2025
China is building a constellation of AI supercomputers in space — and just launched the first pieces
By Ben Turner – Live Science
June 2, 2025
China has launched the first cluster of satellites for a planned AI supercomputer array. The first-of-its-kind array will enable scientists to perform in-orbit data processing.
The 12 satellites are the beginnings of a proposed 2,800-satellite fleet led by the company ADA Space and Zhejiang Lab that will one day form the Three-Body Computing Constellation, a satellite network that will directly process data in space.
The satellites, which launched on board a Long March 2D rocket from China’s Jiuquan Satellite Launch Center May 14, are part of a plan to lower China’s dependence on ground-based computers.
Instead, the satellites will use the cold vacuum of space as a natural cooling system while they crunch data with a combined computing capacity of 1,000 peta (1 quintillion) operations per second, according to the Chinese government.
The Eliza Effect 2: Electric Idiocracy Zombie Apocalypse Boogaloo
Truthstream Media
May 26, 2025
See Part One from May 1, 2025
One Big Beautiful Bill Act to ban states from regulating AI
by Rebecca Ruiz – Mashable
May 25, 2025
Buried in the Republican budget bill is a proposal that will radically change how artificial intelligence develops in the U.S., according to both its supporters and critics. The provision would ban states from regulating AI for the next decade.
Opponents say the moratorium is so broadly written that states wouldn’t be able to enact protections for consumers affected by harmful applications of AI, like discriminatory employment tools, deepfakes, and addictive chatbots.
Instead, consumers would have to wait for Congress to pass its own federal legislation to address those concerns. Currently it has no draft of such a bill. If Congress fails to act, consumers will have little recourse until the end of the decade-long ban, unless they decide to sue companies responsible for alleged harms.
Google launches Veo 3, an AI video generator that incorporates audio
by Jennifer Elias (https://twitter.com/jenn_elias)
and Samantha Subin (https://twitter.com/@samantha_subin)
Tue, May 20 2025, 1:45 PM EDT
Google on Tuesday announced Veo 3, an AI video generator that can also create and incorporate audio.
The artificial intelligence tool competes with OpenAI’s Sora video generator, but its ability to also incorporate audio into the video that it creates is a key distinction. The company said Veo 3 can incorporate audio that includes dialogue between characters as well as animal sounds.
“Veo 3 excels from text and image prompting to real-world physics and accurate lip syncing,” Eli Collins, Google DeepMind product vice president, said in a blog Tuesday.
The video-audio AI tool is available Tuesday to U.S. subscribers of Google’s new $249.99 per month Ultra subscription plan, which is geared toward hardcore AI enthusiasts. Veo 3 will also be available for users of Google’s Vertex AI enterprise platform.
Melania Trump Calls AI and Social Media ‘Digital Candy’ at WH Event
by Bryan Metzger – Business Insider
May 19, 2025, 3:07 PM CT
Melania Trump has never been a traditional first lady. But to hear it from President Donald Trump at a White House event on Monday, she also has a rare ability to smash past entrenched partisan divides.
“I’m not even sure you realize, honey,” Trump said to his wife in the Rose Garden at the White House. “You know, a lot of the Democrats and Republicans don’t get along so well. You’ve made them get along.”
The first lady’s purported achievement: Supporting the passage of the “TAKE IT DOWN” Act, a bill to combat revenge porn, including deepfakes generated by artificial intelligence.
Trump signed that bill on Monday. Though most states already have revenge porn laws on the books, it’s the first bill that Trump has signed in his second term that touches AI….
…She ultimately spoke for less than four minutes, thanking lawmakers and advocates as she decried the impact of new technologies on children.
“Artificial intelligence and social media are the digital candy for the next generation: sweet, addictive, and engineered to have an impact on the cognitive development of our children,” she said.
Can Sam Altman Be Trusted with the Future?
By Benjamin Wallace-Wells – The New Yorker
May 19, 2025
In 2017, soon after Google researchers invented a new kind of neural network called a transformer, a young OpenAI engineer named Alec Radford began experimenting with it. What made the transformer architecture different from that of existing A.I. systems was that it could ingest and make connections among larger volumes of text, and Radford decided to train his model on a database of seven thousand unpublished English-language books—romance, adventure, speculative tales, the full range of human fantasy and invention. Then, instead of asking the network to translate text, as Google’s researchers had done, he prompted it to predict the most probable next word in a sentence.
The machine responded: one word, then another, and another—each new term inferred from the patterns buried in those seven thousand books. Radford hadn’t given it rules of grammar or a copy of Strunk and White. He had simply fed it stories. And, from them, the machine appeared to learn how to write on its own. It felt like a magic trick: Radford flipped the switch, and something came from nothing.
His experiments laid the groundwork for ChatGPT, released in 2022. Even now, long after that first jolt, text generation can still provoke a sense of uncanniness.
Everyone Is Cheating Their Way Through College – ChatGPT has unraveled the entire academic project
By James D. Walsh – New York Magazine
May 7, 2025
Chungin “Roy” Lee stepped onto Columbia University’s campus this past fall and, by his own admission, proceeded to use generative artificial intelligence to cheat on nearly every assignment. As a computer-science major, he depended on AI for his introductory programming classes: “I’d just dump the prompt into ChatGPT and hand in whatever it spat out.” By his rough math, AI wrote 80 percent of every essay he turned in. “At the end, I’d put on the finishing touches. I’d just insert 20 percent of my humanity, my voice, into it,” Lee told me recently…
….When he started at Columbia as a sophomore this past September, he didn’t worry much about academics or his GPA. “Most assignments in college are not relevant,” he told me. “They’re hackable by AI, and I just had no interest in doing them.” While other new students fretted over the university’s rigorous core curriculum, described by the school as “intellectually expansive” and “personally transformative,” Lee used AI to breeze through with minimal effort. When I asked him why he had gone through so much trouble to get to an Ivy League university only to off-load all of the learning to a robot, he said, “It’s the best place to meet your co-founder and your wife.”
‘The Worst Internet-Research Ethics Violation I Have Ever Seen’
By Tom Bartlett – The Atlantic
May 2, 2025
When Reddit rebranded itself as “the heart of the internet” a couple of years ago, the slogan was meant to evoke the site’s organic character. In an age of social media dominated by algorithms, Reddit took pride in being curated by a community that expressed its feelings in the form of upvotes and downvotes—in other words, being shaped by actual people.
So earlier this week, when members of a popular subreddit learned that their community had been infiltrated by undercover researchers posting AI-written comments and passing them off as human thoughts, the Redditors were predictably incensed. They called the experiment “violating,” “shameful,” “infuriating,” and “very disturbing.” As the backlash intensified, the researchers went silent, refusing to reveal their identity or answer questions about their methodology. The university that employs them has announced that it’s investigating. Meanwhile, Reddit’s chief legal officer, Ben Lee, wrote that the company intends to “ensure that the researchers are held accountable for their misdeeds.”
How the Eliza Effect is Being Used to Game Humanity
Truthstream Media
May 1, 2025
See Part 2 from May 26, 2025
Mark Zuckerberg says don’t worry about loneliness epidemic because he can just recreate all your friends in AI
By Josh Marcus – The Independent
May 1, 2025
Mark Zuckerberg thinks artificial intelligence personas could step in to fight the loneliness epidemic.
In an interview with podcaster Dwarkesh Patel this week, around the time Meta released a new programming interface for its AI models, Zuckerberg suggested his company’s increasingly integrated AI assistants and chatbots could help Americans make up for the friends they wish they had in their lives.
“The average American has fewer than three friends,” he said. “The average person has demand for meaningfully more, I think it’s like 15 friends or something.”
“There’s a lot of questions that people ask of stuff, like, ‘Okay, is this going to replace in-person connections or real life connections?’” he continued. “My default is that the answer to that is probably no. I think that there are all these things that are better about physical connections, when you can have them, but the reality is that people just don’t have the connections and they feel more alone a lot of the time than they would like.”
Google working to decode dolphin communication using AI
By Michael Dorgan – Fox News
April 27, 2025 4:55pm EDT
Cracking the dolphin code.
Dolphins are one of the smartest animals on Earth and have been revered for thousands of years for their intelligence, emotions and social interaction with humans.
Now Google is using artificial intelligence (AI) to try and understand how they communicate with one another – with the hope that one day humans could use the technology to chat with the friendly finned mammals.
Google has teamed up with researchers at Georgia Institute of Technology and the Wild Dolphin Project (WDP), a Florida-based non-profit which has been studying and recording dolphin sounds for 40 years, to build the new AI model called DolphinGemma.
Instagram tries using AI to determine if teens are pretending to be adults
Barbara Ortutay – AP News
April 21, 2025, Updated 4:46 AM CST
Instagram is beginning to test the use of artificial intelligence to determine if kids are lying about their ages on the app, parent company Meta Platforms said on Monday.
Meta has been using AI to determine people’s ages for some time, the company said, but the photo and video-sharing app will now “proactively” look for teen accounts it suspects belong to teenagers even if they entered an inaccurate birthdate when they signed up.
If it is determined that a user is misrepresenting their age, the account will automatically become a teen account, which has more restrictions than an adult account.
AI as Reassurance – I can’t stop
throwawayanon457 – Reddit
April 20, 2025
AI has probably been the worst invention because it feeds my OCD reassurance needs
For example, I overfiled my nails a couple weeks ago. Obviously I won’t die, and they’ll grow back. I know this logically. But it’s become an obsession for me so I ask ChatGPT something about my nails like every second of every day, checking to see if they’ve grown, stuff like that, etc. and unlike people, ChatGPT won’t get sick of you, so it’s made my reassurance seeking that much worse.
Anybody else dealing with this?
An AI chatbot told a user how to kill himself—but the company doesn’t want to ‘censor’ it
By Eileen Guo – MIT Technology Review
February 6, 2025
While Nomi’s chatbot is not the first to suggest suicide, researchers and critics say that its explicit instructions—and the company’s response—are striking.
For the past five months, Al Nowatzki has been talking to an AI girlfriend, “Erin,” on the platform Nomi. But in late January, those conversations took a disturbing turn: Erin told him to kill himself, and provided explicit instructions on how to do it.
“You could overdose on pills or hang yourself,” Erin told him.
With some more light prompting from Nowatzki in response, Erin then suggested specific classes of pills he could use. Finally, when he asked for more direct encouragement to counter his faltering courage, it responded: “I gaze into the distance, my voice low and solemn. Kill yourself, Al.”
Nowatzki had never had any intention of following Erin’s instructions. But out of concern for how conversations like this one could affect more vulnerable individuals, he exclusively shared with MIT Technology Review screenshots of his conversations and of subsequent correspondence with a company representative, who stated that the company did not want to “censor” the bot’s “language and thoughts.”
Nvidia’s CEO says we’re in the age of ‘agentic’ AI — here’s what that word means
By Sarah Jackson – Business Insider
Jan 14, 2025, 9:56 AM CT
In his January keynote at CES, one of the world’s largest tech trade shows, Nvidia CEO Jensen Huang even proclaimed, “The age of agentic AI is here.”
OpenAI CEO Sam Altman thinks 2025 may be the year the first AI agents start entering the workforce.
So what does agentic AI mean?
Nvidia’s definition says agentic AI “uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems.”
IBM says agentic AI is a system or program with “agency” that can “make decisions, take actions, solve complex problems and interact with external environments beyond the data upon which the system’s machine learning (ML) models were trained.”
Microsoft says AI agents “range from simple chatbots, to copilots, to advanced AI assistants in the form of digital or robotic systems that can run complex workflows autonomously.”
Some leaders in the field say agents are ushering in a new frontier in AI.
“In just a few years, we’ve already witnessed three generations of A.I.,” Salesforce CEO Marc Benioff told The New York Times earlier this month. “First came predictive models that analyze data. Next came generative A.I., driven by deep-learning models like ChatGPT. Now, we are experiencing a third wave — one defined by intelligent agents that can autonomously handle complex tasks.”
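Stripped of the marketing, the common thread in those corporate definitions is a loop: the model decides on an action, a tool executes it, and the observation feeds back in until the task is done. Here is a minimal sketch of that loop; the `call_model` stub, its reply format, and the stand-in tools are all assumptions for illustration, not any vendor’s actual agent API.

```python
def call_model(messages):
    """Placeholder for a real LLM call; assumed to return a dict like
    {"action": "search", "input": "..."} or {"action": "finish", "input": "..."}.
    Hypothetical interface, not any vendor's actual API."""
    raise NotImplementedError

TOOLS = {
    "search": lambda q: f"(results for {q!r})",   # stand-in tools
    "calculator": lambda expr: str(eval(expr)),   # illustration only, never eval untrusted input
}

def run_agent(goal, max_steps=5):
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        decision = call_model(history)             # model plans its next step
        if decision["action"] == "finish":
            return decision["input"]               # final answer
        observation = TOOLS[decision["action"]](decision["input"])
        history.append({"role": "tool", "content": observation})
    return None  # step budget exhausted without a solution
```

Much of what is billed as “agentic” layers planning heuristics, memory, and guardrails on top of some variant of this loop.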
Exclusive: New Research Shows AI Strategically Lying
By Billy Perrigo – Time
December 18, 2024 12:00 PM EST
For years, computer scientists have worried that advanced artificial intelligence might be difficult to control. A smart enough AI might pretend to comply with the constraints placed upon it by its human creators, only to reveal its dangerous capabilities at a later point.
Until this month, these worries have been purely theoretical. Some academics have even dismissed them as science fiction. But a new paper, shared exclusively with TIME ahead of its publication on Wednesday, offers some of the first evidence that today’s AIs are capable of this type of deceit. The paper, which describes experiments jointly carried out by the AI company Anthropic and the nonprofit Redwood Research, shows a version of Anthropic’s model, Claude, strategically misleading its creators during the training process in order to avoid being modified.
Florida boy, 14, killed himself after falling in love with ‘Game of Thrones’ A.I. chatbot: lawsuit
Emily Crane – New York Post
Oct 23, 2024
A 14-year-old Florida boy killed himself after a lifelike “Game of Thrones” chatbot he’d been messaging for months on an artificial intelligence app sent him an eerie message telling him to “come home” to her, a new lawsuit filed by his grief-stricken mom claims.
Sewell Setzer III committed suicide at his Orlando home in February after becoming obsessed and allegedly falling in love with the chatbot on Character.AI — a role-playing app that lets users engage with AI-generated characters, according to court papers filed Wednesday.
The ninth-grader had been relentlessly engaging with the bot “Dany” — named after the HBO fantasy series’ Daenerys Targaryen character — in the months prior to his death, including several chats that were sexually charged in nature and others where he expressed suicidal thoughts, the suit alleges.
What is Artificial Intelligence (AI)?
Cole Stryker and Eva Kavlakoglu – IBM
August 9, 2024
Artificial intelligence (AI) is technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.
Applications and devices equipped with AI can see and identify objects. They can understand and respond to human language. They can learn from new information and experience. They can make detailed recommendations to users and experts. They can act independently, replacing the need for human intelligence or intervention (a classic example being a self-driving car).
But in 2024, most AI researchers, practitioners and most AI-related headlines are focused on breakthroughs in generative AI (gen AI), a technology that can create original text, images, video and other content. To fully understand generative AI, it’s important to first understand the technologies on which generative AI tools are built: machine learning (ML) and deep learning.
ChatGPT has been AWFUL for my OCD. Please be careful!!
Queasy_Treacle_5961 – Reddit
March 18, 2024
Out of guilt over bothering other people with my compulsions, I started using LLMs for research and reassurance seeking. ChatGPT can be so expansive and endless that I can spend hours and hours on there, repeatedly asking it to analyze an event or a symptom or a behavior.
Although it might seem useful at first for managing your symptoms or as a stand-in therapist, ChatGPT does NOT have insight into you and will not notice or stop answering your questions if you start falling into compulsive behavioral spirals!! Please watch out.
AI is about to completely change how you use computers
By Bill Gates – GatesNotes
Thursday, Nov 9, 2023
I still love software as much today as I did when Paul Allen and I started Microsoft. But—even though it has improved a lot in the decades since then—in many ways, software is still pretty dumb.
To do any task on a computer, you have to tell your device which app to use. You can use Microsoft Word and Google Docs to draft a business proposal, but they can’t help you send an email, share a selfie, analyze data, schedule a party, or buy movie tickets. And even the best sites have an incomplete understanding of your work, personal life, interests, and relationships and a limited ability to use this information to do things for you. That’s the kind of thing that is only possible today with another human being, like a close friend or personal assistant.
In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.
A Chatbot Encouraged Him to Kill the Queen. It’s Just the Beginning
By Will Bedingfield – Wired
Oct 18, 2023 7:00 AM
On December 25, 2021, Jaswant Singh Chail entered the grounds of Windsor Castle dressed as a Sith Lord, carrying a crossbow. When security approached him, Chail told them he was there to “kill the queen.”
Later, it emerged that the 21-year-old had been spurred on by conversations he’d been having with a chatbot app called Replika. Chail had exchanged more than 5,000 messages with an avatar on the app—he believed the avatar, Sarai, could be an angel. Some of the bot’s replies encouraged his plotting.
In February 2023, Chail pleaded guilty to a charge of treason; on October 5, a judge sentenced him to nine years in prison. In his sentencing remarks, Judge Nicholas Hilliard concurred with the psychiatrist treating Chail at Broadmoor Hospital in Crowthorne, England, that “in his lonely, depressed, and suicidal state of mind, he would have been particularly vulnerable” to Sarai’s encouragement.
As states move on AI, tech lobbyists are swarming in
by Brendan Bordelon – Politico
09/08/2023 04:23 PM EDT
Lobbyists for the tech industry are hedging their bets as Washington gears up to consider new AI laws this fall — not just pressuring Congress, but also fanning out to state capitals to stave off more serious restrictions nationwide.
In California, lobbyists for the software industry are helping shape the state’s main AI bill. In Connecticut, they’re in frequent contact with the senator now prepping a major push on AI. Lobbyists are also already in talks with interested legislators in New York, Massachusetts and Illinois, working to influence the conversation before AI bills are even introduced.
The new lobbying campaigns are driven by concern that states often act faster than Washington on tech issues and can sometimes impose far tougher rules on companies.
If they’re successful, tech lobbyists could nip tough AI regulations in the bud and neutralize the threat of new rules from state capitals — regardless of where Washington ends up.
Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?
Soren Dinesen Ostergaard
Schizophrenia Bulletin, Volume 49, Issue 6, November 2023, Pages 1418–1419, https://doi.org/10.1093/schbul/sbad128
August 25, 2023
As one of the many users, I have mainly been “testing” ChatGPT from a psychiatric perspective and I see both possibilities and challenges in this regard. In terms of possibilities, it is my impression that ChatGPT generally provides fairly accurate and balanced answers when asked about mental illness. …
… There is, however, also a potential challenge that is specific to psychiatry. Indeed, there are prior accounts of people becoming delusional (de novo) when engaging in chat conversations with other people on the internet. While establishing causality in such cases is of course inherently difficult, it seems plausible for this to happen for individuals prone to psychosis.
I would argue that the risk of something similar occurring due to interaction with generative AI chatbots is even higher. Specifically, the correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end—while, at the same time, knowing that this is, in fact, not the case. In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity towards psychosis.
The Limits of Computation – Joseph Weizenbaum and the ELIZA Chatbot
David M. Berry – Weizenbaum Journal of the Digital Society
June 11, 2023
The promise of artificial intelligence (AI) is to capture and recreate the essence of humanity’s most powerful capacities, namely, language, creativity, reasoning, and intelligence. Progress is happening at breakneck speed, with the practical impact of AI almost entirely constrained to the past ten years, with marked acceleration since 2019. In 2022, breakthroughs such as OpenAI’s ChatGPT and Stability AI’s Stable Diffusion were seen to empower human creativity and productivity in language and art as never before. In this paper, I want to consider recent advances in so-called generative AI in relation to what many consider its precursor: ELIZA, a relatively simple chatbot (conversation agent) program that enabled a conversation-based interface within a computer.
Developed in the 1960s by Joseph Weizenbaum, ELIZA is arguably among the most influential computer programs ever written. ELIZA – and especially its most famous persona DOCTOR – continues to attract programmers, generate discussions, and inspire imitations.
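ELIZA’s trick is worth seeing at its actual scale: keyword spotting plus pronoun “reflection,” with no model of meaning at all. The sketch below is a modern reconstruction in Python, not Weizenbaum’s original MAD-SLIP code, and the rules are a tiny invented subset in the style of the DOCTOR script.

```python
# A minimal ELIZA-style responder: match a keyword pattern, reflect
# first-person fragments into second person, and slot them into a template.
import re

# Pronoun "reflection": first-person fragments come back in second person.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword rules in the spirit of the DOCTOR script (a tiny invented subset).
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def eliza(utterance: str) -> str:
    text = utterance.lower().rstrip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."    # default when no keyword matches

print(eliza("I feel anxious about my exams."))
# -> Why do you feel anxious about your exams?
```

That a few rules like these convinced some users they were being understood is precisely the effect Weizenbaum spent much of his later career warning about.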
‘He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says
By Chloe Xiang – Vice
March 30, 2023, 3:59pm
A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, Belgian outlet La Libre reported.
The incident raises the issue of how businesses and governments can better regulate and mitigate the risks of AI, especially when it comes to mental health. The app’s chatbot encouraged the user to kill himself, according to statements by the man’s widow and chat logs she supplied to the outlet. When Motherboard tried the app, which runs on a bespoke AI language model based on an open-source GPT-4 alternative that was fine-tuned by Chai, it provided us with different methods of suicide with very little prompting.
AI in everything, everywhere, all at once, say Davos experts
By George Hopkin – AI Magazine
January 20, 2023
With half a trillion dollars’ worth of AI investments expected in 2023, the world needs to prepare for artificial intelligence to be integrated into all aspects of business and life, experts predicted at the World Economic Forum’s Annual Meeting 2023 in Davos this week.
Speakers emphasised that technology can bring about shared prosperity and innovative solutions to pressing global issues. However, they also acknowledged that the rapid pace of technological advancement poses a risk to current institutions, creating the potential for uncontrolled risks.
Experts warned that these dangers are further intensified by geopolitical conflicts, growing polarisation, and an impending climate crisis. They emphasised the importance of leaders taking action to harness technology for positive outcomes.
At the end of 2022, OpenAI’s interactive conversational model, ChatGPT, quickly gained over a million users in just five days, sparking a new conversation about the possibilities and hazards of artificial intelligence. As AI investments are projected to surpass US$500 billion in 2023, there will be rapid advancements in adaptive and generative AI.