Tristan Harris and Aza Raskin claim that our dreams and thoughts can now be decoded; imagine what authoritarian states could do with such power. AI can even read the Wi-Fi radio signals bouncing around a room to estimate 3D images of the people inside.
Tristan Harris and Aza Raskin are the co-founders of the Center for Humane Technology, which aims to warn people about how technology affects individuals, organizations, and society. It looks for ways to manage these effects, encourages leaders to take action, and offers resources for those interested in humane technology.
FYI: Harris was Google’s Design Ethicist, developing a framework for how technology should “ethically” steer the thoughts and actions of billions of people from screens. Raskin is a trained mathematician and dark matter physicist. He is the son of Jef Raskin, a human–computer interface expert who was the initiator of the Macintosh project at Apple.
In May 2023, Harris and Raskin attended Summit At Sea, where they discussed how existing AI capabilities already pose catastrophic risks to a functional society, how AI companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions for a post-AI world.
This video educated me about the serious consequences we might face if greater control over advanced AI models—specifically generative models like Multimodal Large Language Models (MLLMs)—isn’t implemented.
Tristan Harris explains the complex terms behind “generative large language multimodal models” (AI models that handle language and images). To make things simpler, he suggests calling them “Golems.” This is inspired by the Golem in Jewish mythology—a creature that comes to life and gains unexpected abilities, much like these AI models. As more data is fed to them, they start displaying new skills even their creators didn’t anticipate.
Although many AI demos look different on the surface, they often function in similar ways. For example, tools like Stable Diffusion or DALL-E let users type words to generate images. If you type “Google soup,” the model turns it into a picture. However, Raskin points out that these “Golems” don’t truly “understand” what they’re doing; they process information without real comprehension of meaning.
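For readers who want to see what such a demo looks like in code, here is a minimal sketch, assuming the open-source diffusers library and a public Stable Diffusion checkpoint; the talk doesn't say which tooling was actually used.

```python
# Minimal text-to-image sketch using the open-source diffusers library (an assumption;
# the talk does not specify the tooling). Requires: pip install diffusers torch
import torch
from diffusers import StableDiffusionPipeline

# Load a public Stable Diffusion checkpoint (model id chosen for illustration).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU; drop torch_dtype for CPU

# The prompt from the talk: typed words in, a picture out.
image = pipe("Google soup").images[0]
image.save("google_soup.png")
```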
Your Dreams And Thoughts Are No Longer Safe
Harris and Raskin describe research that turns brain patterns into images by studying brain scans from people looking at pictures. Raskin explains that a person sits in an fMRI machine, which records brain activity while they look at an image. The computer learns to match each brain pattern with the image the person is viewing.
When the computer is shown only the brain patterns (without seeing the actual image), it tries to guess what the person saw. For example, if someone looked at a giraffe, the computer would use only the brain data to create an image it believes represents the giraffe.
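As a rough sketch of that pipeline: fit a mapping from brain patterns to image features, then use brain data alone to retrieve the closest image. Everything below is synthetic stand-in data and a deliberately simple linear model, not the actual research code.

```python
# Sketch of the brain-to-image idea: pair brain patterns with image embeddings,
# fit a mapping, then guess the image from brain data alone. Synthetic data only;
# real studies use fMRI recordings and much richer models.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_voxels, n_embed = 200, 1000, 64

image_embeddings = rng.normal(size=(n_train, n_embed))      # stand-in for image features
true_mixing = rng.normal(size=(n_embed, n_voxels))
brain_patterns = image_embeddings @ true_mixing + rng.normal(scale=0.5, size=(n_train, n_voxels))

# Step 1: learn brain pattern -> image embedding while the subject views known images.
decoder = Ridge(alpha=10.0).fit(brain_patterns, image_embeddings)

# Step 2: given only a new brain pattern, predict the embedding and retrieve the nearest image.
new_embedding = rng.normal(size=(1, n_embed))                # e.g. the "giraffe" photo
new_brain = new_embedding @ true_mixing + rng.normal(scale=0.5, size=(1, n_voxels))
predicted = decoder.predict(new_brain)

gallery = np.vstack([image_embeddings, new_embedding])       # candidate images
gallery_n = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
pred_n = predicted / np.linalg.norm(predicted)
best = int(np.argmax(gallery_n @ pred_n.T))                  # cosine similarity match
print("retrieved the held-out image:", best == n_train)
```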
The latest research uses Stable Diffusion, the same technology used in digital art creation, to make these brain-reading images even clearer. Surprisingly, tools made for art are now helping decode human thoughts.
The research has advanced to the point where it can capture people’s inner dialogue—the words they think to themselves. In one experiment, people watched a video and thought about what they saw. The computer was able to recreate their thoughts, like “a girl gets hit and falls,” based solely on brain patterns.
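The inner-dialogue work follows the same recipe, except the target is text rather than a picture. A hedged sketch, again with synthetic stand-ins for the real embeddings and decoder: predict a sentence embedding from brain activity, then pick the closest candidate caption.

```python
# Sketch of inner-speech decoding as retrieval: predict a sentence embedding from brain
# activity, then pick the closest caption from a candidate set. Embeddings are synthetic
# stand-ins; the actual research decodes from fMRI with trained language models.
import numpy as np

rng = np.random.default_rng(1)
captions = [
    "a girl gets hit and falls",
    "a man walks a dog in the park",
    "two people argue in a kitchen",
    "a car drives down an empty road",
]
caption_embeddings = rng.normal(size=(len(captions), 32))    # stand-in for text embeddings

# Suppose a decoder (trained as in the image example) predicted this embedding
# from the viewer's brain activity while they thought about the video.
predicted = caption_embeddings[0] + rng.normal(scale=0.2, size=32)

# Retrieve the caption whose embedding is closest to the prediction (cosine similarity).
norm = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
scores = norm(caption_embeddings) @ norm(predicted)
print("decoded thought:", captions[int(np.argmax(scores))])  # -> "a girl gets hit and falls"
```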
This kind of brain decoding raises obvious concerns, especially in authoritarian states, since it could be used to track what people think or dream about.
Wi-Fi Radio Signals Can Track Living Beings
Harris and Raskin explain how AI can use Wi-Fi signals, like the ones from your home router, to locate and track people in a room. Think of it like sonar: the Wi-Fi signals bounce around the space and reflect off whatever is in it. In the experiment they describe, researchers first used a camera along with the Wi-Fi signals to track where people were. The camera acted as one “eye,” and the Wi-Fi signals were the other.
After the AI learned how to recognize where people were, they removed the camera, leaving only the Wi-Fi signals. Now, just by analyzing the way Wi-Fi signals bounce, the AI could figure out people’s positions in 3D and in real-time—even in the dark. Essentially, this technology could turn any Wi-Fi router into a sort of “camera” that can track movement without needing light.
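A minimal sketch of that training setup, with synthetic numbers in place of real Wi-Fi channel measurements: the camera's position estimates supervise a small network that only ever sees the radio features, and once trained, the camera can be removed.

```python
# Sketch of camera-supervised Wi-Fi sensing: while a camera is present, its position
# estimates act as labels for a model that only sees Wi-Fi channel features (CSI).
# After training, the camera is removed and positions come from Wi-Fi alone.
# All data here is synthetic; real systems use many antennas and full pose keypoints.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_samples, n_csi_features = 2000, 128

csi = torch.randn(n_samples, n_csi_features)          # stand-in Wi-Fi channel features
true_map = torch.randn(n_csi_features, 3)
camera_xyz = csi @ true_map * 0.05                    # "labels" a camera would provide

model = nn.Sequential(nn.Linear(n_csi_features, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training phase: camera and Wi-Fi observe the room together.
for _ in range(300):
    optimizer.zero_grad()
    loss = loss_fn(model(csi), camera_xyz)
    loss.backward()
    optimizer.step()

# Deployment phase: camera removed, position estimated from Wi-Fi alone (works in the dark).
new_csi = torch.randn(1, n_csi_features)
print("estimated 3D position:", model(new_csi).detach().numpy())
```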
Hacking Wi-Fi routers would normally be a complicated task, but computer code is just another language these models speak. Raskin shares a real example in which they asked an AI (like GPT) to find a security weakness in code from a mail server and to write a script to take advantage of that weakness. In about ten seconds, the AI produced code that could exploit it.
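In practice, that workflow amounts to handing source code to a model and asking what is wrong with it. A minimal sketch using the OpenAI Python client; the model name and the vulnerable snippet are assumptions for illustration, since the talk doesn't share the actual code.

```python
# Sketch of asking a language model to audit code for security weaknesses.
# Requires: pip install openai, and an OPENAI_API_KEY in the environment.
# The model name is an assumption; the talk only says "an AI like GPT" was used.
from openai import OpenAI

client = OpenAI()

mail_server_snippet = """
def handle_login(user, password):
    query = "SELECT * FROM users WHERE name='" + user + "' AND pw='" + password + "'"
    return db.execute(query)   # hypothetical vulnerable snippet for illustration
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Review this mail-server code and describe any security weakness you find:\n"
                   + mail_server_snippet,
    }],
)
print(response.choices[0].message.content)   # e.g. points out the SQL injection
```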
While AI can’t yet be used to hack Wi-Fi routers directly, Raskin believes that in the near future—whether in one, two, or five years—it will become much easier to use existing technology for widespread surveillance. Essentially, take it as a warning that as AI improves, it could turn all the hardware we have into powerful tools for monitoring people.
AI can copy you and generate your image, voice & full appearance in the most natural way possible
If we want to manage and control technology better, we need to understand its effects. For instance, new AI can copy someone’s voice using just three seconds of audio. In the examples they share, the first three seconds are the real person’s voice; the computer then generates the rest of the speech on its own, sounding only slightly robotic.
One scary thought is that someone could call your child, record their voice, and then use it to trick you into thinking it’s really them calling for help. This has already happened in real life, like when a teenager’s voice was cloned for a million-dollar kidnapping scam.
People are already used to seeing changes in social media, where you can make photos look better or change voices instantly. Filters on apps like TikTok have evolved so much that they can create very realistic images of people.
For example, one influencer made a virtual version of herself to use as a kind of digital friend, allowing people to interact with her avatar. This means that in today’s world, it’s hard to trust what you see or hear, even in videos.
Banks and other institutions still think they can confirm your identity by recognizing your voice or face on video calls, but that doesn’t work anymore because of AI technology. This year marks a big change: photographic and video evidence may no longer be reliable, and we need to catch up to this reality.
2024 will be the last human election
Democracy relies on language, and our society operates through it. Language shapes law, code, and religion. Yuval Harari, the author of Sapiens, made this point in a New York Times article: if someone can manipulate language, they can control how people think and act.
Imagine asking an AI like GPT to explain biblical events in light of current events, taking religious ideas and using them to make sense of what is happening in the world today. By doing this, you can reshape people’s understanding of reality and relationships. Language is what makes us human, and technology now allows us to alter it.
Yuval compared the power of nuclear weapons in the real world to the impact of artificial intelligence (AI) in the virtual world. He noted that the last time a non-human force created large, influential belief systems was with religion. Now, AI has the potential to form significant narratives that can deeply affect how people think.
Looking ahead, Harris and Raskin predicted that the 2024 election could be the last one mostly run by humans. This doesn’t mean robots will take over; rather, they point out that since 2008, political campaigns have been using technology to test which messages work best with voters. By 2028, content created by AI is expected to be so much more effective than what humans can produce that human-generated messages will carry far less weight in elections.
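For a sense of what “testing which messages work best” means mechanically, here is a minimal sketch of comparing two message variants by response rate; the counts are invented, and real campaigns run this at far larger scale, increasingly with AI writing the variants.

```python
# Sketch of A/B message testing: show two variants to similar groups of voters,
# then check whether the difference in response rates is statistically meaningful.
# The counts below are invented for illustration.
import math

def two_proportion_ztest(hits_a, n_a, hits_b, n_b):
    """Two-sided z-test comparing two response rates."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value from the normal CDF
    return z, p_value

# Variant A: 420 of 5,000 voters responded; Variant B: 510 of 5,000 responded.
z, p = two_proportion_ztest(420, 5000, 510, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")   # small p -> variant B's wording reliably performs better
```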