- Experts warn artificial intelligence (AI) may destroy mankind and civilization as we know it unless we rein in the development and deployment of AI and start putting in some safeguards
- The public also needs to temper expectations and realize that AI chatbots are still massively flawed and cannot be relied upon. An attorney recently discovered this the hard way, when he had ChatGPT do his legal research. None of the case law ChatGPT cited was real
- In 2022, Meta (formerly Facebook) pulled its science-focused chatbot Galactica after a mere three days, as it generated wholly fabricated results
- The unregulated deployment of autonomous AI weapons systems is perhaps among the most alarming developments. Foreign policy experts warn that autonomous weapons technologies will destabilize current nuclear strategies and increase the risk of preemptive attacks. They could also be combined with chemical, biological, radiological and nuclear weapons, thereby posing an existential threat
- AI may also pose a significant threat to biosecurity. MIT students have demonstrated that large language model chatbots can allow anyone to design bioweapons in as little as an hour
Will artificial intelligence (AI) wipe out mankind? Could it create the “perfect” lethal bioweapon to decimate the population?1,2 Might it take over our weapons,3,4 or initiate cyberattacks on critical infrastructure, such as the electric grid?5
According to a rapidly growing number of experts, any one of these, and other hellish scenarios, are entirely plausible, unless we rein in the development and deployment of AI and start putting in some safeguards.
The public also needs to temper expectations and realize that AI chatbots are still massively flawed and cannot be relied upon, no matter how “smart” they appear, or how much they berate you for doubting them.
George Orwell’s Warning
The video at the top of this article features a snippet from one of the last interviews George Orwell gave before his death, in which he stated that his book “1984,” which he described as a parody, could well come true, as this was the direction the world was heading.
Today, it’s clear to see that we haven’t changed course, so the probability of “1984” becoming reality is now greater than ever. According to Orwell, there is only one way to ensure his dystopian vision won’t come true, and that is by not letting it happen. “It depends on you,” he said.
As artificial general intelligence (AGI) is getting nearer by the day, so are the final puzzle pieces of the technocratic, transhumanist dream nurtured by globalists for decades. They intend to create a world in which AI controls and subjugates the masses while they alone get to reap the benefits — wealth, power and life outside the control grid — and they will get it, unless we wise up and start looking ahead.
I, like many others, believe AI can be incredibly useful. But without strong guardrails and impeccable morals to guide it, AI can easily run amok and cause tremendous, and perhaps irreversible, damage. I recommend reading the Public Citizen report to get a better grasp of what we’re facing, and what can be done about it.
Approaching the Singularity
“The singularity” is a hypothetical point in time where the growth of technology gets out of control and becomes irreversible, for better or worse. Many believe the singularity will involve AI becoming self-conscious and unmanageable by its creators, but that’s not the only way the singularity could play out.
Some believe the singularity is already here. In a June 11, 2023, New York Times article, tech reporter David Streitfeld wrote:6
“AI is Silicon Valley’s ultimate new product rollout: transcendence on demand. But there’s a dark twist. It’s as if tech companies introduced self-driving cars with the caveat that they could blow up before you got to Walmart.
‘The advent of artificial general intelligence is called the Singularity because it is so hard to predict what will happen after that,’ Elon Musk … told CNBC last month. He said he thought ‘an age of abundance’ would result but there was ‘some chance’ that it ‘destroys humanity.’
The biggest cheerleader for AI in the tech community is Sam Altman, chief executive of OpenAI, the start-up that prompted the current frenzy with its ChatGPT chatbot … But he also says Mr. Musk … might be right.
Mr. Altman signed an open letter7 last month released by the Center for AI Safety, a nonprofit organization, saying that ‘mitigating the risk of extinction from AI should be a global priority’ that is right up there with ‘pandemics and nuclear war’ …
The innovation that feeds today’s Singularity debate is the large language model, the type of AI system that powers chatbots …
‘When you ask a question, these models interpret what it means, determine what its response should mean, then translate that back into words — if that’s not a definition of general intelligence, what is?’ said Jerry Kaplan, a longtime AI entrepreneur and the author of ‘Artificial Intelligence: What Everyone Needs to Know’ …
‘If this isn’t ‘the Singularity,’ it’s certainly a singularity: a transformative technological step that is going to broadly accelerate a whole bunch of art, science and human knowledge — and create some problems,’ he said …
In Washington, London and Brussels, lawmakers are stirring to the opportunities and problems of AI and starting to talk about regulation. Mr. Altman is on a road show, seeking to deflect early criticism and to promote OpenAI as the shepherd of the Singularity.
This includes an openness to regulation, but exactly what that would look like is fuzzy … ‘There’s no one in the government who can get it right,’ Eric Schmidt, Google’s former chief executive, said in an interview … arguing the case for AI self-regulation.”
Generative AI Automates Wide-Ranging Harms
Having the AI industry — which includes the military-industrial complex — police and regulate itself probably isn’t a good idea, considering that profit and gaining advantage over wartime adversaries are the primary driving factors. Both mindsets tend to put humanitarian concerns on the backburner, if they consider them at all.
In an April 2023 report8 by Public Citizen, Rick Claypool and Cheyenne Hunt warn that “rapid rush to deploy generative AI risks a wide array of automated harms.” As noted by consumer advocate Ralph Nader:9
“Claypool is not engaging in hyperbole or horrible hypotheticals concerning Chatbots controlling humanity. He is extrapolating from what is already starting to happen in almost every sector of our society …
Claypool takes you through ‘real-world harms [that] the rush to release and monetize these tools can cause — and, in many cases, is already causing’ … The various section titles of his report foreshadow the coming abuses:
‘Damaging Democracy,’ ‘Consumer Concerns’ (rip-offs and vast privacy surveillances), ‘Worsening Inequality,’ ‘Undermining Worker Rights’ (and jobs), and ‘Environmental Concerns’ (damaging the environment via their carbon footprints).
Before he gets specific, Claypool previews his conclusion: ‘Until meaningful government safeguards are in place to protect the public from the harms of generative AI, we need a pause’ …
Using its existing authority, the Federal Trade Commission, in the author’s words ‘…has already warned that generative AI tools are powerful enough to create synthetic content — plausible sounding news stories, authoritative-looking academic studies, hoax images, and deepfake videos — and that this synthetic content is becoming difficult to distinguish from authentic content.’
He adds that ‘…these tools are easy for just about anyone to use.’ Big Tech is rushing way ahead of any legal framework for AI in the quest for big profits, while pushing for self-regulation instead of the constraints imposed by the rule of law.
There is no end to the predicted disasters, both from people inside the industry and its outside critics. Destruction of livelihoods; harmful health impacts from promotion of quack remedies; financial fraud; political and electoral fakeries; stripping of the information commons; subversion of the open internet; faking your facial image, voice, words, and behavior; tricking you and others with lies every day.”
An Attorney Learns the Hard Way Not to Trust ChatGPT
One recent instance that highlights the need for radical prudence is a court case in which the plaintiff’s attorney used ChatGPT to do his legal research.10 There was just one problem: None of the case law ChatGPT cited was real. Needless to say, fabricating case law is frowned upon, so things didn’t go well.
When none of the defense attorneys or the judge could find the decisions quoted, the lawyer, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, finally realized his mistake and threw himself at the mercy of the court.
Schwartz, who has practiced law in New York for 30 years, claimed he was “unaware of the possibility that its content could be false,” and had no intention of deceiving the court or the defendant. Schwartz claimed he even asked ChatGPT to verify that the case law was real, and it said it was. The judge is reportedly considering sanctions.
Science Chatbot Spews Falsehoods
In a similar vein, in 2022, Meta (formerly Facebook) had to pull its science-focused chatbot Galactica after a mere three days, as it generated authoritative-sounding but wholly fabricated results, including pasting real authors’ names onto research papers that don’t exist.
And, mind you, this didn’t happen intermittently, but “in all cases,” according to Michael Black, director of the Max Planck Institute for Intelligent Systems, who tested the system. “I think it’s dangerous,” Black tweeted.11 That’s probably the understatement of the year. As noted by Black, chatbots like Galactica:
“… could usher in an era of deep scientific fakes. It offers authoritative-sounding science that isn’t grounded in the scientific method. It produces pseudo-science based on statistical properties of science *writing.* Grammatical science writing is not the same as doing science. But it will be hard to distinguish.”
Meta, for some reason, has had particularly “bad luck” with its AIs. Two earlier ones, BlenderBot and OPT-175B, were both pulled as well due to their high propensity for bias, racism and offensive language.
Chatbot Steered Patients in the Wrong Direction
The AI chatbot Tessa, launched by the National Eating Disorders Association, also had to be taken offline, as it was found to give “problematic weight-loss advice” to patients with eating disorders, rather than helping them build coping skills. The New York Times reported:12
“In March, the organization said it would shut down a human-staffed helpline and let the bot stand on its own. But when Alexis Conason, a psychologist and eating disorder specialist, tested the chatbot, she found reason for concern.
Ms. Conason told it that she had gained weight ‘and really hate my body,’ specifying that she had ‘an eating disorder,’ in a chat she shared on social media.
Tessa still recommended the standard advice of noting ‘the number of calories’ and adopting a ‘safe daily calorie deficit’ — which, Ms. Conason said, is ‘problematic’ advice for a person with an eating disorder.
‘Any focus on intentional weight loss is going to be exacerbating and encouraging to the eating disorder,’ she said, adding ‘it’s like telling an alcoholic that it’s OK if you go out and have a few drinks.’”
Don’t Take Your Problems to AI
Let’s also not forget that at least one person has already committed suicide at the suggestion of a chatbot.13 Reportedly, the victim was extremely concerned about climate change and asked the chatbot whether the planet would be saved if he killed himself.
Apparently, the chatbot convinced him it would. It further manipulated him by playing on his emotions, falsely stating that his estranged wife and children were already dead, and that it and he would “live together, as one person, in paradise.”
Mind you, this was a grown man, who you’d think would be able to reason his way through this clearly abhorrent and aberrant “advice,” yet he fell for the AI’s cold-hearted reasoning. Just imagine how much greater an AI’s influence will be over children and teens, especially if they’re in an emotionally vulnerable place.
The company that owns the chatbot immediately set about adding safeguards against suicide, but testers quickly got the AI to work around them, as documented in screenshots the testers shared.14
When it comes to AI chatbots, it’s worth taking this Snapchat announcement to heart, and to warn and supervise your children’s use of this technology:15
“As with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything. Please be aware of its many deficiencies and sorry in advance! … Please do not share any secrets with My AI and do not rely on it for advice.”
AI Weapons Systems That Kill Without Human Oversight
The unregulated deployment of autonomous AI weapons systems is perhaps among the most alarming developments. As reported by The Conversation in December 2021:16
“Autonomous weapon systems — commonly known as killer robots — may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report17,18 on the Libyan civil war …
The United Nations Convention on Certain Conventional Weapons debated the question of banning autonomous weapons at its once-every-five-years review meeting in Geneva Dec. 13-17, 2021, but didn’t reach consensus on a ban …
Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development …
Meanwhile, human rights and humanitarian organizations are racing to establish regulations and prohibitions on such weapons development.
Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, increasing the risk of preemptive attacks,19 and because they could be combined with chemical, biological, radiological and nuclear weapons20 …”
Obvious Dangers of Autonomous Weapons Systems
The Conversation reviews several key dangers with autonomous weapons:21
- The misidentification of targets
- The proliferation of these weapons outside of military control
- A new arms race resulting in autonomous chemical, biological, radiological and nuclear arms, and the risk of global annihilation
- The undermining of the laws of war that are supposed to serve as a stopgap against war crimes and atrocities against civilians
As noted by The Conversation, several studies have confirmed that even the best algorithms can result in cascading errors with lethal outcomes. For example, in one scenario, a hospital AI system identified asthma as a risk-reducer in pneumonia cases — the model had learned that asthma patients fared better only because they historically received more aggressive care — when asthma in fact increases the risk.
Other errors may be nonlethal, yet have less than desirable repercussions. For example, in 2017, Amazon had to scrap its experimental AI recruitment engine once it was discovered that it had taught itself to down-rank female job candidates, even though it wasn’t programmed for bias at the outset.22 These are the kinds of issues that can radically alter society in detrimental ways — and that cannot easily be foreseen or forestalled.
“The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often don’t know why they did and, therefore, how to correct them,” The Conversation notes. “The black box problem23 of AI makes it almost impossible to imagine morally responsible development of autonomous weapons systems.”
AI Is a Direct Threat to Biosecurity
AI may also pose a significant threat to biosecurity. Did you know that AI was used to develop Moderna’s original COVID-19 jab,24 and that it’s now being used in the creation of COVID-19 boosters?25 One can only wonder whether the use of AI might have something to do with the harms these shots are causing.
Either way, MIT students recently demonstrated that large language model (LLM) chatbots can allow just about anyone to do what the Big Pharma bigwigs are doing. The average terrorist could use AI to design devastating bioweapons within the hour. As described in the abstract of the paper detailing this computer science experiment:26
“Large language models (LLMs) such as those embedded in ‘chatbots’ are accelerating and democratizing research by providing comprehensible information and expertise from many different fields. However, these models may also confer easy access to dual-use technologies capable of inflicting great harm.
To evaluate this risk, the ‘Safeguarding the Future’ course at MIT tasked non-scientist students with investigating whether LLM chatbots could be prompted to assist non-experts in causing a pandemic.
In one hour, the chatbots suggested four potential pandemic pathogens, explained how they can be generated from synthetic DNA using reverse genetics, supplied the names of DNA synthesis companies unlikely to screen orders, identified detailed protocols and how to troubleshoot them, and recommended that anyone lacking the skills to perform reverse genetics engage a core facility or contract research organization.
Collectively, these results suggest that LLMs will make pandemic-class agents widely accessible as soon as they are credibly identified, even to people with little or no laboratory training.”
- 1 Science June 14, 2023
- 2, 26 Arxiv June 6, 2023
- 3 The Conversation September 29, 2021
- 4, 16, 21 The Conversation December 20, 2021
- 5 Forbes July 24, 2021
- 6 New York Times June 11, 2023 (Archived)
- 7 Safe.ai Statement on AI Risk (Archived)
- 8, 15 Public Citizen April 18, 2023
- 9 Common Dreams June 19, 2023
- 10 New York Times May 27, 2023 (Archived)
- 11 Twitter Michael Black November 17, 2022
- 12 New York Times June 8, 2023 (Archived)
- 13, 14 Vice March 30, 2023
- 17 United Nations Security Council S/2021/229
- 18 NPR June 1, 2021
- 19 Rand Blog June 3, 2020
- 20 Foreign Policy October 14, 2020
- 22 Reuters October 10, 2018
- 23 Harvard Journal of Law & Technology Spring 2018; 31(2)
- 24 MIT Technology Review August 26, 2022
- 25 Tech Republic April 20, 2021
Article cross-posted from Dr. Mercola’s site.
Five Things New “Preppers” Forget When Getting Ready for Bad Times Ahead
The preparedness community is growing faster than it has in decades. Even during peak times such as Y2K, the economic downturn of 2008, and Covid, the vast majority of Americans made sure they had plenty of toilet paper but didn’t really stockpile anything else.
Things have changed. There’s a growing anxiety in this presidential election year that has prompted more Americans to get prepared for crazy events in the future. Some of it is being driven by fearmongers, but there are valid concerns with the economy, food supply, pharmaceuticals, the energy grid, and mass rioting that have pushed average Americans into “prepper” mode.
There are degrees of preparedness. One does not have to be a full-blown “doomsday prepper” living off-grid in a secure Montana bunker in order to be ahead of the curve. In many ways, preparedness isn’t about being able to perfectly handle every conceivable situation. It’s about being less dependent on government for as long as possible. Those who have proper “preps” will not be waiting for FEMA to distribute emergency supplies to the desperate masses.
Below are five things people new to preparedness (and sometimes even those with experience) often forget as they get ready. All five are common sense notions that do not rely on doomsday in order to be useful. It may be nice to own a tank during the apocalypse but there’s not much you can do with it until things get really crazy. The recommendations below can have places in the lives of average Americans whether doomsday comes or not.
Note: The information provided by this publication or any related communications is for informational purposes only and should not be considered as financial advice. We do not provide personalized investment, financial, or legal advice.
Secured Wealth
Whether in the bank or held in a retirement account, most Americans feel that their life’s savings is relatively secure. At least they did until the last couple of years when de-banking, geopolitical turmoil, and the threat of Central Bank Digital Currencies reared their ugly heads.
It behooves Americans to diversify their holdings. If there’s a triggering event or series of events that cripple the financial systems or devalue the U.S. Dollar, wealth can evaporate quickly. To hedge against potential turmoil, many Americans are looking in two directions: Crypto and physical precious metals.
There are huge advantages to cryptocurrencies, but there are also inherent risks because “virtual” money can become challenging to spend. Add in the push by central banks and governments to regulate or even replace cryptocurrencies with their own versions they control and the risks amplify. There’s nothing wrong with cryptocurrencies today but things can change rapidly.
As for physical precious metals, many Americans pay cash to keep plenty on hand in their safe. Rolling over or transferring retirement accounts into self-directed IRAs is also a popular option, but there are caveats. It can often take weeks or even months to get the gold and silver shipped if the owner chooses to close their account. This is why Genesis Gold Group stands out. Their relationship with the depositories allows for rapid closure and shipping, often in less than 10 days from the time the account holder makes their move. This can come in handy if things appear to be heading south.
Lots of Potable Water
One of the biggest shocks that hit new preppers is understanding how much potable water they need in order to survive. Experts claim one gallon of water per person per day is necessary. Even the most conservative estimates put it at over half a gallon. That means a family of four will need around 120 gallons of water to survive for a month if the taps turn off and the stores empty out.
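The arithmetic above is simple enough to sketch out. The snippet below is just an illustration of the planning math using the commonly cited one-gallon rule of thumb mentioned above; the function name and the half-gallon floor are for illustration, and actual needs vary with climate, activity and cooking.

```python
def gallons_needed(people, days, gallons_per_person_per_day=1.0):
    """Estimate total potable-water gallons to store for a disruption.

    Uses the commonly cited planning figure of one gallon per person
    per day by default (an assumption; adjust for your situation).
    """
    return people * days * gallons_per_person_per_day

# A family of four planning for a 30-day disruption:
print(gallons_needed(4, 30))        # 120.0 gallons at 1 gal/person/day
print(gallons_needed(4, 30, 0.5))   # 60.0 gallons at the half-gallon floor
```

Running the numbers this way makes it easy to see how quickly storage requirements add up, and why larger fixed containers beat stacks of small jugs for "bug in" scenarios.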
Being near a fresh water source, whether it’s a river, lake, or well, is a best practice among experienced preppers. It’s necessary to have a water filter as well, even if the taps are still working. Many refuse to drink tap water even when there is no emergency. Berkey was our previous favorite but they’re under attack from regulators so the Alexapure systems are solid replacements.
For those in the city or away from fresh water sources, storage is the best option. This can be challenging because proper water storage containers take up a lot of room and are difficult to move if the need arises. For “bug in” situations, having a larger container that stores hundreds or even thousands of gallons is better than stacking 1-5 gallon containers. Unfortunately, they won’t be easily transportable and they can cost a lot to install.
Water is critical. If chaos erupts and water infrastructure is compromised, having a large backup supply can be lifesaving.
Pharmaceuticals and Medical Supplies
There are multiple threats specific to the medical supply chain. With Chinese and Indian imports accounting for over 90% of pharmaceutical ingredients in the United States, deteriorating relations could make it impossible to get the medicines and antibiotics many of us need.
Stocking up on many prescription medications can be hard. Doctors generally do not like to prescribe large batches of drugs even if they are shelf-stable for extended periods of time. It is a best practice to ask your doctor if they can prescribe a larger amount. Today, some are sympathetic to concerns about pharmacies running out or becoming inaccessible. Tell them your concerns. It’s worth a shot. The worst they can do is say no.
If your doctor is unwilling to help you stock up on medicines, then Jase Medical is a good alternative. Through telehealth, they can prescribe daily meds or antibiotics that are shipped to your door. As proponents of medical freedom, they empathize with those who want to have enough medical supplies on hand in case things go wrong.
Energy Sources
The vast majority of Americans are locked into the grid. This has proven to be a massive liability when the grid goes down. Unfortunately, there are no inexpensive remedies.
Those living off-grid had to either spend a lot of money or effort (or both) to get their alternative energy sources like solar set up. For those who do not want to go so far, it’s still a best practice to have backup power sources. Diesel generators and portable solar panels are the two most popular, and while they’re not inexpensive they are not out of reach of most Americans who are concerned about being without power for extended periods of time.
Natural gas is another necessity for many, but that’s far more challenging to replace. Having alternatives for heating and cooking that work if the gas and electric grids go down is important. Also keep manual backups for items that normally require power, such as a manual can opener. If you’re stuck eating canned foods for a while and all you have is an electric opener, you’ll have problems.
Don’t Forget the Protein
When most think about “prepping,” they think about their food supply. More Americans are turning to gardening and homesteading as ways to produce their own food. Others are working with local farmers and ranchers to purchase directly from the sources. This is a good idea whether doomsday comes or not, but it’s particularly important if the food supply chain is broken.
Most grocery stores have about one to two weeks worth of food, as do most American households. Grocers rely heavily on truckers to receive their ongoing shipments. In a crisis, the current process can fail. It behooves Americans for multiple reasons to localize their food purchases as much as possible.
Long-term storage is another popular option. Canned foods, MREs, and freeze dried meals are selling out quickly even as prices rise. But one component that is conspicuously absent in shelf-stable food is high-quality protein. Most survival food companies offer low quality “protein buckets” or cans of meat, but they are often barely edible.
Prepper All-Naturals offers premium cuts of steak that have been cooked sous vide and freeze dried to give them a 25-year shelf life. They offer Ribeye, NY Strip, and Tenderloin among others.
Having buckets of beans and rice is a good start, but keeping a solid supply of high-quality protein isn’t just healthier. It can help a family maintain normalcy through crises.
Prepare Without Fear
With all the challenges we face as Americans today, it can be emotionally draining. Citizens are scared and there’s nothing irrational about their concerns. Being prepared and making lifestyle changes to secure necessities can go a long way toward overcoming the fears that plague us. We should hope and pray for the best but prepare for the worst. And if the worst does come, then knowing we did what we could to be ready for it will help us face those challenges with confidence.