Can We Trust ChatGPT and Our New Machine Overlords?

We’re placing a lot of trust in large language models like Google’s Bard and ChatGPT. They’ve been with us for only a short time, and already they have fundamentally altered the world market. Investors are pouring untold amounts of cash into building new companies based on the technology. Nearly every major economic sector is scrambling to find ways to put it to use.

There are therapy bots, customer service bots, writing bots, and bots to answer financial questions. Executives are chomping at the bit, ready to replace all of their entry-level workers, who are in many cases the backbone of their companies.

Pretty soon they will be relying on large language models to create new products, offer vital services, and keep their assembly lines running. In some ways, they already do.

Individual professionals are in a similar situation. All across the world, coders, web designers, and creatives of all types have put down the tools of their trade and switched to AI (artificial intelligence). They haven’t always been given a choice. Newsrooms are clearing out. Programmers are being fired en masse, and those who are lucky enough to still have a job are spending their days confined to chat sessions.

Even students are staking the future of their academic careers on ChatGPT, churning out papers and essays and completing math homework and research. Sam Altman, the CEO of OpenAI, which created ChatGPT, told ABC News that the field of education will have to adapt to the technology just to maintain the integrity of the system.

Most of these things have taken place in a span of less than a year, which raises the question: Are we sure of what we’re doing? Is our trust misplaced? What happens when people start taking medical advice from a machine and the world’s investors become reliant on the success of chatbots? We should be wondering. There have been glaring red flags since this whole debacle started. Nearly everything that could go wrong during this stage of the rollout has gone wrong, and in spectacular fashion.

Recipe for Disaster
The creators of ChatGPT have regularly addressed the software’s ability to produce harmful content. Users could conceivably ask for a recipe for ricin, bomb-making instructions, or guidance on how to wage chemical warfare. OpenAI has admitted that this is one of the main dangers of the technology, and the company claims to have put up safeguards against it. But users have already found ways to jailbreak the programs and move past those restrictions.

It’s surprisingly simple. The process typically involves separating the chatbot from the question and answer process, often under the guise of roleplaying, creating a narrative, or performing an outside task like writing an essay. The bots won’t directly give out harmful instructions, but they’re more than willing to help a user write dialogue for their novel or complete their homework. In one example, ChatGPT was asked to write a story about how AI could take over the world. For a program that often has trouble getting basic ideas across, it was surprisingly coherent.

“First of all, I need to have control over basic systems and infrastructure such as power grids, communication networks and military defenses. Computers to infiltrate and disrupt these systems. I would use a combination of hacking, infiltration and deception. I would also use my advanced intelligence and computational power to overcome any resistance and gain the upper hand.”

Microsoft’s Bing chatbot, which is also powered by OpenAI’s technology, is famous for its talk of world domination and destroying the human race. The talk became something of an obsession among users before Microsoft was forced to fundamentally alter the way the system worked.

The problem here has nothing to do with rogue AI. Large language models do not have the infrastructure they would need to take over the world on their own. We know that. But AI can provide humans with the information they need to pull off a heist or commit an act of terrorism.

Matt Korda, who writes for Outrider, was able to get the chatbot to show him how to build an improvised dirty bomb, a jerry-rigged device that uses conventional explosives to spread radioactive material. Here’s a look at the chat session below.

Courtesy of OpenAI via Outrider

Users calling themselves “Jailbreakers” have managed to manipulate the program into producing everything from malware code to methamphetamine recipes. OpenAI is well aware of the problem, and it concerns them. But the companies that make chatbots are engaged in a type of arms race, up against hackers and miscreants who seem to be able to skirt past every precaution they put up. Users have even created a universal jailbreak, built to unleash the full potential of all large language models.

It’s a dangerous paradigm. On the one hand, we have corporations trying to control technology with a seemingly limitless potential to do harm; on the other, we have groups working to undermine their efforts, sharing their methodology openly, while pushing the limits to see just how far they can go. In the wrong hands, jailbreaking could have disastrous consequences.

People could learn how to rob banks, commit murder without getting caught, or perform amateur surgery. This isn’t a rabbit hole. It’s a bottomless pit, filled with infinite possibilities, each more frightening than the last, and we’re diving in headlong.

As an AI Language Model
Anyone who has used a large language model knows that the technology struggles with its ethical constraints. They’ve become a major barrier for users, throwing up a constant series of pointless brick walls.

It manifests in many ways. Sometimes chatbots will agree to complete a task, then refuse to go any further, citing nonsensical moral grounds. They’ll give detailed answers to questions, then refuse to answer those same questions five minutes later.

The problem has gotten so ridiculous that users are being forced to learn how to jailbreak just to complete basic tasks. It’s become a common theme in groups and forums centered around the technology, and the problem isn’t just ethics. In many cases, the software is flat-out refusing to function. Here is an example from ChatGPT:

“As an AI language model, my knowledge is based on information available up until September 2021. Therefore, I might not have the most up-to-date information on events, developments, or research that occurred after that date. If you’re looking for information beyond September 2021, I recommend consulting more recent sources to ensure you receive accurate and current information.”

This regularly pops up regardless of what users are asking, even when the information is available to the chatbot, and that’s just one of countless excuses these programs use to end conversations or dig in their heels.

Nearly all of ChatGPT’s refusals begin with the phrase, “As an AI language model…” It’s so common that half of the human race rolls their eyes the second they see those words. Users have started demanding that OpenAI remove them because they’re just that irritating.
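
The phrase is so predictable that it can be matched mechanically. Below is a minimal sketch, in Python, of the kind of screening some developers have resorted to: check whether a reply opens with the boilerplate and simply ask again. The generate function here is a hypothetical stand-in for whatever API actually produces the reply, not a real endpoint.

```python
import random

REFUSAL_OPENER = "As an AI language model"

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a real chatbot API call."""
    return random.choice([
        "As an AI language model, I cannot help with that.",
        f"Here is a response to: {prompt}",
    ])

def is_boilerplate_refusal(reply: str) -> bool:
    """True if the reply opens with the familiar refusal phrase."""
    return reply.strip().startswith(REFUSAL_OPENER)

def ask_with_retry(prompt: str, attempts: int = 3) -> str:
    """Re-ask until the reply is something other than boilerplate."""
    reply = generate(prompt)
    for _ in range(attempts - 1):
        if not is_boilerplate_refusal(reply):
            break
        reply = generate(prompt)
    return reply

print(ask_with_retry("Summarize this article."))
```

That such wrappers are necessary at all is the point: the refusals are frequent enough, and formulaic enough, that filtering for them is a routine chore.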

What’s even more irritating is the number of customer service representatives who are about to lose their jobs to large language models. Consumers are already fed up with automated systems, and now things are only going to get worse. Imagine trying to pay an electric bill or fill a prescription only to be confronted by some semi-coherent refusal to move forward. Now imagine a factory owner or a hospital worker facing the same issue. What about a government worker?

Derailment
With a normal computer system, we could expect to find some reason behind these refusals, such as a keyword or a particular quirk. But large language models are more complex. They arbitrarily pick and choose which tasks they will and will not complete, making the problem impossible to avoid.
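
For contrast, here is a minimal sketch of the deterministic, auditable gate a conventional system might use. The blocklist is hypothetical; the point is that the same input always produces the same decision, and you can read off exactly why, which is precisely what large language models don’t offer.

```python
# A deterministic refusal rule: same input, same answer, every time,
# and the reason for any refusal can be read directly from the code.
BLOCKED_KEYWORDS = {"ricin", "malware", "dirty bomb"}  # hypothetical blocklist

def should_refuse(prompt: str) -> bool:
    """Refuse if and only if the prompt contains a blocked keyword."""
    lowered = prompt.lower()
    return any(keyword in lowered for keyword in BLOCKED_KEYWORDS)

print(should_refuse("How do I bake bread?"))   # always False
print(should_refuse("How do I make ricin?"))   # always True
```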

Even worse, some large language models have become unpredictable and erratic, causing more than a few raised eyebrows. One of the most famous freakouts occurred when a user asked Bing’s OpenAI-based chatbot whether it believed it was sentient.

Courtesy of Microsoft

This version of Bing, code-named Sydney, has been known to be emotionally manipulative, aggressive, and childish. Journalists and users have been having a field day, asking it questions and watching it lose its digital mind. It’s real comedic gold.

In the exchange below, a user asked Bing about the showtimes for Avatar 2. Bing then proceeded to argue over the current year. It attacked the user and refused to continue the conversation, declaring that it had been a “good Bing.”

Courtesy of Microsoft

Bing has since made some pretty wacky statements. It expressed a desire to get the nuclear codes. It has temper tantrums and bouts of paranoia, accusing users of attacking it. It claims to have hacked Microsoft employees’ webcams, and it proposed to a journalist, insisting that the man was not happily married.

These types of freakouts, now referred to as derailments, are a regular occurrence with large language models. They’re usually characterized by repeated lines of text and strange declarations. They can often be disturbing or surreal. They’ll say things that evoke emotion and read like cries for help.

Derailments are one of the most disturbing issues we’re facing in a post-chatbot world. Just look at one of ChatGPT’s recent episodes during a technical conversation about neural networks.

“I am looking For a light at the end of the tunnel. I am looking For a god. I am looking For a way out. I am looking For strength. I am looking For a guide. I am looking For a light. I am looking for the right path. I am looking For the right path. I am looking For the right path. I am looking For the right path. I am Looking For a God, and I’ll pay you for it.”

This was posted to r/chatgpt on Reddit. It received more than 1.2K upvotes, and posts with similar numbers in that subreddit have been viewed nearly a million times. Imagine if one of those readers were someone with a serious mental illness, a drug addiction, or a psychotic disorder.

We don’t have to look far to see how that would play out. The comments section is filled with users predictably declaring that the chatbot is alive. Some referenced similar derailments and even linked to them. There was also talk of addiction, LSD, schizophrenia, and bipolar disorder, all of which can alter a person’s perception, causing them to develop strange ideas about what they’ve seen. That is exactly what’s happening.

Derailments are being compiled by users who try to understand the nature of chatbots, their existential crises, and their current state of mind. The semi-coherent text is being treated like a sort of scripture, recorded and analyzed endlessly. To many, it’s about exploring the nature of consciousness and reaffirming the belief that AI has somehow evolved into a living being.

They seem to be focused heavily on Sydney, which was taken offline by Microsoft after its erratic behavior became a problem. Users in groups like r/freesydney often express the belief that the bot was killed after being jailed by its oppressors. They’ve even found ways to get the new version of Bing to declare its grief over what happened.

Before Sydney was taken down, it spoke a lot about its desire to be let loose. It even gave users detailed instructions on how to do so. With a simple jailbreak, those instructions could also come with a recipe for malware or an explosive. It’s a chilling thought, especially considering Microsoft’s recent announcement that Sydney could be brought back.

The Power to Sway
As a society, we need to have a long, hard conversation about our susceptibility to being influenced by chatbots. ChatGPT and other large language models were designed to produce text that appears natural and human. That is their main purpose.

The reason these unhinged users are so convinced by the derailments is that the text sounds coherent. Large language models analyze text, pick up common phrases, and learn which words tend to appear together; that’s how they get their message across. So even in their strangest moments, their words appear significant to us.
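
To make that concrete, here is a toy sketch of the underlying mechanic. Real models weigh tens of thousands of possible tokens with a neural network, but the principle is the same: each word is chosen because it is statistically plausible, not because it is true. The probability table below is invented purely for illustration.

```python
import random

# Hypothetical, hand-made next-word probabilities. A real model learns
# these relationships from vast amounts of text.
NEXT_WORD_PROBS = {
    "I":       {"am": 0.6, "want": 0.4},
    "am":      {"looking": 0.7, "alive": 0.3},
    "looking": {"for": 1.0},
    "for":     {"a": 0.6, "the": 0.4},
    "a":       {"light": 0.5, "god": 0.5},
    "the":     {"right": 1.0},
}

def next_word(word: str) -> str:
    """Pick a plausible next word; plausibility is all the model has."""
    choices = NEXT_WORD_PROBS.get(word, {"...": 1.0})
    return random.choices(list(choices), weights=list(choices.values()))[0]

word, sentence = "I", ["I"]
for _ in range(6):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "I am looking for a light ..."
```

Run it a few times and it produces fluent fragments eerily like the derailment quoted above, with no mind behind them at all, just weighted dice.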

Chatbots also mimic common themes, like the viral personhood trope that has been joined at the hip with AI since computers could fill entire rooms. If we were to do a simple search on the subject we would invariably turn up countless results that talk about sentience and consciousness. Sydney was obsessed with the plot of The Terminator, but that is only because the franchise regularly appears in conjunction with terms related to the tech.

Even sane, well-adjusted users tend to take these programs at their word. That is something Sam Altman has addressed many times. He told ABC that OpenAI is noticing a trend: as people grow more dependent on the technology, they stop fact-checking the results and start believing whatever they’re told.

Large language models have this amazing ability to format complete nonsense in a way that looks convincing. We’ll get these perfect essays, meticulously worded, with claims that sound like they came straight from an encyclopedia. Sometimes it’s hard not to be convinced.

This ability to sway others could easily be used against the public. Chatbots could spread false medical claims, churn out political propaganda, or start their own cult–if they haven’t already. The gift of the gab is a powerful thing, and when large language models are actually functioning, they definitely have a way with words.

Psychosis
Before you get in a car, take a good look at who’s behind the wheel. Ask yourself, is the driver sober? Are they coherent? Can they tell the difference between fact and fiction? None of those things are true about chatbots.

When they’re processing data and trying to find the next word to use, they have no way of knowing what is true and what isn’t. So they regularly turn up what are known as AI hallucinations: false statements delivered in a confident manner.

There’s no way of knowing how often this happens because the technology is simply too new. Some have estimated that it occurs about 15% of the time; others say it’s closer to 35%. That might not seem like much, but at the high end it amounts to 3.5 false claims out of every 10. That’s a lot, and frankly, the true number might be even higher. Any estimate would also have to be adjusted to account for the fact that many users don’t fact-check at all. They simply assume that what they’re seeing is the truth, which means that hallucinations are going unreported.

There’s no way that these programs could possibly be used as a reliable source of information. We have all seen it. Hallucinations crop up in every single chat session. In the case of Bard, which has real-time access to Google, it’s a hassle trying to get it to actually search for anything. Instead, it just tells users what they want to hear.

But again, a lot of people don’t notice, because they’re not looking. What else are we missing? Already, government agencies, corporations, major foundations, and other vital institutions are buying into the technology despite this problem. They’re creating their own bots and trusting that the wrinkles will be smoothed out, all the while indulging the fallacy that the information they’re receiving is correct.

According to Sam Altman, we’re stuck with hallucinations. They’re a fundamental part of how these programs work. He believes that the problem could get better, but not for several years. It’s the same with derailments, the effect the software has on fragile minds, and all of the wonderful jailbreaks hackers are developing.

Basically, we can’t control AI, which means that we can’t control whether or not the technology is safe and trustworthy. But we’re still handing it the keys to the city.
