Behind The Screen
Startup

How to Stop ChatGPT from Going Off the Rails

December 26, 2022

When Startup asked me to cover this week’s newsletter, my first instinct was to ask ChatGPT—OpenAI’s viral chatbot—to see what it came up with. It’s what I’ve been doing with emails, recipes, and LinkedIn posts all week. Productivity is way down, but sassy limericks about Elon Musk are up 1000 percent.

I asked the bot to write a column about itself in the style of Steven Levy, but the results weren’t great. ChatGPT served up generic commentary about the promise and pitfalls of AI, but didn’t really capture Steven’s voice or say anything new. As I wrote last week, it was fluent, but not entirely convincing. But it did get me thinking: Would I have gotten away with it? And what systems could catch people using AI for things they really shouldn’t, whether that’s work emails or college essays?

To find out, I spoke to Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute who speaks eloquently about how to build transparency and accountability into algorithms. I asked her what that might look like for a system like ChatGPT.

Amit Katwala: ChatGPT can pen everything from classical poetry to bland marketing copy, but one big talking point this week has been whether it could help students cheat. Do you think you could tell if one of your students had used it to write a paper?

Sandra Wachter: This will start to be a cat-and-mouse game. The tech is maybe not yet good enough to fool me as a person who teaches law, but it may be good enough to convince somebody who is not in that area. I wonder if technology will get better over time to where it can trick me too. We might need technical tools to make sure that what we’re seeing is created by a human being, the same way we have tools for deepfakes and detecting edited photos.

That seems inherently harder to do for text than for deepfaked imagery, because there are fewer artifacts and telltale signs. Perhaps any reliable solution will need to be built by the company that's generating the text in the first place.

You do need to have buy-in from whoever is creating that tool. But if I’m offering services to students I might not be the type of company that is going to submit to that. And there might be a situation where even if you do put watermarks on, they’re removable. Very tech-savvy groups will probably find a way. But there is an actual tech tool [built with OpenAI’s input] that allows you to detect whether output is artificially created. 

What would a version of ChatGPT that had been designed with harm reduction in mind look like? 

A couple of things. First, I would really argue that whoever is creating those tools put watermarks in place. And maybe the EU’s proposed AI Act can help, because it deals with transparency around bots, saying you should always be aware when something isn’t real. But companies might not want to do that, and maybe the watermarks can be removed. So then it’s about fostering research into independent tools that look at AI output. And in education, we have to be more creative about how we assess students and how we write papers: What kind of questions can we ask that are less easily fakeable? It has to be a combination of tech and human oversight that helps us curb the disruption.
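
To give a rough sense of how a text watermark could be detected in practice, here is a minimal Python sketch. It is purely illustrative and is not the tool Wachter refers to: it assumes a hypothetical scheme in which the generator quietly nudges its sampling toward a pseudo-random "green list" of tokens, so that a detector can later run a simple one-proportion z-test on how many green tokens a passage contains. The hash-based green-list rule and the 50 percent split are assumptions made up for this example.

    import hashlib
    import math

    GREEN_FRACTION = 0.5  # assumed share of the vocabulary favoured at each step

    def is_green(prev_token: str, token: str) -> bool:
        # Pseudo-randomly assign `token` to the green list, seeded by the previous
        # token. This hash rule is an illustrative assumption, not any vendor's scheme.
        digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
        return digest[0] / 255.0 < GREEN_FRACTION

    def watermark_z_score(tokens: list[str]) -> float:
        # One-proportion z-test: how far does the observed count of green tokens
        # sit above what unwatermarked text would produce by chance?
        n = len(tokens) - 1
        hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
        expected = GREEN_FRACTION * n
        std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
        return (hits - expected) / std

    text = "the chatbot wrote a limerick about a billionaire and his rockets".split()
    print(round(watermark_z_score(text), 2))  # ordinary human text should score near zero

A human-written passage should score close to zero, while text generated with such a bias would score well above the expected range. As Wachter notes, though, paraphrasing or light editing can wash the signal out, which is why she argues for independent detection research alongside watermarking.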
