
Meet the Humans Trying to Keep Us Safe From AI

June 27, 2023

A year ago, the idea of holding a meaningful conversation with a computer was the stuff of science fiction. But since OpenAI’s ChatGPT launched last November, life has started to feel more like a techno-thriller with a fast-moving plot. Chatbots and other generative AI tools are beginning to profoundly change how people live and work. But whether this plot turns out to be uplifting or dystopian will depend on who helps write it.

Thankfully, just as artificial intelligence is evolving, so is the cast of people who are building and studying it. This is a more diverse crowd of leaders, researchers, entrepreneurs, and activists than those who laid the foundations of ChatGPT. Although the AI community remains overwhelmingly male, in recent years some researchers and companies have pushed to make it more welcoming to women and other underrepresented groups. And the field now includes many people concerned with more than just making algorithms or making money, thanks to a movement—led largely by women—that considers the ethical and societal implications of the technology. Here are some of the humans shaping this accelerating storyline. —Will Knight

About the Art

“I wanted to use generative AI to capture the potential and unease felt as we explore our relationship with this new technology,” says artist Sam Cannon, who worked alongside four photographers to enhance portraits with AI-crafted backgrounds. “It felt like a conversation—me feeding images and ideas to the AI, and the AI offering its own in return.”


Rumman Chowdhury

Photograph: Cheril Sanchez; AI art by Sam Cannon

Rumman Chowdhury led Twitter’s ethical AI research until Elon Musk acquired the company and laid off her team. She is the cofounder of Humane Intelligence, a nonprofit that uses crowdsourcing to reveal vulnerabilities in AI systems, designing contests that challenge hackers to induce bad behavior in algorithms. Its first event, scheduled for this summer with support from the White House, will test generative AI systems from companies including Google and OpenAI. Chowdhury says large-scale, public testing is needed because of AI systems’ wide-ranging repercussions: “If the implications of this will affect society writ large, then aren’t the best experts the people in society writ large?” —Khari Johnson


Sarah Bird

Photograph: Annie Marie Musselman; AI art by Sam Cannon

Sarah Bird’s job at Microsoft is to keep the generative AI that the company is adding to its office apps and other products from going off the rails. As she has watched text generators like the one behind the Bing chatbot become more capable and useful, she has also seen them get better at spewing biased content and harmful code. Her team works to contain that dark side of the technology. AI could change many lives for the better, Bird says, but “none of that is possible if people are worried about the technology producing stereotyped outputs.” —K.J.


Yejin Choi

Photograph: Annie Marie Musselman; AI art by Sam Cannon

Yejin Choi, a professor in the School of Computer Science & Engineering at the University of Washington, is developing an open source model called Delphi, designed to have a sense of right and wrong. She’s interested in how humans perceive Delphi’s moral pronouncements. Choi wants systems as capable as those from OpenAI and Google that don’t require huge resources. “The current focus on the scale is very unhealthy for a variety of reasons,” she says. “It’s a total concentration of power, just too expensive, and unlikely to be the only way.” —W.K.


Margaret Mitchell

Photograph: Annie Marie Musselman; AI art by Sam Cannon

Margaret Mitchell founded Google’s Ethical AI research team in 2017. She was fired four years later after a dispute with executives over a paper she coauthored. It warned that large language models—the tech behind ChatGPT—can reinforce stereotypes and cause other ills. Mitchell is now ethics chief at Hugging Face, a startup developing open source AI software for programmers. She works to ensure that the company’s releases don’t spring any nasty surprises and encourages the field to put people before algorithms. Generative models can be helpful, she says, but they may also be undermining people’s sense of truth: “We risk losing touch with the facts of history.” —K.J.


Inioluwa Deborah Raji

Photograph: Aysia Stieb; AI art by Sam Cannon

When Inioluwa Deborah Raji started out in AI, she worked on a project that found bias in facial analysis algorithms: They were least accurate on women with dark skin. The findings led Amazon, IBM, and Microsoft to stop selling face-recognition technology. Now Raji is working with the Mozilla Foundation on open source tools that help people vet AI systems for flaws like bias and inaccuracy—including large language models. Raji says the tools can help communities harmed by AI challenge the claims of powerful tech companies. “People are actively denying the fact that harms happen,” she says, “so collecting evidence is integral to any kind of progress in this field.” —K.J.


Daniela Amodei

Photograph: Aysia Stieb; AI art by Sam Cannon

Daniela Amodei previously worked on AI policy at OpenAI, helping to lay the groundwork for ChatGPT. But in 2021, she and several others left the company to start Anthropic, a public-benefit corporation charting its own approach to AI safety. The startup’s chatbot, Claude, has a “constitution” guiding its behavior, based on principles drawn from sources including the UN’s Universal Declaration of Human Rights. Amodei, Anthropic’s president and cofounder, says ideas like that will reduce misbehavior today and perhaps help constrain more powerful AI systems of the future: “Thinking long-term about the potential impacts of this technology could be very important.” —W.K.


Lila Ibrahim

Photograph: Ayesha Kazim; AI art by Sam Cannon

Lila Ibrahim is chief operating officer at Google DeepMind, a research unit central to Google’s generative AI projects. She considers running one of the world’s most powerful AI labs less a job than a moral calling. Ibrahim joined DeepMind five years ago, after almost two decades at Intel, in hopes of helping AI evolve in a way that benefits society. One of her roles is to chair an internal review council that discusses how to widen the benefits of DeepMind’s projects and steer away from bad outcomes. “I thought if I could bring some of my experience and expertise to help birth this technology into the world in a more responsible way, then it was worth being here,” she says. —Morgan Meaker


This article appears in the Jul/Aug 2023 issue.

Let us know what you think about this article. Submit a letter to the editor at mail@wired.com.
