Behind The Screen

Security

Why your org should plan for deepfake fraud before it happens

August 27, 2022

Some young people floss for a TikTok dance challenge. A couple posts a holiday selfie to keep friends updated on their travels. A budding influencer uploads their latest YouTube video. Unwittingly, each one is adding fuel to an emerging fraud vector that could become enormously challenging for businesses and consumers alike: deepfakes.

Deepfakes defined

Deepfakes get their name from the underlying technology: deep learning, a subset of artificial intelligence (AI) that imitates the way humans acquire knowledge. With deep learning, algorithms learn from vast datasets, unassisted by human supervisors. The bigger the dataset, the more accurate the algorithm is likely to become.
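
The data-scale effect described above can be illustrated with a toy experiment. This sketch uses a plain logistic-regression classifier rather than a deep network, and all the numbers (cluster means, sample sizes, learning rate) are made-up illustrative values; the point is only that the same learner, given more training data, tends to approach its best achievable test accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n_per_class):
    """Two overlapping Gaussian clusters in 2D, labeled 0 and 1."""
    a = rng.normal(loc=-1.0, scale=1.0, size=(n_per_class, 2))
    b = rng.normal(loc=+1.0, scale=1.0, size=(n_per_class, 2))
    X = np.vstack([a, b])
    y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)])
    return X, y

def train(X, y, steps=2000, lr=0.1):
    """Plain gradient-descent logistic regression (weights + bias)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))      # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)      # average log-loss gradient
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(np.mean((Xb @ w > 0) == y))

# Same learner, two training-set sizes, one shared held-out test set.
X_test, y_test = make_data(2000)
acc_small = accuracy(train(*make_data(10)), X_test, y_test)    # 20 samples
acc_large = accuracy(train(*make_data(2000)), X_test, y_test)  # 4000 samples
# With this cluster overlap the best achievable accuracy is roughly 92%;
# the large-data model should land near it, the 20-sample model usually lower.
```

The same dynamic is what makes the flood of public selfies and videos valuable to deepfake producers: every additional sample tightens the model.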

Deepfakes use AI to create highly convincing video or audio files that mimic a third party, for instance, a video of a celebrity saying something they did not, in fact, say. Deepfakes are produced for a broad range of reasons, some legitimate, some illegitimate. These include satire, entertainment, fraud, political manipulation, and the generation of “fake news.”

The danger of deepfakes

The threat deepfakes pose to society is real and present, given the clear risks of putting words into the mouths of powerful, influential, or trusted people such as politicians, journalists, or celebrities. Deepfakes also present a clear and increasing threat to businesses. The risks include:

  • Extortion: Threatening to release faked, compromising footage of an executive to gain access to corporate systems, data, or financial resources.
  • Fraud: Using deepfakes to mimic an employee and/or customer to gain access to corporate systems, data, or financial resources.
  • Authentication: Using deepfakes to manipulate ID verification or authentication that relies on biometrics such as voice patterns or facial recognition to access systems, data, or financial resources.
  • Reputation risk: Using deepfakes to damage the reputation of a company and/or its employees with customers and other stakeholders.

The impact on fraud

Of the risks associated with deepfakes, the impact on fraud is one of the more concerning for businesses today. This is because criminals are increasingly turning to deepfake technology to make up for declining yields from traditional fraud schemes, such as phishing and account takeover. These older fraud types have become more difficult to carry out as anti-fraud technologies have improved (for example, through the introduction of multifactor authentication and callback verification).

This trend coincides with the emergence of deepfake tools being made available as a service on the dark web, making it easier and cheaper for criminals to launch such fraud schemes, even if they have limited technical understanding. It also coincides with people posting massive volumes of images and videos of themselves on social media platforms — all great inputs for deep learning algorithms to become ever more convincing. 

There are three key new fraud types that security teams in enterprises should be aware of in this regard:

  • Ghost fraud: Where a criminal uses the data of a person who has died to create a deepfake that can be used, for example, to access online services or apply for credit cards or loans.
  • Synthetic ID fraud: Where fraudsters mine data from many different people to create an identity for a person who does not exist. The identity is then used to apply for credit cards or to carry out large transactions.
  • Application fraud: Where stolen or fake identities are used to open new bank accounts. The criminal then maxes out associated credit cards and loans.

Already, there have been a number of high-profile and costly fraud schemes that used deepfakes. In one case, a fraudster used deepfake voice technology to imitate a company director who was known to a bank branch manager, then defrauded the bank of $35 million. In another instance, criminals used a deepfake to impersonate a chief executive’s voice and demand a fraudulent transfer of €220,000 (about $224,000) from the executive’s junior officer to a fictional supplier. Deepfakes are therefore a clear and present danger, and organizations must act now to protect themselves.

Defending the enterprise

Given the increasing sophistication and prevalence of deepfake fraud, what can businesses do to protect their data, their finances, and their reputation? I have identified five key steps that all businesses should put in place today:

  1. Plan for deepfakes in response procedures and simulations. Deepfakes should be incorporated into your scenario planning and crisis tests. Plans should include incident classification and outline clear incident reporting processes, escalation and communication procedures, particularly when it comes to mitigating reputational risk.
  2. Educate employees. Just as security teams have educated employees to detect phishing emails, they should similarly raise awareness of deepfakes. As in other areas of cybersecurity, employees should be seen as an important line of defense, especially given the use of deepfakes for social engineering. 
  3. For sensitive transactions, have secondary verification procedures. Don’t trust; always verify. Have secondary methods for verification or callback, such as watermarking audio and video files, step-up authentication, or dual control.
  4. Put in place insurance protection. As the deepfake threat grows, insurers will no doubt offer a broader range of options. 
  5. Update risk assessments. Incorporate deepfakes into the risk assessment process for digital channels and services.
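
As one way to make step 3 concrete, here is a minimal sketch of a dual-control check for sensitive transactions. The threshold, the approver IDs, and the function name are all hypothetical illustrations, not a production control or a real banking API; the verification itself (for example, a callback on a known-good number) would happen outside this code.

```python
# Amounts at or above this illustrative threshold require two approvers.
DUAL_CONTROL_THRESHOLD = 10_000

def authorize_transfer(amount, approvers):
    """Approve a transfer only if dual control is satisfied.

    approvers: IDs of employees who independently verified the request,
    e.g. via a callback on a known-good number rather than the channel
    the request arrived on.
    """
    distinct = set(approvers)  # the same person approving twice doesn't count
    if amount < DUAL_CONTROL_THRESHOLD:
        return len(distinct) >= 1
    # High-value: require two different people. A deepfaked voice can fool
    # one approver, but compromising two independent verification paths
    # is much harder.
    return len(distinct) >= 2
```

The design choice here is that dual control defends against a convincing impersonation of any single person, which is exactly the failure mode in the voice-cloning cases described earlier.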

The future of deepfakes

In the years ahead, the technology will continue to evolve, and it will become harder to identify deepfakes. Indeed, as people and businesses take to the metaverse and Web3, it’s likely that avatars will be used to access and consume a broad range of services. Unless adequate protections are put in place, these digitally native avatars will likely prove easier to fake than human beings.

However, just as the technology will advance to exploit this, it will also advance to detect it. For their part, security teams should stay up to date on new advances in detection and other innovative technologies that help combat this threat. The direction of travel for deepfakes is clear; businesses should start preparing now.

David Fairman is the chief information officer and chief security officer of APAC at Netskope.
