Deepfakes and distrust — defending our digital identities
S1E23 | Highlights and impact of this week's top themes in digital security.
Hi, Friends —
✰ Welcome to [our digital disco]! Keep scrolling for this week’s key themes in tech. You can also check out the last newsletter here.
I had an interesting conversation last week about the future of scam artistry.
The discussion revolved around Gilbert Chikli, a con artist who repeatedly conned some of the world's most powerful leaders out of sums as large as $50M with little more than a phone call. Chikli's rise to infamy speaks to the simple power of social engineering in fraud.
Even more concerning, however, is the increasingly low barrier to entry for wannabe criminals inspired by Chikli. The surge in generative AI capable of producing human-like text, audio, images, and video — the synthetic media commonly known as deepfakes — has raised alarms. This content has become drastically easier to create, elevating fears of an age of "cheap fakes." We're essentially watching a game of cat and mouse unfold: as one iteration of detection tools emerges, AI developers craft new models that elude it.
Identity and its verification have become the linchpin of our modern world, influencing how we interact, transact, and even decide the fate of nations. Online payments require us to prove who we are, ensuring our financial transactions are secure and protected. Instances of fraud and impersonation are growing concerns (think, $8.8B of losses in the US just last year), making identity verification a crucial defense mechanism. Moreover, as elections face new challenges with AI-generated art and misinformation campaigns, verifying the identities of candidates and voters has become pivotal in preserving the integrity of our democratic processes.
Today we’re exploring some of the trends surrounding scam artists, the deepfake market, and some of the tech seeking to maintain digital trust and security.
Let’s dive in.
☞ The next wave of scams
Last month, cybersecurity experts identified deepfake video technology being marketed for phishing scams on hacker forums. The potential consequences are concerning, as scammers can use these deepfakes to execute extortion, fraud, and social engineering attacks. With rates as low as $20 per minute, the ease and relatively low cost of access raise concerns about the potential widespread use of deepfake video calls.
Developments in AI-driven video manipulation allow hackers to create increasingly convincing fake videos of individuals in real time, making it challenging to distinguish between genuine and fraudulent interactions. This technology raises the alarming prospect of scammers posing as trusted figures (e.g., your boss, cousin, or political representative) during video calls, tricking individuals into revealing sensitive information or making unauthorized transactions.
AI-generated images and videos can also be weaponized to swing election votes and public opinion. In the upcoming year, numerous democracies worldwide face significant elections, raising concerns about the influence of these emerging technologies. AI-driven misinformation campaigns have already made their mark in politics, with deepfake videos appearing in election campaigns. Just last week Ron DeSantis, a US Republican running against former president Trump, released fake, AI-generated images as an attack against his opponent. Such images can be harnessed for political gain, serving as tools to propagate misinformation and sow confusion among voters, and campaigns and other actors may leverage AI to create persuasive yet deceptive visuals, further complicating the public's ability to distinguish fact from fiction.

As these elections approach, there's a pressing need for robust content moderation, media literacy initiatives, and ethical guidelines to address the potential misuse of AI-generated images, safeguard the integrity of democratic processes, and foster responsible use of this technology.
Recent tests conducted on AI text-to-image generators have revealed that over 85% of prompts for creating misleading or false narratives related to elections were accepted by these tools, indicating a lack of effective content moderation. This accessibility and the low cost of generating misleading information raise concerns about the potential for coordinated disinformation campaigns during the upcoming elections. As 2024 approaches, vigilance and preparedness for the challenges posed by AI-driven misinformation campaigns in elections will be crucial.
☞ What role does identity verification play?
In simple terms, our digital identity encompasses all the information that defines us in the digital realm. It's the online version of ourselves, including our usernames, passwords, biometric data (think, the fingerprint that unlocks your iPhone), and even our online behavior. This digital identity is used for everything from accessing bank accounts to proving who we are on social media. Protecting it is crucial to safeguarding our privacy, financial security, and reputation. Yet today's verification tools have their limits and are proving insufficient at stopping illicit activity.
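To make that concrete, here is a minimal, purely illustrative Python sketch of the kinds of signals a digital identity bundles together. The field names and values are invented for illustration; real systems store and protect these signals very differently (salted, slow password hashing, protected biometric templates, and so on).

```python
from dataclasses import dataclass, field
from hashlib import sha256


def hash_secret(secret: bytes) -> str:
    # Toy hashing for illustration only; production systems use salted,
    # deliberately slow hashes such as bcrypt or Argon2.
    return sha256(secret).hexdigest()


@dataclass
class DigitalIdentity:
    """Illustrative bundle of the signals that make up an online identity."""
    username: str
    password_hash: str            # never the raw password
    biometric_template_hash: str  # e.g., derived from a fingerprint template
    behavioral_signals: dict = field(default_factory=dict)  # devices, habits, etc.


alice = DigitalIdentity(
    username="alice",
    password_hash=hash_secret(b"correct horse battery staple"),
    biometric_template_hash=hash_secret(b"<fingerprint template bytes>"),
    behavioral_signals={"usual_device": "iPhone 14", "typical_login_hour": 8},
)
print(alice.username, alice.password_hash[:12])
```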
Case in point: 1Byte. The Vietnam-based startup is known for developing and selling a suite of Android "stalkerware" surveillance apps that let perpetrators access their targets' personal data, conversations, and more without their knowledge or consent.
1Byte has been illegally selling its technology for years, moving payments under the radar of authorities by creating fake identities. A recent investigation by TechCrunch showed that 1Byte reaped substantial profits in the US by exploiting vulnerabilities in the tech and financial systems. The scheme involves fake identities backed by forged American passports and falsified proof of U.S. residency, allowing the startup to launder customer payments into bank accounts it controls while maintaining secrecy.
The findings about 1Byte's operations underscore the inadequacies of safeguards against fraud within the tech and financial sectors. They also highlight the thriving but risky business of selling surveillance software, and the challenges of combating these operations across national borders. As TheTruthSpy, 1Byte's flagship stalkerware app, continues to operate, it remains a persistent threat to individuals whose phones have been compromised, emphasizing the urgent need for stronger measures to counter such clandestine activities. Strengthening online identity verification processes is essential not only for the protection of individuals, but also for maintaining the integrity and security of the digital platforms and services that millions of people rely on daily.
☞ The future of identity verification
As the digital realm continues to expand its influence over our daily lives, traditional methods of verifying identity, such as passwords and PINs, have proven vulnerable to exploitation. Generative AI in particular has opened up a Pandora's box of fraudulent possibilities. From impersonating corporate executives to conducting elaborate phishing schemes, cybercriminals are leveraging these technologies to compromise sensitive information and financial assets.
The generative AI market is projected to surpass $109B by 2030, posing a massive opportunity for companies making — or verifying — AI-generated content. In response to this growing threat, forward-thinking companies are harnessing the power of AI and machine learning to bolster their defenses. They're developing sophisticated identity verification systems that analyze a multitude of factors beyond just passwords and biometrics. In doing so, they seek to fortify the barriers against unauthorized access and fraudulent activities, ushering in a new era of secure and reliable digital identities.
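As a rough illustration of what "analyzing a multitude of factors" can look like in practice, here is a toy risk-scoring sketch in Python. The signal names, weights, and thresholds are invented for this example and are not drawn from any vendor's actual system; real products combine far more signals with learned models rather than hand-set weights.

```python
# Hypothetical "beyond passwords" verification: combine several independent
# signals into a trust score, then step up authentication when the score is low.

SIGNAL_WEIGHTS = {
    "password_ok": 0.25,
    "device_recognized": 0.25,
    "ip_matches_history": 0.20,
    "biometric_match": 0.30,
}


def trust_score(signals: dict) -> float:
    """Return a score in [0, 1] from boolean verification signals."""
    return sum(weight for name, weight in SIGNAL_WEIGHTS.items() if signals.get(name))


def decide(signals: dict, allow_threshold: float = 0.7, stepup_threshold: float = 0.4) -> str:
    score = trust_score(signals)
    if score >= allow_threshold:
        return "allow"
    if score >= stepup_threshold:
        return "step-up"  # e.g., request a one-time code or a liveness check
    return "deny"


print(decide({"password_ok": True, "device_recognized": True, "biometric_match": True}))  # allow
print(decide({"password_ok": True, "ip_matches_history": True}))                          # step-up
```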
More than a dozen companies have sprung up to address the pressing issue of detecting AI's creative output, in a bid to distinguish between human- and machine-generated content. Industry giants such as Intel are developing solutions to identify deepfake videos, while government agencies like the Defense Advanced Research Projects Agency (DARPA) allocate substantial funds for deepfake detection programs. Numerous startups are tackling the problem as well. Some noteworthy ones to keep an eye on:
Reality Defender has set out to identify and stop deepfakes. The company offers a platform that scans text, images, video, and audio to detect deepfakes and manipulated media. Clients include NATO, the U.S. Department of Defense, Microsoft, and Visa.
Bureau is supporting identity verification by tokenizing identities to a single source of truth: their phone numbers. Aspects of an individual’s identity — digital assets such as emails and device IPs, as well as physical assets including documents and Facematch — are linked to this token. Companies can verify identities through Bureau’s token network.
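This is not Bureau's actual API; the following is only a toy Python sketch of the general idea as described above: derive a stable token from a phone number and link other identity attributes to it for later verification. The function names, token scheme, and stored values are all hypothetical.

```python
import hashlib

# In-memory stand-in for a token network: token -> linked identity attributes.
registry: dict[str, dict] = {}


def identity_token(phone_number: str) -> str:
    # Derive a stable token from the phone number (the "single source of truth").
    # Illustrative only: a real system would salt, encrypt, or vault this value.
    return hashlib.sha256(phone_number.encode()).hexdigest()[:16]


def link_attribute(phone_number: str, kind: str, value: str) -> None:
    token = identity_token(phone_number)
    registry.setdefault(token, {})[kind] = value


def verify(phone_number: str, kind: str, claimed_value: str) -> bool:
    token = identity_token(phone_number)
    return registry.get(token, {}).get(kind) == claimed_value


link_attribute("+15551234567", "email", "alice@example.com")
link_attribute("+15551234567", "document", "passport:ABC123")
print(verify("+15551234567", "email", "alice@example.com"))  # True
print(verify("+15551234567", "email", "mallory@example.com"))  # False
```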
Worldcoin is a particularly interesting example. Conceived by OpenAI CEO Sam Altman, Worldcoin's identity verification process involves users downloading an app and having their iris scanned — creating a cryptographic "hash" linked to their real identity for future verification. This effort is part of a broader mission that includes introducing a global currency and financial transaction application.

Worldcoin's approach to identity verification — linking digital identities to the physical world — might provide the company with an edge over platforms that rely solely on online solutions, as the online world is continuously evolving. The project's current CEO, Alex Blania, explained: "I fundamentally believe they're just going to get ripped apart by the next generation of [large language] models [off which ChatGPT was built] that are going to come out over the next 12 to 24 months, because neither digital content nor intelligence will be good enough to discriminate [who is or isn't human] anymore. You will need something that bridges to the physical world," he adds. "Everything else will break."
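Worldcoin's actual protocol isn't public at this level of detail, so the sketch below is only a toy illustration of the general "enroll a hash derived from a biometric, then compare on later checks" idea. In practice, iris captures vary from scan to scan, so real systems need fuzzy matching or error-tolerant encodings rather than exact hash comparison; the byte strings and function names here are made up.

```python
import hashlib

# Hashes stored at enrollment; raw biometric data is never kept in this sketch.
enrolled_hashes: set[str] = set()


def iris_hash(iris_template: bytes) -> str:
    # Toy stand-in for the cryptographic "hash" derived from an iris scan.
    return hashlib.sha256(iris_template).hexdigest()


def enroll(iris_template: bytes) -> str:
    h = iris_hash(iris_template)
    enrolled_hashes.add(h)
    return h


def is_verified_human(iris_template: bytes) -> bool:
    # Later checks only re-derive the hash and compare; no raw biometric leaves the device.
    return iris_hash(iris_template) in enrolled_hashes


enroll(b"<iris template bytes for Alice>")
print(is_verified_human(b"<iris template bytes for Alice>"))  # True
print(is_verified_human(b"<someone else's template>"))        # False
```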
✿ As always — any and all feedback is welcome! In the meantime, give someone a hug and say an ‘I love you’ this week. Make the world a little happier.