ChatGPT could increase ‘threat vector’ for cyberattacks and misinformation, experts warn: ‘Deepfake generator’

Speaking with Fox News Digital, cybersecurity and artificial intelligence (AI) experts warned that tools such as ChatGPT could reduce the time and resources criminals need to carry out cyberattacks and misinformation campaigns.

Kaseya chief information security officer Jason Manar said AI is particularly troubling because of its ability to create entirely fake videos and images of real people, known as deepfakes.

In 2022, the Department of Homeland Security (DHS) released a document highlighting the "increasing threat of deepfake identities" to national security. It included examples of spoofed videos featuring former President Obama, Meta CEO Mark Zuckerberg, podcaster Joe Rogan, actor Tom Cruise and others, many of which have been viewed millions of times.

Manar asked people to imagine a deepfake of a political candidate, complete with the right background and the right people for a particular event, in which the politician says something false or divisive that would hurt the campaign. Now imagine it surfacing a week before the election.

"You get to a point where you either believe everything you see, or you have what's called the liar's dividend, where maybe it's actually true, but because of these deepfakes, the person comes out and says, 'Not at all, it's a deepfake,' and then the liar gets the benefit of the doubt," Manar said. "It becomes really difficult to prove what we see with our eyes and a recording."

Dr. Chris Mattmann, NASA JPL's chief technology and innovation officer and division manager for artificial intelligence, pointed to the recently circulated images of former President Trump and of the Pope wearing a Balenciaga coat as examples of how AI-generated deepfakes can fool internet users.

Mattmann, whose work has previously been funded by the DHS and the Defense Advanced Research Projects Agency (DARPA), said the impact of these deepfakes will vary from person to person, and that older generations or people with busy lives may be less likely to successfully separate fact from fiction.

He also said these types of deepfakes are likely to grow in frequency and become more realistic with the help of tools such as ChatGPT, which Mattmann described as a "widely available and easy-to-use deepfake generator."

According to a UBS study, ChatGPT reached 100 million monthly active users in January, making it the fastest-growing consumer application ever. With that in mind, Mattmann said the tool's broad appeal expands the "threat vector" space.

"Now anyone can do this for $20 a month," Mattmann said.

He added, “Any one of them becomes a media company with the potential to put out large amounts of material that doesn’t really exist. It’s wild from a national security perspective.”

Neatsun Ziv, co-founder and CEO of OX Security, an end-to-end software supply chain security company, agreed that ChatGPT's move into the mainstream, giving millions of people access to its data and capabilities, could lead to trouble.

“If we think that the era of fake news on Twitter and Facebook is really bad, I think we’re on to another level right now,” he said.

For example, ChatGPT's dataset includes knowledge that already exists in the market. If a company is not aware of what these models know about it and is unable to build systems to detect that exposure, it can be left susceptible to attack, Ziv said.

“It’s only a matter of time until they get hacked. That’s what we’re seeing on a weekly basis right now,” he added.

Ziv also said that once these tools are fully understood and acquired by cybergangs, those groups will be better able to target companies dealing with insurance and fraud, as well as critical government infrastructure.

“You can write code by describing what you want to do,” he said. “So the effort to create new things, new code, new malware, new attacks is being reduced and it’s not only reduced in time, but it’s also reduced in complexity.”

Recently, ChatGPT was used to create a new strain of polymorphic malware.

According to a technical report by CyberArk security researchers, the malware can easily “evade security products and make mitigation cumbersome with very little effort or investment by the adversary.”

To create the malware, the team bypassed the content filters that prevent the chatbot from becoming a dangerous tool by rephrasing the same question in a more authoritative fashion or by using an API version of the program.

“You can do a lot of cool things with artificial intelligence at scale,” said Eric Noonan, CEO of CyberSheath. “On the other side of that coin, a bad actor can do bad things more efficiently and on a larger scale.”

"A lot of times, it doesn't really matter where the attack came from, whether it's a nation-state or whether it came via artificial intelligence or traditional means, because as a defender, in that moment, what you're focusing on is really defense, recovery and resilience," he said.

Noonan, who served on the council's cybersecurity expert panel, said AI will become a more pressing concern as it matures. For now, though, he highlighted the importance of making sure critical sectors are adequately protected and have proper mitigation strategies in place.

"As we've seen with breaches, whether it's the Office of Personnel Management, SolarWinds or Colonial Pipeline, we know these critical infrastructure sectors are vulnerable and can be breached today, and so AI is potentially another tool the adversary could use at scale to be more effective," he said.

Manar said most attacks stem from misconfigurations and human error, such as when an employee clicks on a phishing or whaling attempt. Phishing attacks masquerade as legitimate emails; whaling attacks are phishing attempts targeted at senior executives.

Manar, who previously served as the assistant special agent in charge overseeing all cyber, counterintelligence, intelligence and language services programs for the FBI's San Diego office, said that if AI can access a sample of a person's writing or emails, it can tailor an attack to match however the target is used to communicating or receiving information.

“AI will help tremendously in this by crafting intelligent, precise, targeted responses,” he said.

As an example, Manar noted that while working healthcare fraud cases for the FBI, he saw criminals come onto the government's radar because they used CMS billing codes that were out of the ordinary: very expensive and rarely used.

He said that today, someone can ask ChatGPT for the most commonly used CMS home healthcare billing codes, and it will return the top five.

"It'll help you stay safe from obscurity, won't it? You're going to join the crowd," Manar said. "It makes the criminal element a little smarter and gives them information that they might not otherwise have, or that would require an amount of research they wouldn't normally put in."

Noonan also noted that "data poisoning" of a benign AI system within a company by a bad actor could shut down a business, much as an attack on a payroll system can shut down the manufacturing side of a business.

"If they are using AI to make decisions in their business and in critical infrastructure sectors, then AI is another vulnerability," he said. "So they may be using it for a legitimate commercial purpose. But if an adversary can penetrate and corrupt the AI to make nefarious decisions, that's another risk that I don't know has been given much thought."

To prevent these types of attacks, Noonan said, setting and enforcing mandatory minimum cybersecurity requirements is paramount.

Noonan also pointed to President Biden's May 12, 2021, executive order on cybersecurity, which emphasizes implementing a "zero trust" architecture and helping government entities improve cyber threat reporting, as a good first step.
