Artificial Intelligence: Scams That Look Authentic
Artificial intelligence tools are being used maliciously to send “hyper-personalized emails” so sophisticated that victims cannot tell they are fraudulent.
According to the Financial Times, AI bots are compiling information about unsuspecting email users by analyzing “their social media activity to determine which topics they might be most likely to respond to”.
The phishing emails, which appear to come from family and friends, are then sent to those users. Because of their personal nature, recipients are unable to tell that they are actually malicious.
“It’s getting worse and it’s getting very personal, and that’s why we suspect AI is behind a lot of it,” Kristy Kelly, chief information security officer at insurer Beazley, told the outlet.
“We’re starting to see very targeted attacks that have collected a large amount of information about a person.”
“AI is giving cybercriminals the ability to easily create more personalized and convincing emails and messages that appear to be from trusted sources,” security firm McAfee recently warned. “These types of attacks are expected to increase in sophistication and frequency.”
While many savvy Internet users now know the telltale signs of traditional email scams, it’s much harder to tell when these new personalized messages are fraudulent.
Gmail, Outlook and Apple Mail don’t yet have “adequate protections to stop this,” Forbes reports.
“Social engineering,” ESET cybersecurity advisor Jake Moore told Forbes, “has an impressive hold over people because of human interaction, but now that AI can apply the same tactics from a technological perspective, it’s becoming harder to mitigate unless people really start thinking about reducing what they post online.”
Bad actors are also able to use AI to write convincing phishing emails impersonating banks, online accounts and more. According to US Cybersecurity and Infrastructure Security Agency data cited by the Financial Times, more than 90% of successful breaches start with a phishing message.
These highly sophisticated scams can bypass security measures, and inbox filters designed to screen for scam emails may not be able to identify them, Nadezda Demidova, a cybercrime security researcher at eBay, told the Financial Times.
“The availability of generative AI tools lowers the entry threshold for advanced cybercrime,” Demidova said.
In a recent blog post, McAfee warned that 2025 would bring a wave of advanced artificial intelligence used to “create increasingly sophisticated and personalized cyber fraud.”
Software company Check Point issued a similar forecast for the new year.
“In 2025, AI will run both offense and defense,” said Dr. Dorit Dor, the company’s chief technology officer. “Security teams will rely on AI-powered tools tailored to their unique environments, but adversaries will respond with increasingly sophisticated, AI-driven phishing campaigns and deepfakes.”
To protect themselves, users should never click on links within emails unless they can verify the legitimacy of the sender. Experts also recommend strengthening account security with two-factor authentication and strong passwords or passkeys.
“At the end of the day,” Moore told Forbes, “whether AI has improved an attack or not, we need to remind people of these increasingly sophisticated attacks and how to think twice before transferring money or disclosing personal information when requested – no matter how credible the request appears.”
Image source: nypost.com