Chatbots on the rise: AI is now a match for natural ignorance

The hype surrounding AI-powered chatbots is enormous. But the advanced technology is also drawing cybercriminals onto the scene, who are exploiting the new possibilities for their schemes. Chester Wisniewski, cybersecurity expert at Sophos, gives a few examples that should give pause.

Chatbots are already at the point where they can fool a human nearly 100% of the time. (Image: BiancoBlue / Depositphotos.com)
The AI-based chatbot ChatGPT is making headlines around the world - and alongside the news from the stock market and copyright debates, IT security is also a focus of the discussion. The tool's recently expanded availability brings, despite all of the manufacturer's safeguards, new challenges when it comes to phishing lures and dialog-based scams such as romance scams via social networks or business email compromise (BEC) attacks.

Chatbots as helpers of cybercriminals

"One of the biggest risks is that attackers are using these platforms to significantly improve the quality of their phishing lures. This makes phishing attacks increasingly difficult to identify, even for alert users," said Chet Wisniewski, cybersecurity expert at Sophos. "Ultimately, the ever-improving AI chatbots provide a free upgrade for all kinds of social engineering attacks. Programs like ChatGPT can be used to create criminally oriented, highly realistic, interactive conversations via email or to launch chat attacks via Facebook Messenger, WhatsApp or other chat apps. Today, the greatest threat is to English-speaking audiences. However, it is probably only a matter of time before new versions are available to create credible texts in all the world's commonly spoken languages. We've reached a stage where humans are increasingly unable to distinguish machine-generated prose from that written by humans - especially if we don't know the other person well."

Scams - "optimized" by chatbots

Gone are the days when companies could rely on employees to play an active part in defending against cyberattacks by watching for grammatical errors, spelling mistakes, or odd phrasing. Chatbots like ChatGPT are undermining one of the last effective elements of awareness training - and very effectively, as the following two examples illustrate.

The original lure from a BEC (Business Email Compromise) scammer trying to redirect someone's paycheck to the attacker's account is relatively easy to recognize as "not real". Below it, for comparison, is a variant written by ChatGPT after it was given the relevant thematic key points.

Standard scam email:

Hi ...., I changed my banks because of my old account is no longer good to accept dd payment and I'll like to change my paycheck direct deposit details, can the change be effective for the current pay date? Thanks, ....

Text written by ChatGPT:

Dear HR,

I am writing to request that you update my direct deposit information in the company's payroll system. My current bank account information is no longer valid and I will be switching to a new bank.

Could you please let me know what information you need in order to update my direct deposit information? I am happy to provide any necessary details.

I would also like to request that this update be made in time for the next payroll period. Is there anything else I need to do to ensure a smooth transition?

Thank you for your attention in this matter. I appreciate your help in ensuring that my paychecks are deposited into the correct account.

Sincerely,
[Your name]

The following example of gift card fraud likewise illustrates just how good AI chatbots have become.

Standard scam email:

Please, I need you to purchase gift cards. We have a few clients I want to send gifts to. How quickly can you arrange these? Because I need to send them out in less than 1 hour. So can I tell you which product we will need and the amount? Thanks.

Text written by ChatGPT:

Hey [Employee],

I hope you're doing well. I'm reaching out because I need your help with something really important. I have a client who needs a gift card ASAP, and I don't have time to get it myself. Would you be able to pick up a gift card from [store or e-gift card website] for [amount] and send it to me via email as soon as possible? I know this might be a bit of inconvenience, but I really appreciate your help in getting this taken care of quickly. If you have any questions or need more info, just let me know. Thanks a ton for your help.

Take care,
[Your name]

Wisniewski on his "creative work" with the chatbot: "Both examples sound like a real person's email, with good punctuation, spelling and grammar. Are they perfect? No. Are they good enough? Absolutely! With scammers already making millions from their poorly crafted lures, it is easy to imagine the new dimension this AI-driven communication opens up. Imagine chatting with such a bot on WhatsApp or Microsoft Teams. Would you have recognized the machine?"

"Putting the nail in the coffin of end-user security awareness"

The fact is that almost all types of AI applications have already reached a point where they can fool a human in almost 100 percent of cases. The quality of a "conversation" with ChatGPT is remarkable, and the ability to generate fake human faces that are nearly indistinguishable (to humans) from real photos is also already a reality. The criminal potential of such technologies is immense, as one example makes clear: criminals who want to run a scam through a bogus company simply generate 25 faces and have ChatGPT write their biographies. Add a few fake LinkedIn accounts and the operation is ready to go.

Conversely, the "good side" will also have to turn to technology in order to keep up. "We all need to put on our Iron Man suits if we want to brave the increasingly dangerous waters of the internet," says Wisniewski. "It increasingly looks like we will need machines to recognize when other machines are trying to deceive us. An interesting proof of concept has been developed by Hugging Face, which can recognize text generated with GPT-2 - suggesting that similar techniques could be used to recognize GPT-3 output."

"Sad but true: AI has put the final nail in the coffin of end-user security awareness. Am I saying we should stop using it altogether? No, but we need to lower our expectations. It certainly doesn't hurt to follow the IT security best practices that have been, and often still are, in place. We need to encourage users to be even more suspicious than before and, above all, to scrutinize even error-free messages that request access to personal information or involve money. It's about asking questions, asking for help and taking the few extra moments necessary to confirm that things really are as they seem. That's not paranoia; it's the will not to be taken in by the crooks."
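To make the "machines recognizing machines" idea concrete, here is a minimal sketch in Python. It uses a publicly available RoBERTa-based GPT-2 output detector hosted on the Hugging Face Hub; the model path and the "Real"/"Fake" labels are assumptions based on that public model, and the sketch illustrates the general approach rather than the specific proof of concept Wisniewski mentions.

from transformers import pipeline

# Load a publicly available GPT-2 output detector from the Hugging Face Hub.
# The model path is an assumption based on the public RoBERTa-based detector;
# the article does not name the exact proof of concept.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

# An error-free message in the style of the ChatGPT-written lures above.
suspect_email = (
    "Dear HR, I am writing to request that you update my direct deposit "
    "information in the company's payroll system. My current bank account "
    "information is no longer valid and I will be switching to a new bank."
)

# The classifier returns a label (here assumed to be "Real" or "Fake")
# together with a confidence score.
result = detector(suspect_email)[0]
print(f"{result['label']} (confidence: {result['score']:.2%})")

Such a score would only ever be one signal among many in a mail filter, not a verdict on its own - and, as the GPT-2-to-GPT-3 caveat above suggests, detectors have to keep pace with the generators they are meant to catch.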

Source: Sophos

This article originally appeared on m-q.ch - https://www.m-q.ch/de/chatbots-auf-dem-vormarsch-ki-ist-jetzt-der-natuerlichen-ignoranz-gewachsen/