Large language models may boost the capabilities of novice hackers but offer little to threat actors past their salad days, a British government evaluation concludes
Carson Harris's curator insight,
May 2, 2023 12:04 PM
We already knew that A.I. could be used for misinformation and biased information, but this article shows how hackers are using the technology to their benefit. As with just about everything in this world, there are negative consequences, and this is just one of them for artificial intelligence. With time, though, I expect the developers behind A.I. will implement ways to prevent the technology from being an accomplice. There may also be government intervention to speed up the process or to force A.I. companies to stop the technology from being a partner in crime. For now, we have to be aware that hackers are using A.I. to generate content for emails, texts, and even websites. This content mimics the actual branding of real businesses so closely that it is hard to tell the difference, which makes it easy to fall into the trap of giving information to the wrong hands. I saw this coming, if I am being honest. The older generation is easily tricked by spam calls alone, but the younger generation is much harder to fool. We are aware of the issue and can spot the inconsistencies in hackers' schemes, so of course they have started using A.I. to spear-phish and make it harder for us to tell what is real from what is malicious.
Scott Fuhriman's curator insight,
April 25, 2019 4:05 PM
The cat-and-mouse game will continue, with organizations and malicious actors both using AI to protect and to attack. One impact of this endless game, as I see it, is an automated increase in its speed. Will the pace of an AI-run world lead to automated mistakes?