SINGAPORE: The Cyber Security Agency of Singapore (CSA) said on Tuesday (Jul 30) that the development of artificial intelligence (AI) is a trend to watch, with malicious actors likely to benefit as the technology advances and becomes more widely used.
According to the agency, AI is being used to improve social engineering and surveillance, among other aspects of cyberattacks. This is likely to increase as ever-expanding data repositories become available to train AI models for better-quality output.
According to the agency’s Singapore Cyber Landscape 2023 report, released on Tuesday, bad actors are using generative AI to detect software vulnerabilities, circumvent biometric authentication, and create deepfake scams.
AI techniques are used to create deepfakes: images, audio, and video that have been altered or manipulated. Malicious actors have used deepfake calls, videos, and images for political or commercial ends.
After the agency and its partners examined a sample of phishing emails observed in 2023, they found that roughly 13 per cent contained AI-generated content.
CSA said these emails "had better sentence structure and were grammatically better". Additionally, phishing emails produced by AI or with AI assistance had "better flow and reasoning, intended to reduce logic gaps and enhance legitimacy". It added that because AI can adapt to any tone, malicious actors can exploit a wide range of emotions in their targets.