Cybersecurity and AI: The Bad
Artificial intelligence is being pushed to new limits with the release of programs like ChatGPT. AI and cybersecurity have intersected for years. Last month, we talked about how cybersecurity professionals can use these programs to help protect against scams. This month, however, we will focus on the dark side of AI and cybersecurity.
- Scam phone calls
A mother in Arizona was terrified when she received a call from an unknown number and heard her daughter's voice crying on the other end. She then heard a man demanding a $1 million ransom in exchange for her daughter's safety. The mother kept the man on the phone while a friend confirmed that her daughter was safe at home. But how did she hear her daughter's voice on the phone?
Scammers are now using AI software to create voice clones. These programs used to need large samples to create a duplicate voice; now they can work from a recording as short as three seconds. Criminals scan social media for voice clips they can duplicate, and they can typically find enough material on these platforms to create a convincing scam.
- Creating new malware strains
A security researcher experimenting with ChatGPT asked the software to write malicious code. While the AI software is supposed to have filters in place to prevent malicious use, the researcher was able to get around them, and the software produced a piece of malware. Of course, the code would still need to be tested and deployed, but it took only seconds to create potentially malicious code.
Experts say this AI-created code can be harder for anti-malware programs to detect. It can also modify itself when security updates render it ineffective. Hackers have been using AI for years to create attacks, but it is becoming easier every day.
- Targeted cyber attacks
AI can also be used to create convincing, targeted phishing attacks. As in the voice scam discussed earlier, AI programs can imitate human behavior and generate convincing text messages or emails. For example, AI could read through 20 messages sent by one person and craft a new message based on the language that person typically uses. This could make a malicious request appear normal to an unsuspecting user.
It is important that we continue to be vigilant when it comes to our cybersecurity. Remember the cybersecurity principles to help keep yourself safe—even as software evolves.
Cybersecurity shorts
Apple warns its users of a potential scam. Apple recently issued a warning to iPhone users about suspicious texts and explains what their next steps should be. The tech company also launched a webpage dedicated to protecting its users from scammers. These types of attacks are often the starting point of a more advanced attack because they can grant attackers access to your apps and data while posing as a legitimate source. Read more about Apple's alert here.
Oakland, California, hit with a second ransomware attack. The ransomware gang Play, which was linked to the February attack on the City of Oakland, California, published a second trove of stolen municipal data earlier this month. The dump contained about 600 gigabytes of files and potentially exposed sensitive information on thousands of city employees. The city may now face legal fallout from the attack because the dumps include police disciplinary records and other rosters of city employees.
Google revealed global spyware campaigns targeting Android and Apple devices. Google's Threat Analysis Group recently revealed two "limited and highly targeted" spyware campaigns that took advantage of zero-day vulnerabilities, as well as known but unpatched security holes, to undermine protections on Android and Apple iOS devices and in Google Chrome. These revelations came days after the US government announced an executive order prohibiting federal agencies from using commercial spyware that could present a national security risk. You can read more about it here.
Microsoft reveals AI-powered security. This month, Microsoft moved to make generative AI a practical tool for thwarting cyberattacks with the release of Microsoft Security Copilot. Security Copilot tailors generative AI to cybersecurity by combining GPT-4 with Microsoft's own security-focused AI model. This article outlines five things you should know about Microsoft Security Copilot.
Could a TikTok ban negatively impact the nation's cybersecurity? TikTok is not the first app to be scrutinized over the potential exposure of US user data, but it is the first widely used app that the US government has proposed banning over privacy concerns. So far, the debate has focused on whether TikTok should be banned, with little to no discussion of the cybersecurity effects a ban could have. This cybersecurity researcher and professor at Rochester Institute of Technology explains the potential risks the US faces if the ban goes forward.
Software updates
Apple: Apple released emergency updates this month for two security issues impacting iPhones and iPads. Both issues are actively being exploited, so you should update your devices as soon as possible. Learn more here.
Microsoft: This month's Microsoft update fixes 100 security flaws, seven of which are considered critical. You can learn more here.
Source: Horsesmouth