5 Emerging AI Threats Australian Cyber Pros Must Watch in 2025

By Planet Storyline | Last updated: January 13, 2026 | 6 min read

Australian cybersecurity professionals can expect threat actors to exploit artificial intelligence to diversify tactics and scale the volume of cyberattacks targeting organisations in 2025, according to security tech firm Infoblox.

Last year, cyber teams in APAC saw the first signs of AI being used to execute crimes like financial fraud, and some linked AI to a DDoS attack in Australia’s financial services sector.

This year, Australia’s cyber defenders can expect AI to be used for a new breed of cyber attacks:

  • AI cloning: AI could be used to create synthetic audio voices to commit financial fraud.
  • AI deepfakes: Convincing fake videos could lure victims to click or provide their details.
  • AI-powered chatbots: AI chatbots could become part of complex phishing campaigns.
  • AI-enhanced malware: Criminals could use LLMs to spit out remixed malware code.
  • Jailbreaking AI: Threat actors will use “dark” AI models without safeguards.

Infoblox’s Bart Lenaerts-Bergmans told Australian defenders in a webinar that they can expect an increase in the frequency and sophistication of attacks because more actors have access to AI tools and techniques.

  1. AI for cloning

Adversaries can use generative AI tools to create synthetic audio content that sounds realistic. The cloning process, which can be done quickly, leverages data available in the public domain, such as an audio interview, to train an AI model and generate a cloned voice.

SEE: Australian government proposes mandatory guardrails for AI

Lenaerts-Bergmans said cloned voices can exhibit only minor differences in intonation or pacing compared to the original voice. Adversaries can combine cloned voices with other tactics, such as spoofed email domains, to appear legitimate and facilitate financial fraud.
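
Defenders can counter the spoofed-domain half of this tactic with routine sender-domain checks. Below is a minimal sketch, assuming the dnspython package is installed (pip install dnspython); "example.com" is an illustrative domain. It warns when a domain publishes no SPF record or DMARC policy, which makes that domain easier to impersonate:

```python
# A minimal sketch, assuming the dnspython package (pip install dnspython);
# "example.com" is an illustrative domain, not a real target.
import dns.resolver

def txt_records(name: str) -> list[str]:
    """Return TXT records for a DNS name, or an empty list if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
        return [b"".join(r.strings).decode() for r in answers]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

def check_sender_domain(domain: str) -> None:
    """Warn when a sender domain publishes no SPF record or DMARC policy."""
    if not any(r.startswith("v=spf1") for r in txt_records(domain)):
        print(f"WARNING: {domain} publishes no SPF record")
    if not any(r.startswith("v=DMARC1") for r in txt_records(f"_dmarc.{domain}")):
        print(f"WARNING: {domain} publishes no DMARC policy")

check_sender_domain("example.com")
```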

  2. AI deepfakes

Criminals can use AI to create realistic deepfake videos of high-profile individuals, which they can use to lure victims into cryptocurrency scams or other malicious activities. The synthetic content can be used to more effectively social engineer and defraud victims.

Infoblox referenced deepfake videos of Elon Musk uploaded to YouTube accounts with millions of subscribers. Using QR codes, many viewers were directed to malicious crypto sites and scams. It took 12 hours for YouTube to remove the videos.
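
Since that scam funnel ran through QR codes, one defensive habit is decoding a code offline and inspecting the URL before anyone visits it. Here is a minimal sketch, assuming pyzbar (which needs the system zbar library) and Pillow are installed; the keyword list and filename are illustrative:

```python
# A minimal sketch, assuming pyzbar (which needs the system zbar library)
# and Pillow are installed; keyword list and filename are illustrative.
from urllib.parse import urlparse

from PIL import Image
from pyzbar.pyzbar import decode

SUSPICIOUS_HINTS = ("crypto", "giveaway", "wallet")  # illustrative keywords

def inspect_qr(image_path: str) -> None:
    """Decode QR codes in an image and flag URLs that deserve a second look."""
    for symbol in decode(Image.open(image_path)):
        url = symbol.data.decode()
        host = urlparse(url).hostname or ""
        flags = [h for h in SUSPICIOUS_HINTS if h in url.lower()]
        if not url.startswith("https://"):
            flags.append("non-HTTPS")
        print(f"QR resolves to {url} (host: {host})"
              + (f" -- review before visiting: {flags}" if flags else ""))

inspect_qr("video_frame.png")  # illustrative screenshot filename
```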


  3. AI-powered chatbots

Adversaries have begun using automated conversational agents, or AI chatbots, to build trust with victims and ultimately scam them. The technique mirrors how an enterprise might blend human-driven interaction with an AI chatbot to engage and “convert” a prospect.

One example of crypto fraud involves attackers using SMS to build relationships before incorporating AI chatbot elements to advance their scheme and gain access to a crypto wallet. Infoblox noted that warning signs of these scams include suspicious phone numbers and poorly designed language models that repeat answers or use inconsistent language.
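
That “repeat answers” warning sign can be turned into a crude heuristic. The sketch below uses only the Python standard library (the transcript and similarity threshold are illustrative, not tuned values) to measure how many bot replies are near-identical:

```python
# A rough heuristic sketch using only the standard library; the threshold
# and sample transcript are illustrative, not tuned values.
from difflib import SequenceMatcher

def near_duplicate_ratio(replies: list[str], threshold: float = 0.9) -> float:
    """Fraction of reply pairs that are near-identical, a crude tell
    for scripted or poorly configured chatbots."""
    pairs = dupes = 0
    for i in range(len(replies)):
        for j in range(i + 1, len(replies)):
            pairs += 1
            if SequenceMatcher(None, replies[i], replies[j]).ratio() >= threshold:
                dupes += 1
    return dupes / pairs if pairs else 0.0

transcript = [  # illustrative scam-like replies
    "Trust me, this platform is fully regulated and safe.",
    "You should act now, the bonus window is closing.",
    "Trust me, this platform is fully regulated and safe!",
]
print(f"near-duplicate ratio: {near_duplicate_ratio(transcript):.2f}")
```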

  4. AI-enhanced malware

Criminals can now use LLMs to automatically rewrite and mutate existing malware to bypass security controls, making it more difficult for defenders to detect and mitigate. The rewriting can be repeated until the code no longer triggers any detections.

SEE: The alarming state of Australian data breaches in 2024

For example, a JavaScript framework used in drive-by download attacks could be fed to an LLM, which can then modify the code by renaming variables, inserting filler code, or removing whitespace so it slips past typical signature-based detection.
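
The defensive implication is that hash-based indicators are brittle against this kind of mutation. A small standard-library demonstration (with harmless placeholder code, not malware) shows that renaming a single variable yields a completely different SHA-256 digest:

```python
# Harmless placeholder snippets standing in for a script and its
# LLM-rewritten variant: one variable renamed, behaviour identical.
import hashlib

original = "let total = 0; for (const x of items) { total += x; }"
mutated  = "let sum = 0; for (const x of items) { sum += x; }"

for label, code in (("original", original), ("mutated ", mutated)):
    digest = hashlib.sha256(code.encode()).hexdigest()
    print(f"{label}: sha256 = {digest}")
# The digests share nothing, so a blocklist keyed on the first hash
# never matches the trivially mutated variant.
```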

  5. Jailbreaking AI

Criminals are bypassing the safeguards of mainstream LLMs like ChatGPT or Microsoft Copilot to generate malicious content at will. These “jailbroken” AI models already include the likes of FraudGPT, WormGPT, and DarkBERT, which have no built-in legal or ethical guardrails.

Lenaerts-Bergmans explained that cybercriminals can use these AI models to generate malicious content on demand, such as creating phishing pages or emails that mimic legitimate services. Some are available on the dark web for just $100 per month.

Expect detection and response capabilities to become less effective

Lenaerts-Bergmans said AI threats may leave security teams with intelligence gaps, as existing tactical indicators like file hashes become effectively ephemeral.

He said “detection and response capabilities will drop in effectiveness” as AI tools are adopted.

Infoblox said detecting criminals at the DNS level allows cyber teams to gather intelligence earlier in the cybercriminal’s workflow, potentially stopping threats before they escalate to an active attack.
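
What DNS-level detection can look like in practice: a minimal sketch that screens query logs against a deny-list of known-bad domains. The file names and log format here are illustrative; production deployments would use curated threat-intelligence feeds or protective DNS instead.

```python
# A minimal sketch of DNS-level screening. "bad_domains.txt" (one domain per
# line) and the log format (queried name in the first field) are illustrative.

def load_denylist(path: str) -> set[str]:
    """Load known-bad domains, one per line, lowercased."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def flag_queries(log_path: str, denylist: set[str]) -> None:
    """Print DNS queries whose name or any parent domain is on the deny-list."""
    with open(log_path) as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue
            name = fields[0].rstrip(".").lower()
            parts = name.split(".")
            # Check the full name and every parent, e.g. a.b.evil.com -> evil.com.
            suffixes = {".".join(parts[i:]) for i in range(len(parts))}
            if suffixes & denylist:
                print(f"deny-listed domain queried: {line.strip()}")

flag_queries("dns_queries.log", load_denylist("bad_domains.txt"))
```

Because the domain lookup happens before any payload is delivered, a match here fires earlier in the attack chain than file- or payload-based detection.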
