A10 Unveils AI Firewall and Performance Tools at Interop Tokyo
Organizations worldwide are rapidly deploying new AI applications and AI-capable data centers to automate operations and gain efficiency. These deployments demand ultra-high performance for AI and large language model (LLM) inference environments to deliver real-time responses, along with new cybersecurity solutions to protect them.
To help organizations prepare and safeguard these new AI environments, A10 Networks (NYSE: ATEN) is showcasing new AI firewall and predictive performance capabilities at the upcoming Interop Tokyo conference, “AI Steps into Reality,” June 11-13, 2025.
Preventing, Detecting and Mitigating AI and LLM-level Cyber Threats
A10 is launching new AI firewall functionality that can be deployed in front of APIs or URLs serving large language models – whether a proprietary LLM or one built on top of a commercial service such as OpenAI or Anthropic. Built on an edge-optimized design with GPU-enhanced hardware, these capabilities protect AI LLMs at high performance and can be deployed in any infrastructure as an incremental security layer.
These capabilities prevent, detect and mitigate AI-level threats by allowing customers to validate their AI inference models against known vulnerabilities and helping to remediate them with A10’s patented LLM protection methods. The firewall identifies AI-level threats such as prompt injection and sensitive data disclosure by inspecting request and response traffic at the prompt level and applying the security policies needed for mitigation.
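A10's LLM protection methods are patented and not described in detail here, but the general idea of prompt-level inspection can be illustrated with a minimal sketch. The patterns, function names, and redaction logic below are hypothetical examples, not A10's implementation:

```python
import re

# Hypothetical injection signatures; real products use far richer detection.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (your )?system prompt", re.IGNORECASE),
]

# Hypothetical sensitive-data pattern (credit-card-like digit runs).
SENSITIVE_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def screen_request(prompt: str) -> bool:
    """Return True if the incoming prompt should be blocked before it reaches the LLM."""
    return any(p.search(prompt) for p in INJECTION_PATTERNS)

def screen_response(text: str) -> str:
    """Redact sensitive-looking data from the model's outbound response."""
    return SENSITIVE_PATTERN.sub("[REDACTED]", text)
```

In practice such a filter sits inline on the request/response path, applying policy (block, redact, log) per rule, rather than hard-coding actions as above.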
Providing Real-time Experience for AI and LLM Inference Environments
A10 continues to deliver superior performance and resiliency for AI and LLM-powered applications by offloading computationally intensive tasks such as TLS/SSL decryption, caching, and traffic-routing optimization, and by providing actionable insights to improve network availability and performance.
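To make the caching offload concrete, here is a minimal sketch of how a front-end device can avoid repeated inference work by caching responses keyed by prompt. The class and method names are hypothetical; a real ADC would cache at the HTTP layer with TTLs and policy controls:

```python
import hashlib

class InferenceCache:
    """Illustrative in-memory cache keyed by a hash of the prompt.

    Hypothetical sketch only, not A10's implementation.
    """

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def _key(self, prompt: str) -> str:
        # Hash the prompt so the key size is fixed regardless of prompt length.
        return hashlib.sha256(prompt.encode()).hexdigest()

    def get_or_compute(self, prompt: str, infer):
        """Return a cached response, or call infer(prompt) and cache the result."""
        key = self._key(prompt)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        result = infer(prompt)
        self._store[key] = result
        return result
```

Each cache hit skips an expensive model invocation entirely, which is where the latency and capacity savings come from.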
New capabilities act as an early warning system, enabling early identification of network performance problems. They help pinpoint near-term congestion or capacity shortfalls so customers can act proactively before an issue becomes severe, avoiding unplanned downtime and planning for maximum network performance. The predictive performance capabilities run on A10's GPU-powered appliances, enabling faster processing that can rapidly analyze enormous volumes of data and surface anomalies in advance.
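The early-warning idea can be sketched with a simple example: fit a linear trend to recent utilization samples and estimate when a link will hit capacity. This toy extrapolation is purely illustrative; A10's predictive analytics are proprietary and far more sophisticated:

```python
def hours_until_saturation(samples, capacity):
    """Estimate hours until utilization reaches capacity via a least-squares line.

    samples: list of (hour, utilization) measurements.
    Returns None when utilization is flat or declining (no warning needed).
    Illustrative sketch only, not A10's predictive model.
    """
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    denom = sum((x - mean_x) ** 2 for x, _ in samples)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in samples) / denom
    if slope <= 0:
        return None  # no upward trend
    intercept = mean_y - slope * mean_x
    # Solve capacity = slope * t + intercept for t.
    return (capacity - intercept) / slope
```

For example, a link growing from 50% to 65% utilization over three hours would be projected to saturate ten hours from the first sample, giving operators time to add capacity or reroute traffic.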
Collectively, these AI infrastructure and security capabilities simplify management, add intelligence to recognize threats more effectively, and help deliver the best customer experience.
“Enterprises are training and deploying AI and LLM inference models on-premises and in the cloud at a rapid pace. New capabilities must be created to address three major challenges of these new environments: latency, security and operational complexity. With over 20 years of experience in securing and delivering applications, we are expanding our capabilities to meet these needs and provide resilience, high performance and security for AI and LLM infrastructures,” said Dhrupad Trivedi, president and CEO, A10 Networks.
For More Information
• Learn more about A10’s strategy for securing AI environments
Follow us on Social Media
• Visit our blog
• Connect with us on LinkedIn and Facebook