GoSentrix

AI Security

Articles and resources about AI Security

46 posts in this category

The AI Bill of Rights Explained

The AI Bill of Rights is a framework for developing and using artificial intelligence (AI) technologies in a way that puts people's basic civil rights first.

Wiz Experts Team
AI Security

AI Data Classification: Definition and Process Explained

AI data classification is the process of using machine learning to automatically sort and label data based on its content and sensitivity.

Wiz Experts Team
AI Security

How to Secure AI Models, Data, Pipelines, and Agents in Modern Enterprises

To manage risks associated with AI, organizations need a strategic and well-coordinated security approach that extends traditional cybersecurity measures to the unique needs of AI.

GoSentrix Security Team
AI Security, Security Best Practices

AI runtime security in the cloud: Secure your AI when it matters most

AI runtime security safeguards your AI apps, models, and data during active operation. Going beyond traditional security's focus on static pre-deployment analysis, runtime security monitors AI behavior at inference, while models actively process user requests and sensitive data.

Wiz Experts Team
AI Security

AI Data Security: Key Principles and Best Practices

AI data security is a specialized practice at the intersection of data protection and AI security that's aimed at safeguarding data used in AI and machine learning (ML) systems.

Wiz Experts Team
AI Security

AI-BOM: Building an AI Bill of Materials

An AI bill of materials (AI-BOM) is a complete inventory of all the assets in your organization's AI ecosystem. It documents datasets, models, software, hardware, and dependencies across the entire lifecycle of AI systems—from initial development to deployment and monitoring.

Wiz Experts Team
AI Security

What is LLM Jacking?

LLM jacking is an attack technique that cybercriminals use to manipulate and exploit an enterprise's cloud-based LLMs (large language models).

Wiz Experts Team
AI Security

Data Poisoning: Current Trends and Recommended Defense Strategies

Data poisoning threatens the cloud, especially now that 70% of cloud environments use AI services. Learn about the top threats and how to protect your organization.

Wiz Experts Team
AI Security

LLM Security for Enterprises: Risks and Best Practices

LLMs like GPT and other foundation models come with significant risks if not properly secured. From prompt injection attacks to training data poisoning, the potential vulnerabilities are manifold and far-reaching.

Wiz Experts Team
AI Security

What is a Prompt Injection Attack?

A prompt injection attack is an AI security threat in which an attacker manipulates the input prompt to a natural language processing (NLP) system in order to influence its output.

Wiz Experts Team
AI Security

AI Security: Using AI Tools to Protect Your AI Systems

AI security involves using AI tools for cybersecurity and protecting your AI systems themselves. Learn how to do both to mitigate evolving AI security risks.

Wiz Experts Team
AI Security

AI Governance: 85% of Orgs Use AI, but Security Lags

AI governance is trailing behind adoption, leaving organizations vulnerable to emerging threats. Learn best practices for securing your cloud environment.

Wiz Experts Team
AI Security

Will AI Replace Cybersecurity? Exploring AI's Evolving Role in Security

The short answer is no, AI is not expected to replace cybersecurity or take cybersecurity jobs. It will, however, augment cybersecurity with new tools, methods, and frameworks.

Wiz Experts Team
AI Security

AI-Powered SecOps: A Brief Explainer

In this article, we'll discuss the benefits of AI-powered SecOps, explore its game-changing impact across various SOC tiers, and look at emerging trends reshaping the cybersecurity landscape.

Wiz Experts Team
AI Security

AI Inventory: Map AI Systems, Data, and Risk

Wiz Experts Team
AI Security

AI Code Security Explained

Wiz Experts Team
AI Security, Application Security

AI Security Solutions in 2025: Tools to secure AI

In this guide, we'll help you navigate the rapidly evolving landscape of AI security best practices and show how AI security posture management (AI-SPM) acts as the foundation for scalable, proactive AI risk management.

Wiz Experts Team
AI Security

ChatGPT Security for Enterprises: Risks and Best Practices

ChatGPT security is the process of protecting an organization from the compliance, brand image, customer experience, and general safety risks that ChatGPT introduces into applications.

Wiz Experts Team
AI Security

The Threat of Adversarial AI

Adversarial artificial intelligence (AI), or adversarial machine learning (ML), is a type of cyberattack where threat actors corrupt AI systems to manipulate their outputs and functionality.

Wiz Experts Team
AI Security

How to Secure Autonomous and Semi-Autonomous AI Systems in the Enterprise

AI Agent Security Best Practices

GoSentrix Security Team
AI Security

The Impact of AI in Software Development

AI-assisted software development integrates machine learning and AI-powered tools into your coding workflow to help you build, test, and deploy software without wasting resources.

Wiz Experts Team
AI Security, Application Security

AI/ML in Kubernetes Best Practices: The Essentials

Our goal with this article is to share the best practices for running complex AI tasks on Kubernetes. We'll talk about scaling, scheduling, security, resource management, and other elements that matter to seasoned platform engineers and folks just stepping into machine learning in Kubernetes.

Wiz Experts Team
AI Security

7 Serious AI Security Risks and How to Mitigate Them

There are many sneaky AI security risks that could impact your organization. Learn practical steps to protect your systems and data while still leveraging AI's benefits.

Wiz Experts Team
AI Security

What is AI Red Teaming?

Traditional security testing isn't enough to deal with AI's expanded and complex attack surface. That's why AI red teaming—a practice that actively simulates adversarial attacks in real-world conditions—is emerging as a critical component in modern AI security strategies and a key contributor to the AI cybersecurity market growth.

Wiz Experts Team
AI Security

AI CNAPP: Unified Cloud and AI Security

Wiz Experts Team
AI Security, CNAPP

AI Compliance in 2025: Definition, Standards, and Frameworks

AI compliance standards are changing fast, yet 85% of organizations still use AI tools. Get best practices and frameworks to protect your cloud environment.

Wiz Experts Team
AI Security

Securing Agentic AI: What Cloud Teams Need to Know

Agentic AI security protects AI systems that autonomously make decisions, use tools, and take action in live environments. Agentic AI doesn't just answer questions—it acts on them.

Wiz Experts Team
AI Security

Generative AI Security: Risks & Best Practices

Generative AI (GenAI) security is an area of enterprise cybersecurity that zeroes in on the risks and threats posed by GenAI applications. To reduce your GenAI attack surface, you need a mix of technical controls, policies, teams, and AI security tools.

Wiz Experts Team
AI Security

AWS AI Security: Securing AI workloads on AWS

AWS offers a complete, scalable suite for AI that covers everything from data prep to model deployment, making it easier for developers to innovate quickly.

Wiz Experts Team
AI Security

The EU Artificial Intelligence Act: A tl;dr

In this post, we'll bring you up to speed on why the EU put this law in place, what it involves, and what you need to know as an AI developer or vendor, including best practices to simplify compliance.

Wiz Experts Team
AI Security

How AI Is Transforming Modern Application and Cloud Security — and What Organizations Must Know Before Adopting It

An AI vulnerability scanner is a tool that uses artificial intelligence to find and prioritize security weaknesses based on real risk.

GoSentrix Security Team
AI Security, Vulnerability Management

What is Shadow AI? Why It's a Threat and How to Embrace and Manage It

Shadow AI is the unauthorized use or implementation of AI that is not controlled by, or visible to, an organization's IT department.

Wiz Experts Team
AI Security

AI Vulnerability Management Explained

Wiz Experts Team
AI Security, Vulnerability Management

NIST AI Risk Management Framework: A tl;dr

The NIST AI Risk Management Framework (AI RMF) is a guide designed to help organizations manage AI risks at every stage of the AI lifecycle—from development to deployment and even decommissioning.

Wiz Experts Team
AI Security

AI Threat Hunting: Benefits, Use cases, and Limitations

Wiz Experts Team
AI Security, Threat Intel

AI Cyberattacks: How attackers target AI, and use AI against you

Wiz Experts Team
AI Security

Dark AI Explained

Dark AI involves the malicious use of artificial intelligence (AI) technologies to facilitate cyberattacks and data breaches. Dark AI includes both accidental and strategic weaponization of AI tools.

Wiz Experts Team
AI Security

AI SAST: Smarter Static Application Security Testing

Wiz Experts Team
AI Security, Application Security

AI Threat Detection Explained

AI threat detection uses advanced analytics and AI methodologies such as deep learning (DL) and natural language processing (NLP) to assess system behavior, identify abnormalities and potential attack paths, and prioritize threats in real time.

Wiz Experts Team
AI Security

The role of Kubernetes in AI/ML development

In this blog post, you'll discover how Kubernetes plays a crucial role in AI/ML development. We'll explore containerization's benefits, practical use cases, and day-to-day challenges, as well as how Kubernetes security can protect your data and models while mitigating potential risks.

Wiz Experts Team
AI Security

AI Guardrails: Safety Controls for Responsible AI Use

Wiz Experts Team
AI Security

What is AI-SPM? [AI Security Posture Management]

AI-SPM (AI security posture management) is a new and critical component of enterprise cybersecurity that secures AI models, pipelines, data, and services.

Wiz Experts Team
AI Security

Using AI for Incident Response: A Practical Overview

Wiz Experts Team
AI Security, Detection and Response

How AI is Transforming Cloud Security (and how you can leverage it)

AI is transforming cloud security operations by enabling real-time threat detection, automated response, and predictive risk analysis, helping teams stay ahead of attackers.

Wiz Experts Team
AI Security

AI Risk Management: Essential AI SecOps Guide

AI risk management is a set of tools and practices for assessing and securing artificial intelligence environments. Because of the non-deterministic, fast-evolving, and deep-tech nature of AI, effective AI risk management and SecOps require more than just reactive measures.

Wiz Experts Team
AI Security

7 AI Security Tools to Prepare You for Every Attack Phase

Learn the main advantages and limitations of 7 popular AI security tools. Plus, see the top criteria for choosing a tool to secure your AI and ML applications.

Wiz Experts Team
AI Security