AI Agents and Risk Management in NetSuite

AI agents and LLMs pose risks like prompt injection and hallucination. Learn mitigation strategies and controls within NetSuite.

·3 min read·View Oracle Docs

AI agents and large language models (LLMs) can bring significant benefits to organizations, but they also introduce various risks. This article outlines the key risks associated with their use, available security controls in NetSuite, and recommended mitigation strategies, making it crucial for end users and account administrators alike.

Key Risks of AI Agents and LLMs

What are the Key Risks Inherent in LLMs?

  1. Prompt Injection: This occurs when malicious actors embed hidden instructions within content processed by the LLM, leading the AI agent to execute unauthorized commands or inadvertently disclose sensitive data. Such instructions can be embedded in various formats, including PDF documents and web pages.

  2. Hallucination: In this context, hallucination refers to the generation of seemingly accurate but entirely fabricated information by the LLM.

Consequences of These Risks

Both prompt injection and hallucination can lead to serious consequences, including:

  • Unintended Actions: The AI agent might execute powerful MCP tool functions, such as making payments or granting approvals, without explicit user intent.
  • Data Corruption: The agent could inadvertently modify or delete data, causing potential data loss.
  • Sensitive Information Disclosure: There’s a risk of unauthorized access to sensitive data from NetSuite.

Security Controls in NetSuite

While prompt injection and hallucination are challenges inherent to LLMs and beyond NetSuite's control, the platform offers several security controls for users and administrators:

  • Access Management: Administrators control which users are granted access to MCP tools. By default, all users have no access, necessitating explicit permission.
  • Permissions: MCP tools operate under the same permissions as the user utilizing the external AI agent, ensuring that powerful functions can’t be executed by unauthorized users.
  • Function Limitations: MCP tools are limited in scope; they cannot perform HTTP requests to external servers or invoke Suitelets, among other restrictions.
  • Usage Tracking: All MCP tool activities are logged, providing traceability for the actions performed by the AI agent.
  • Authorization Control: During OAuth 2.0 flows, explicit user consent is obtained for each AI agent's actions.
  • Scoped Actions: End users can limit MCP tools available to an AI agent by defining the tools' namespace.

Steps to Enable External AI Agents in NetSuite

Enabling external AI agents requires coordinated actions:

For Account Administrators

  • Assign MCP Permissions: Grant necessary permissions to users using the feature.
  • Install MCP Tools: Make sure to install the relevant MCP tools defining actions for AI agents.

For End Users

  • Configure the AI Agent: Set up and authorize the external AI agent to operate under your NetSuite account.

Mitigation Strategies for Risks

While risks from prompt injection and hallucination cannot be fully eliminated, several strategies can help mitigate these risks:

  1. Vendor and Tool Trustworthiness: Utilize only trusted AI agents and find out how they manage prompt injection and hallucination concerns.
  2. Access Management: Limit MCP permissions strictly to necessary users and roles. Regularly review permissions and avoid granting high-level access to these tools.
  3. Scope Limitation: Enable only essential MCP tools for your business needs, and start with a limited scope when trying new agents or tools.
  4. User Awareness: Train end users on the risks associated with AI agents and best practices for security.
  5. Technical Safeguard: Use MCP tools in secure environments, particularly those connecting to local or external systems.

Compliance Risks

Organizations must be aware of regulatory limitations or restrictions that could affect the use of AI tools in various scenarios, particularly in fields like HR or finance.

Source: This article is based on Oracle's official NetSuite documentation.

Key Takeaways

  • AI agents can pose risks such as prompt injection and hallucination.
  • NetSuite includes robust security controls for managing AI agent access.
  • Training end users and limiting permissions can help mitigate potential risks.
  • Complying with regulatory standards is essential for safe AI tool usage.

Frequently Asked Questions (4)

Are there specific permissions required to enable AI agents in NetSuite?
Yes, account administrators need to grant MCP permissions to users utilizing the AI feature to ensure that only authorized users can access and control the AI agent functionalities.
How can administrators mitigate the risks associated with prompt injection and hallucination in NetSuite AI agents?
Administrators can mitigate these risks by limiting MCP permissions to only necessary users, regularly reviewing access levels, and utilizing only trusted AI agents that manage prompt injection and hallucination effectively.
Does NetSuite allow AI agents to make HTTP requests to external servers?
No, MCP tools within NetSuite are restricted and cannot perform HTTP requests to external servers. This is part of the security controls in place to prevent unauthorized data access or modification.
What logging capabilities does NetSuite provide for monitoring AI agent activities?
NetSuite logs all activities performed by MCP tools, providing administrators with traceability for the actions executed by AI agents. This helps track and audit actions for compliance and security purposes.
Source: Risks Oracle NetSuite Help Center. This article was generated from official Oracle documentation and enriched with additional context and best practices.

Was this article helpful?

More in Security

View all Security articles →