User Awareness for AI Agents in NetSuite Security

Understand the risks of AI agents and LLMs in NetSuite and how to manage user awareness effectively.

·3 min read·View Oracle Docs

While AI agents and large language models (LLMs) offer significant benefits, their use can introduce additional risks to organizations. This topic is intended for both end users who interact with AI agents and account administrators responsible for configuring NetSuite and managing this technology within the organization.

This article outlines key risks associated with the use of external AI agents and LLMs, the security controls available in NetSuite, and suggested mitigation strategies. Keep in mind that this list may not be exhaustive or universally applicable, as both technology and associated risks continue to evolve.

What Are the Risks?

The following are key risks inherent to the use of LLMs:

  • Prompt Injection: This occurs when a malicious actor embeds hidden instructions within content processed by the LLM. Such actions can lead to unintended results, like executing unauthorized commands or leaking sensitive data.
  • Hallucination: This term refers to when the LLM generates information that seems accurate but is, in fact, incorrect or entirely fabricated.

Both prompt injection and hallucination can lead to the following consequences:

  • Unintended Actions: The AI agent may run powerful MCP tool functions without the user's explicit intent, potentially leading to significant issues.
  • Corruption of Data: The AI agent might modify or delete data in unintended ways, resulting in data loss or integrity problems.
  • Sensitive Information Disclosure: Unauthorized parties may gain access to sensitive data from NetSuite.

What Controls Are Available in NetSuite?

While some weaknesses linked to prompt injection and hallucination are out of NetSuite's control, the platform offers several controls to help mitigate these risks:

  • Access Control: Account administrators can manage permissions for MCP tools, ensuring that only authorized users can access them.
  • Permission Scopes: MCP tools operate under the same permissions as the NetSuite user interfacing with the external AI agent. This control limits functionalities as needed.
  • Logging: All MCP tool usage is logged for accountability, ensuring there’s traceability for actions taken by AI agents.
  • Explicit User Consent: The OAuth 2.0 authorization flow requires each user to provide explicit consent for every AI agent.

How to Enable External AI Agents in NetSuite?

External AI agents are disabled by default in NetSuite. To enable them, both account administrators and end users need to take specific actions:

Steps for Account Administrators

  • Assign MCP Permissions: Grant appropriate users access to MCP permissions for interacting with AI agents.
  • Install MCP Tools: Make sure relevant MCP tools are installed, which define the actions that AI agents can perform.

Note: Actions available to AI agents are strictly limited to the functionality exposed by the installed MCP tools, ensuring that only users with permissions can call these functions.

Steps for End Users

  • Configure an External AI Agent: Set up the external AI agent within your NetSuite account.
  • Authorize the Agent: Ensure the external AI agent is authorized to act on your behalf.

What Are Effective Mitigation Strategies?

To address the inherent risks of using LLMs, consider the following mitigation strategies:

Vendor and Tool Trustworthiness

  • Only utilize trusted AI agents and tools, and verify their approach to managing risks associated with prompt injection and hallucination.

Access Management

  • Limit Permissions: Give MCP permissions only to those who need it, avoiding high-level permissions for such roles.
  • Role Separation: Create distinct user roles for different MCP tools to narrow the scope of external AI agent capabilities.

Scope Limitation

  • Install only those MCP tools that meet essential business needs, starting with a limited scope when testing new agents.

User Awareness

  • Train end users on the risks and suggested best practices for safely interacting with external AI agents.

Conclusion

User awareness is critical for mitigating risks associated with the use of external AI agents and LLMs in NetSuite. Training end users to recognize potential threats and implement safety practices can significantly enhance security.

Frequently Asked Questions (4)

What specific user permissions are required to interact with external AI agents in NetSuite?
Account administrators must assign MCP permissions to users who need to interact with external AI agents. This ensures that only authorized users can use these tools.
How can account administrators ensure that MCP tool usage by AI agents is traceable?
NetSuite logs all MCP tool usage for accountability, allowing administrators to trace actions taken by AI agents.
Are external AI agents enabled by default in NetSuite?
No, external AI agents are disabled by default in NetSuite. Both account administrators and end users need to enable them by assigning MCP permissions and authorizing the agents, respectively.
What are some effective strategies for limiting the scope of external AI agents in NetSuite?
Effective strategies include using trusted tools, limiting MCP permissions to necessary users only, creating distinct user roles for different MCP tools, and starting with a limited scope when testing new agents.
Source: User Awareness Oracle NetSuite Help Center. This article was generated from official Oracle documentation and enriched with additional context and best practices.

Was this article helpful?

More in Security

View all Security articles →