Best Practices for AI and Automation
This guide provides comprehensive recommendations for working effectively with AI tools and creating maintainable automation workflows.
🤖 AI Best Practices
Designing Effective Prompts
To get the best results when using our AI systems, consider these practices when designing your prompts:
- Be specific and clear: The more specific your prompt is, the better results you'll get
- Provide context: Include relevant information that helps the model understand the context
- Use examples: Examples help the model understand the format and style you expect
- Iterate and refine: Don't expect perfect results on the first try; refine your prompts based on the results you get
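The practices above can be sketched as a small prompt-builder helper. This is an illustrative example only; the function and field names (`buildPrompt`, `task`, `context`, `examples`) are hypothetical, not part of any specific API:

```javascript
// Hypothetical prompt builder: assembles specificity, context, and
// examples (the practices above) into a single prompt string.
function buildPrompt({ task, context, examples = [] }) {
  const parts = [
    `Task: ${task}`,                       // be specific and clear
    context ? `Context: ${context}` : '',  // provide context
  ];
  for (const ex of examples) {             // use examples to show format/style
    parts.push(`Example input: ${ex.input}\nExample output: ${ex.output}`);
  }
  return parts.filter(Boolean).join('\n\n');
}

const prompt = buildPrompt({
  task: 'Summarize the ticket below in one sentence.',
  context: 'The summary will be shown in a Slack notification.',
  examples: [
    { input: 'Login page returns 500', output: 'Users cannot log in due to a server error.' },
  ],
});
```

Iterating then means adjusting the task wording, context, or examples and comparing results.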
Managing Tasks with AI
Our AI-based task management tools work best when you:
- Clearly define the objectives of each task
- Provide specific acceptance criteria
- Specify the task type (bug, feature, improvement, etc.)
- Review and edit the generated descriptions
AI Integration in Workflows
When incorporating AI into automation workflows:
- Always validate AI-generated content before using it in production systems
- Implement fallback mechanisms for when AI services are unavailable
- Monitor AI usage and costs to optimize performance
- Store AI decisions and reasoning for audit purposes
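A fallback mechanism for AI service outages can be as simple as a retry wrapper that returns a safe default. This is a minimal sketch; `callModel` and `fallbackValue` are placeholders for your actual AI call and a safe default response:

```javascript
// Sketch of a fallback wrapper for AI calls: retry a few times, then
// return a safe default and log the failure for audit purposes.
async function withFallback(callModel, fallbackValue, retries = 2) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await callModel();
    } catch (err) {
      if (attempt === retries) {
        // Service unavailable after all retries: fall back and record why.
        console.error('AI call failed, using fallback:', err.message);
        return fallbackValue;
      }
    }
  }
}
```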
🔄 Automation Workflow Best Practices
🎯 Objectives
Systematize the creation of workflows in n8n to be clear, maintainable, traceable, and scalable. This applies to both third-party integrations and internal automations.
🧱 General Structure of the Workflow
Every workflow should be divided into three main blocks:
- Input: Capture of inputs or triggers.
- Processing: Validations, logic, and data formatting.
- Output: Final deliveries such as notifications, writing to systems, or responses to APIs.
📏 Base Rules
1. ZTR (Zero Trust Relay)
Zero Trust Relay is a security approach that treats every data transfer point as a potential risk. Like a relay race where each runner must properly receive and pass the baton, ZTR ensures each step in the workflow properly validates and handles data.
Key principles:
- Always format inputs and outputs with `Code` or `Set` nodes.
- Never trust the incoming structure without validating or normalizing it.
- Treat each data transfer point as a security boundary that requires validation.
- Assume that data could be malformed or malicious at any point.
This approach helps prevent:
- Data corruption from unexpected formats
- Security vulnerabilities from malformed inputs
- Cascading errors through the workflow
- Integration issues between different systems
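In a `Code` node, ZTR-style normalization might look like the following sketch. The field names (`userId`, `email`) are illustrative; adapt them to your workflow's actual data:

```javascript
// ZTR-style normalization in a Code node: never trust the incoming
// structure. Reject non-objects and coerce every field to a known shape.
function normalizeInput(raw) {
  if (raw === null || typeof raw !== 'object') {
    throw new Error('Invalid input: expected an object');
  }
  return {
    userId: String(raw.userId ?? ''),
    email: typeof raw.email === 'string' ? raw.email.trim().toLowerCase() : '',
    receivedAt: new Date().toISOString(), // timestamp for traceability
  };
}
```

Every block downstream can then rely on a known, normalized structure instead of the raw trigger payload.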
2. Separation between blocks (Rules among rules)
A functional block is a group of nodes that work together to perform a specific task or function within the workflow. Each block should have a clear, single responsibility.
Examples of common functional blocks in n8n:
- Email Processing Block
  - Gmail Trigger node (watches for new emails)
  - Filter node (checks email subject/sender)
  - Set node (normalizes email data structure)

  Input Block → [Gmail] → [Filter] → [Set] → Next Block

- Error Handling Block
  - IF node (checks for error conditions)
  - Error Trigger node (catches workflow errors)
  - Slack node (sends error notifications)

  Previous Block → [IF] → [Error Trigger] → [Slack] → End

- Data Enrichment Block
  - HTTP Request node (calls external API)
  - Function node (processes API response)
  - Set node (standardizes data format)

  Previous Block → [HTTP Request] → [Function] → [Set] → Next Block

- Notification Block
  - Switch node (determines notification type)
  - Slack/Email/SMS nodes (sends notifications)
  - Set node (formats notification data)

  Previous Block → [Switch] → [Slack/Email/SMS] → [Set] → End
Best practices for functional blocks:
- Use a normalization node (`Code` or `Set`) between each functional block.
- Each block should have clear input and output data structures.
- Keep blocks focused on a single responsibility.
- Document the expected data format for each block.
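One way to document and enforce a block's expected data format is a small contract check in the normalization node between blocks. The contract below (`EMAIL_BLOCK_OUTPUT` and its field names) is an illustrative assumption, not a fixed convention:

```javascript
// Contract check between functional blocks: each block documents the
// fields it emits, and the normalization node verifies them before
// handing data to the next block.
const EMAIL_BLOCK_OUTPUT = ['subject', 'sender', 'body']; // documented contract (illustrative)

function assertBlockOutput(data, requiredFields) {
  const missing = requiredFields.filter((field) => !(field in data));
  if (missing.length > 0) {
    throw new Error(`Block contract violated, missing fields: ${missing.join(', ')}`);
  }
  return data;
}
```

Failing fast here surfaces integration problems at the block boundary instead of deep inside the next block.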
3. Use of Execution Data
- Store important data such as:
  - User IDs
  - Timestamps
  - Emails
  - Tenant IDs
- This allows debugging and traceability without searching external logs.
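A sketch of attaching that execution data to items as they pass through a `Code` node. The `_meta` field name is an illustrative convention, not an n8n built-in:

```javascript
// Attach traceability metadata to an item in a Code node so a run can be
// debugged from its execution data alone, without external logs.
function withExecutionData(item, { userId, tenantId }) {
  return {
    ...item,
    _meta: {
      userId,                               // who triggered the run
      tenantId,                             // which tenant it belongs to
      timestamp: new Date().toISOString(),  // when this step ran
    },
  };
}
```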
4. Descriptive names and comments
- All nodes must have clear and consistent names.
- Include explanatory comments in nodes with complex logic.
5. Logs and error handling
- Use `Catch` nodes for expected errors.
- Log important events for operational visibility.
For comprehensive error tracking and monitoring, we recommend using Sentry in your workflows:
Error Capture Strategy:
- Configure the Sentry DSN in your environment variables
- Use Function nodes to format error data for Sentry
- Include contextual information in error reports:
```javascript
// Example of Sentry error formatting in a Function node
return {
  error: item.error,
  level: 'error',
  tags: {
    workflow_id: $workflow.id,
    tenant_id: item.tenant_id,
    environment: $env.environment,
  },
  extra: {
    input_data: item.json,
    error_context: item.error_details,
  },
};
```
Common Error Scenarios to Monitor:
- API Integration failures
- Data validation errors
- Authentication/Authorization issues
- Timeout errors
- Rate limit exceeded
Best Practices:
- Group similar errors using Sentry fingerprinting
- Set appropriate error severity levels
- Include relevant breadcrumbs for error context
- Configure alert rules based on error frequency and impact
- Link errors to specific workflow runs for easier debugging
Example Error Handling Block:
[Error Trigger] → [Function: Format Error] → [HTTP Request: Sentry] → [Slack Notification]
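The "Function: Format Error" step of that block could be sketched as below. This is a simplified assumption about the event shape; the field names mirror the formatting example above, but consult Sentry's event payload documentation for the exact ingestion format:

```javascript
// Builds the error event body that the HTTP Request node would send to
// Sentry. Context field names (workflowId, tenantId, ...) are illustrative.
function buildSentryEvent(error, context) {
  return {
    message: error.message,
    level: 'error',
    tags: {
      workflow_id: context.workflowId,
      tenant_id: context.tenantId,
      environment: context.environment,
    },
    extra: {
      input_data: context.inputData,
    },
    timestamp: new Date().toISOString(),
  };
}
```

Building the payload in its own function keeps the HTTP Request node a plain transport step, which makes the block easier to test.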
6. Parameterization
- Avoid hardcoding sensitive or repetitive data.
- Use configuration nodes or reusable environment variables.
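A sketch of reading configuration from environment variables instead of hardcoding it. The variable names are illustrative; in an n8n `Code` node you would read from `$env` rather than `process.env`:

```javascript
// Read configuration from environment variables instead of hardcoding.
// Failing fast on missing variables surfaces misconfiguration at startup.
function getConfig() {
  const required = ['API_BASE_URL', 'NOTIFY_CHANNEL']; // illustrative names
  const missing = required.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
  return {
    apiBaseUrl: process.env.API_BASE_URL,
    notifyChannel: process.env.NOTIFY_CHANNEL,
  };
}
```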
📝 Expected Documentation for Each Workflow
- Objective of the workflow.
- Description of the blocks (input, processing, output).
- Applied rules (ZTR, execution data, separation between blocks, etc.).
- Examples of using `Execution Data`, if applicable.
- Details of validations, logs, and error handling.
🔗 Integration Guidelines
When combining AI and automation workflows:
- Data Flow Validation: Ensure AI outputs are properly validated before being passed to automation systems
- Error Handling: Implement specific error handling for AI service failures
- Performance Monitoring: Track both AI response times and automation execution times
- Version Control: Maintain versions of both AI prompts and automation workflows
- Testing: Test AI-automation integrations with various input scenarios
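Data flow validation between the AI and automation sides can be sketched as a schema check on the AI output before handoff. The task schema below (`title`, `type`, and the allowed type values) is an illustrative assumption based on the task fields mentioned earlier:

```javascript
// Validate AI-generated output before passing it to automation.
// Returns a result object instead of throwing, so the workflow can
// branch into a fallback or review path on failure.
function validateAiTask(output) {
  let parsed;
  try {
    parsed = typeof output === 'string' ? JSON.parse(output) : output;
  } catch {
    return { ok: false, error: 'AI output is not valid JSON' };
  }
  const allowedTypes = ['bug', 'feature', 'improvement']; // illustrative schema
  if (typeof parsed.title !== 'string' || parsed.title.length === 0) {
    return { ok: false, error: 'Missing task title' };
  }
  if (!allowedTypes.includes(parsed.type)) {
    return { ok: false, error: `Unknown task type: ${parsed.type}` };
  }
  return { ok: true, task: parsed };
}
```

Only tasks that pass validation continue into the automation; everything else is routed to error handling or human review.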
This framework allows any team member to interpret, maintain, and scale both AI tools and automation workflows consistently.