Configuration & Limits

Configuration

All import settings are defined in config/medusa.php under the import key.

| Key | Env Variable | Default | Description |
| --- | --- | --- | --- |
| `batch_size` | `IMPORT_BATCH_SIZE` | `50` | Records processed per API call |
| `upload_concurrency` | `IMPORT_UPLOAD_CONCURRENCY` | `5` | Max concurrent browser-to-S3 file uploads |
| `presigned_url_ttl_minutes` | `IMPORT_PRESIGNED_URL_TTL` | `60` | TTL for presigned PUT URLs (minutes) |
| `retries` | `IMPORT_RETRIES` | `3` | Max server-error retries before a record is marked failed |
| `rescue_after_minutes` | `IMPORT_RESCUE_AFTER_MINUTES` | `5` | Minutes before a stale import is considered stuck |
| `rescue_interval` | `IMPORT_RESCUE_INTERVAL` | `3` | Cron interval (minutes) for stale-import rescue |
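A minimal sketch of the corresponding `import` block in `config/medusa.php`, using the keys and defaults from the table above (any other top-level keys in that file are omitted here):

```php
<?php

// config/medusa.php — import settings (sketch; surrounding keys omitted)
return [
    'import' => [
        // Records processed per API call
        'batch_size' => env('IMPORT_BATCH_SIZE', 50),
        // Max concurrent browser-to-S3 file uploads
        'upload_concurrency' => env('IMPORT_UPLOAD_CONCURRENCY', 5),
        // TTL for presigned PUT URLs, in minutes
        'presigned_url_ttl_minutes' => env('IMPORT_PRESIGNED_URL_TTL', 60),
        // Max server-error retries before a record is marked failed
        'retries' => env('IMPORT_RETRIES', 3),
        // Minutes before a stale import is considered stuck
        'rescue_after_minutes' => env('IMPORT_RESCUE_AFTER_MINUTES', 5),
        // Cron interval (minutes) for the stale-import rescue command
        'rescue_interval' => env('IMPORT_RESCUE_INTERVAL', 3),
    ],
];
```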

Upload Limits

| Type | Max Size | Allowed Extensions |
| --- | --- | --- |
| Content files | 500 MB | `.pdf`, `.epub`, `.mp3`, `.jpg`, `.jpeg`, `.png` |
| Metadata spreadsheet | 10 MB | `.xlsx`, `.xls`, `.csv` |
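One way to express these limits as Laravel validation rules (a sketch — the rule names are standard Laravel, but the field names are assumptions; note that the `max` file rule is measured in kilobytes):

```php
// Hypothetical validation rules mirroring the upload limits above.
$contentFileRules = [
    'file' => [
        'required', 'file',
        'max:512000', // 500 MB, expressed in kilobytes
        'mimes:pdf,epub,mp3,jpg,jpeg,png',
    ],
];

$spreadsheetRules = [
    'spreadsheet' => [
        'required', 'file',
        'max:10240', // 10 MB
        'mimes:xlsx,xls,csv',
    ],
];
```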

S3 Storage

Content files are uploaded directly from the browser to S3 via presigned PUT URLs.

Key pattern:

```
imports/{contentIntakeId}/{importId}/{sanitized_filename}
```
  • Filenames are sanitized (special characters replaced with underscores).
  • Uses the intake-s3 disk configured in config/filesystems.php.
  • Presigned PUT URLs expire after presigned_url_ttl_minutes (default 60 min).
  • Presigned GET URLs (generated at processing time for the Farfalla API) expire after 24 hours.
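The upload flow above can be sketched with Laravel's filesystem facade. The sanitization regex and function name below are assumptions; `temporaryUploadUrl()` is Laravel's standard API for presigned S3 PUT URLs:

```php
use Illuminate\Support\Facades\Storage;

// Sketch: build the S3 key and a presigned PUT URL for a browser upload.
function presignedUploadUrl(int $contentIntakeId, int $importId, string $filename): array
{
    // Replace special characters with underscores (assumed pattern)
    $sanitized = preg_replace('/[^A-Za-z0-9._-]/', '_', $filename);

    // Key pattern from the docs above
    $key = "imports/{$contentIntakeId}/{$importId}/{$sanitized}";

    $ttl = config('medusa.import.presigned_url_ttl_minutes', 60);

    // Returns ['url' => ..., 'headers' => [...]] for the browser to PUT against
    return Storage::disk('intake-s3')
        ->temporaryUploadUrl($key, now()->addMinutes($ttl));
}
```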

Queue

  • Queue name: import-processing
  • Middleware: WithoutOverlapping — keyed by import ID, expires after 5 minutes, non-blocking (dontRelease)
  • Job timeout: 120 seconds
  • Max tries: 3
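The queue settings above could be declared on the job class roughly as follows (a sketch — the constructor signature is an assumption; `WithoutOverlapping`, `expireAfter()`, and `dontRelease()` are Laravel's standard queue middleware API):

```php
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\Middleware\WithoutOverlapping;

class ProcessImportBatch implements ShouldQueue
{
    use Queueable;

    public int $timeout = 120; // seconds
    public int $tries = 3;

    public function __construct(public int $importId)
    {
        $this->onQueue('import-processing');
    }

    public function middleware(): array
    {
        // Keyed by import ID; the lock expires after 5 minutes;
        // dontRelease() drops overlapping jobs instead of re-queueing them.
        return [
            (new WithoutOverlapping($this->importId))
                ->expireAfter(300)
                ->dontRelease(),
        ];
    }
}
```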

Stale Import Rescue

The import:rescue-stale command runs on a schedule (every rescue_interval minutes, default 3) and re-dispatches ProcessImportBatch for imports that:

  • Have status on-queue
  • Have discovered records remaining
  • Were last updated more than rescue_after_minutes ago (default 5)
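Internally, the rescue pass could look something like this sketch (the model name and the `remaining_records` column are assumptions inferred from the criteria above; the scheduling call uses Laravel's `Schedule` facade):

```php
use Illuminate\Support\Facades\Schedule;

// Re-dispatch ProcessImportBatch for imports matching the rescue criteria.
$staleMinutes = config('medusa.import.rescue_after_minutes', 5);

Import::query()
    ->where('status', 'on-queue')
    ->where('remaining_records', '>', 0) // discovered records remaining
    ->where('updated_at', '<', now()->subMinutes($staleMinutes))
    ->each(fn (Import $import) => ProcessImportBatch::dispatch($import->id));

// Scheduled every rescue_interval minutes (default 3):
Schedule::command('import:rescue-stale')
    ->cron('*/' . config('medusa.import.rescue_interval', 3) . ' * * * *');
```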

Known Limitations

  • No update tracking — The system creates new content records in Farfalla. Duplicate external_id values are detected against previously imported records in Medusa and flagged during validation, but the system does not check Farfalla for pre-existing content.
  • No skipped/ignored concept — Records either succeed (done) or fail. There is no intermediate "skipped" status for rows that were intentionally excluded.
  • No incremental re-import — Once an import is done or failed, it cannot be partially re-run. The user must create a new import.
  • Physical products skip file matching — Rows with file_type = physical bypass file URL validation entirely, since they don't require a digital file.
  • Single API endpoint — All records go to Farfalla's /api/v3/content/bulk regardless of file type or tenant configuration.
  • No progress granularity for uploads — Upload progress is tracked per file (uploading/completed/failed); Livewire does not receive percentage progress within a single file.