# Configuration & Limits

## Configuration

All import settings are defined in `config/medusa.php` under the `import` key.
| Key | Env Variable | Default | Description |
|---|---|---|---|
| `batch_size` | `IMPORT_BATCH_SIZE` | 50 | Records processed per API call |
| `upload_concurrency` | `IMPORT_UPLOAD_CONCURRENCY` | 5 | Max concurrent browser-to-S3 file uploads |
| `presigned_url_ttl_minutes` | `IMPORT_PRESIGNED_URL_TTL` | 60 | TTL for presigned PUT URLs (minutes) |
| `retries` | `IMPORT_RETRIES` | 3 | Max server error retries before marking as failed |
| `rescue_after_minutes` | `IMPORT_RESCUE_AFTER_MINUTES` | 5 | Minutes before a stale import is considered stuck |
| `rescue_interval` | `IMPORT_RESCUE_INTERVAL` | 3 | Cron interval (minutes) for stale import rescue |
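Put together, the table corresponds to an `import` array along these lines (a sketch: key names mirror the table and the `env()` fallbacks are the defaults listed above; the real file may contain additional keys):

```php
<?php

// config/medusa.php — sketch of the import key; each value falls back
// to the documented default when its env variable is unset.
return [
    // ...
    'import' => [
        'batch_size'                => env('IMPORT_BATCH_SIZE', 50),
        'upload_concurrency'        => env('IMPORT_UPLOAD_CONCURRENCY', 5),
        'presigned_url_ttl_minutes' => env('IMPORT_PRESIGNED_URL_TTL', 60),
        'retries'                   => env('IMPORT_RETRIES', 3),
        'rescue_after_minutes'      => env('IMPORT_RESCUE_AFTER_MINUTES', 5),
        'rescue_interval'           => env('IMPORT_RESCUE_INTERVAL', 3),
    ],
];
```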
## Upload Limits
| Type | Max Size | Allowed Extensions |
|---|---|---|
| Content files | 500 MB | .pdf, .epub, .mp3, .jpg, .jpeg, .png |
| Metadata spreadsheet | 10 MB | .xlsx, .xls, .csv |
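Expressed as Laravel validation rules, the limits would look roughly like this (illustrative only; Laravel's `max` rule for files is in kilobytes, so 500 MB is 512000 KB and 10 MB is 10240 KB, but the actual rule definitions may differ):

```php
// Hypothetical rule sets mirroring the upload limits table.
$contentFileRules = ['required', 'file', 'max:512000', 'mimes:pdf,epub,mp3,jpg,jpeg,png'];
$spreadsheetRules = ['required', 'file', 'max:10240', 'mimes:xlsx,xls,csv'];
```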
## S3 Storage
Content files are uploaded directly from the browser to S3 via presigned PUT URLs.
Key pattern: `imports/{contentIntakeId}/{importId}/{sanitized_filename}`
- Filenames are sanitized (special characters replaced with underscores).
- Uses the `intake-s3` disk configured in `config/filesystems.php`.
- Presigned PUT URLs expire after `presigned_url_ttl_minutes` (default 60 minutes).
- Presigned GET URLs (generated at processing time for the Farfalla API) expire after 24 hours.
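The key pattern and URL lifetimes above can be sketched with Laravel's `Storage` facade (a sketch, not the actual implementation: the sanitization regex, variable names, and rule that only the listed characters survive are assumptions):

```php
use Illuminate\Support\Facades\Storage;

// Build the S3 key; special characters become underscores
// (the exact character whitelist here is an assumption).
$key = sprintf(
    'imports/%s/%s/%s',
    $contentIntakeId,
    $importId,
    preg_replace('/[^A-Za-z0-9._-]/', '_', $originalFilename)
);

// Presigned PUT URL for the direct browser upload, with the TTL from config.
['url' => $uploadUrl, 'headers' => $headers] = Storage::disk('intake-s3')
    ->temporaryUploadUrl($key, now()->addMinutes(
        config('medusa.import.presigned_url_ttl_minutes')
    ));

// Presigned GET URL handed to the Farfalla API at processing time (24 h TTL).
$downloadUrl = Storage::disk('intake-s3')->temporaryUrl($key, now()->addHours(24));
```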
## Queue

- Queue name: `import-processing`
- Middleware: `WithoutOverlapping`, keyed by import ID, expires after 5 minutes, non-blocking (`dontRelease`)
- Job timeout: 120 seconds
- Max tries: 3
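A job class with these settings might look like the following sketch (the class internals, the `Import` model, and the constructor are assumptions; the middleware calls follow Laravel's `WithoutOverlapping` API):

```php
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\Middleware\WithoutOverlapping;

class ProcessImportBatch implements ShouldQueue
{
    use Queueable;

    public $queue = 'import-processing';
    public $timeout = 120; // seconds
    public $tries = 3;

    public function __construct(public Import $import) {}

    public function middleware(): array
    {
        return [
            (new WithoutOverlapping($this->import->id)) // keyed by import ID
                ->expireAfter(5 * 60)                   // lock expires after 5 minutes
                ->dontRelease(),                        // non-blocking: drop overlapping dispatches
        ];
    }
}
```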
Stale Import Rescue
The import:rescue-stale command runs on a schedule (every rescue_interval minutes, default 3) and re-dispatches ProcessImportBatch for imports that:
- Have status
on-queue - Have
discoveredrecords remaining - Were last updated more than
rescue_after_minutesago (default 5)
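The schedule registration might look like this sketch (the file location and exact wiring are assumptions; only the command name and interval come from this document):

```php
// In the console schedule (e.g. routes/console.php in recent Laravel versions):
use Illuminate\Support\Facades\Schedule;

Schedule::command('import:rescue-stale')
    ->cron(sprintf('*/%d * * * *', config('medusa.import.rescue_interval')));
```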
## Known Limitations

- No update tracking — The system creates new content records in Farfalla. Duplicate `external_id` values are detected against previously imported records in Medusa and flagged during validation, but the system does not check Farfalla for pre-existing content.
- No skipped/ignored concept — Records either succeed (`done`) or fail. There is no intermediate "skipped" status for rows that were intentionally excluded.
- No incremental re-import — Once an import is `done` or `failed`, it cannot be partially re-run. The user must create a new import.
- Physical products skip file matching — Rows with `file_type = physical` bypass file URL validation entirely, since they don't require a digital file.
- Single API endpoint — All records go to Farfalla's `/api/v3/content/bulk` regardless of file type or tenant configuration.
- No progress granularity for uploads — Upload progress is tracked per-file (uploading/completed/failed) but not as a percentage within a single file from Livewire's perspective.