1. ingest large data resources
2. run tasks with the resulting data models
For "catalog minimal" we need to make sure large files can be processed with the current backend (the frontend does not yet have to handle them).
- clarify the largest file size we should reasonably expect, based on the data imported during the last year
- make sure the test files are not only large but also structurally complex
- check which backend services need to be called for a test, and in which order
- try to build a script that calls these services in the correct order (see the sketch after this list)
- check whether this can be done conveniently
- run the tests with the selected files, using the script
- a list of the test files with a description of each (size/type/expected import frequency) and the measured import, transformation, and export times
- which of the times stay constant, increase linearly, or grow worse than linearly with file size? (see the scaling-check sketch below)
- conclude whether the measured times are acceptable
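
A minimal sketch of such a driver script, assuming the services are reachable over HTTP and are called in the order import → transform → export. The base URL, endpoint paths, and test file names are placeholders and need to be replaced once the actual service interfaces are clarified:

```python
# Sketch of a test driver: call the backend services in the assumed order
# import -> transform -> export for each test file and record the elapsed
# time of each step. All URLs, endpoints, and file names are placeholders.
import csv
import time
from pathlib import Path

import requests  # assumed HTTP client; the services might also be reachable via a CLI

BASE_URL = "http://localhost:8080"   # placeholder backend address
TEST_FILES = [                        # placeholder test files
    Path("testdata/medium.xml"),
    Path("testdata/large_complex.xml"),
]


def timed_post(url: str, **kwargs) -> float:
    """Call one backend service and return the elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    response = requests.post(url, timeout=3600, **kwargs)
    response.raise_for_status()
    return time.perf_counter() - start


def run_test(path: Path) -> dict:
    """Run import, transformation and export for one file and collect the timings."""
    with path.open("rb") as fh:
        t_import = timed_post(f"{BASE_URL}/import", files={"file": fh})
    t_transform = timed_post(f"{BASE_URL}/transform", json={"source": path.name})
    t_export = timed_post(f"{BASE_URL}/export", json={"source": path.name})
    return {
        "file": path.name,
        "size_bytes": path.stat().st_size,
        "import_s": round(t_import, 2),
        "transform_s": round(t_transform, 2),
        "export_s": round(t_export, 2),
    }


if __name__ == "__main__":
    results = [run_test(p) for p in TEST_FILES]
    # write the measurements to a CSV so they can be attached to the file list above
    with open("timings.csv", "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=list(results[0]))
        writer.writeheader()
        writer.writerows(results)
    print("wrote timings.csv")
```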
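
For the constant/linear/worse question, one possible heuristic (an assumption, not a prescribed method) is to fit the slope of log(time) against log(file size) over several measured runs: a slope near 0 suggests roughly constant time, near 1 roughly linear, and clearly above 1 worse than linear. A sketch that reads the timings.csv produced by the driver above:

```python
# Rough scaling check on timings.csv; needs measurements for several
# different file sizes to give a meaningful slope.
import csv
import math
import statistics


def loglog_slope(sizes: list[float], times: list[float]) -> float:
    """Least-squares slope of log(time) over log(size)."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(t) for t in times]
    x_mean, y_mean = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den


with open("timings.csv") as fh:
    rows = list(csv.DictReader(fh))

sizes = [float(r["size_bytes"]) for r in rows]
for column in ("import_s", "transform_s", "export_s"):
    slope = loglog_slope(sizes, [float(r[column]) for r in rows])
    if slope < 0.2:
        verdict = "roughly constant"
    elif slope <= 1.2:
        verdict = "roughly linear"
    else:
        verdict = "worse than linear"
    print(f"{column}: slope {slope:.2f} -> {verdict}")
```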