Conversation
@gerrod3 your changes affect the […]. Also, can we have a backport to 2.19?
```python
# if more chunks
if range_header:
    # The entire chunk is read into memory here before it is saved.
    chunk = ContentFile(chunk.read())
```
isn't this getting a larger chunk?
@rochacbruno Read my initial comment on the different chunked-upload scenarios. If a PATCH chunk upload is sent that is larger than we can handle and doesn't fall under the special podman case, then we are out of luck; someone needs to fix their client to adhere to the spec or set their chunking size to a smaller value.

The most we can do is introduce a setting that checks the size of each chunk and, if it is over a certain threshold, return a 4XX error saying the chunk is too large for a PATCH request and that the client should use a smaller chunk size.
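A minimal sketch of that guard, assuming a hypothetical `MAX_PATCH_CHUNK_SIZE` setting; the setting name, message, and choice of 413 as the 4XX status are illustrative, not existing pulp_container behavior:

```python
from django.conf import settings
from django.http import HttpResponse


def check_chunk_size(request):
    """Reject a PATCH chunk that exceeds the (hypothetical) configured limit."""
    # MAX_PATCH_CHUNK_SIZE is a hypothetical setting, not an existing one.
    max_size = getattr(settings, "MAX_PATCH_CHUNK_SIZE", None)
    content_length = int(request.headers.get("Content-Length") or 0)
    if max_size is not None and content_length > max_size:
        return HttpResponse(
            "Chunk too large for a PATCH request; use a smaller chunk size.",
            status=413,  # Payload Too Large
        )
    return None  # caller proceeds with the normal chunked-upload path
```

Returning a 413 before the chunk is read into memory gives the client an actionable signal to retry with a smaller chunk size instead of letting the server OOM.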
Backport to 2.19: ✅ backport PR created, backported as #2252 (🤖 @patchback)
Backport to 2.26: ✅ backport PR created, backported as #2253 (🤖 @patchback)
Backport to 2.27: ✅ backport PR created, backported as #2254 (🤖 @patchback)
According to the spec there are two ways to perform monolithic blob pushes:

1. A single POST to `/v2/<name>/blobs/uploads/?digest=<digest>` with the entire blob in the request body.
2. A POST to open an upload session, followed by a PUT with the digest and the entire blob in the request body.

There's an unofficial third way that podman can apparently do, according to our comments:

3. A POST to open an upload session, a single PATCH carrying the entire blob, then a PUT with the digest to finalize.
The code should now call our special large-chunk handler for all three cases; we were forgetting case number 2. Note that, depending on the client, the server can still go OOM if ridiculously large chunks are sent through the normal chunked-upload path, since we read the entire chunk into memory before saving it. There is nothing we can do about that here because part of the code lives in pulpcore, but honestly, what is a client thinking sending such large chunks!
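A hypothetical sketch of the dispatch described above; the handler names are illustrative stubs, not the actual pulp_container functions:

```python
def save_large_chunk(request):
    """Hypothetical handler that streams the request body to disk (stub)."""


def save_regular_chunk(request):
    """Hypothetical handler that buffers a single chunk in memory (stub)."""


def handle_blob_upload(request, digest=None, range_header=None):
    # Case 1: a single POST with ?digest=<digest> and the whole blob as body.
    # Case 2: a PUT with ?digest=<digest> and the whole blob as body
    #         (the case that was previously missed).
    # Case 3: podman's unofficial flow, a PATCH carrying the entire blob
    #         without a Content-Range header.
    if digest is not None or not range_header:
        return save_large_chunk(request)  # avoid buffering the blob in memory
    # Normal chunked upload: the chunk is still read fully into memory
    # before saving, so an absurdly large chunk can exhaust server memory.
    return save_regular_chunk(request)
```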