Hi folks,
I’m seeing consistent failures with the `@replit/object-storage` JS SDK when using `uploadFromStream`.
What’s happening
- Small streams (e.g., 1 MB) are fully read by the SDK, then fail during finalize with `StreamRequestError: Error: Error code undefined`.
- Larger streams fail mid-stream after exactly 4 MB processed.
- Falling back to `uploadFromFilename` (temp-file path) works reliably on the same data.

This is on a standard Node.js Replit environment using the latest `@replit/object-storage`. The Replit status page shows no outage.
Minimal repro
To rule out browser/network issues, I used a synthetic Readable and a byte-counter Transform to confirm how many bytes the SDK consumed before failing:
```ts
import { Client } from '@replit/object-storage';
import { Readable, Transform } from 'stream';

async function uploadSynthetic(sizeMB: number) {
  const client = new Client();
  // Bucket init via env as usual (omitted here)
  // await (client as any).init(process.env.REPLIT_OBJECT_STORAGE_BUCKET_ID);

  // 1 MB chunk source
  let remaining = sizeMB;
  const src = new Readable({
    read() {
      if (remaining-- <= 0) return this.push(null);
      this.push(Buffer.alloc(1024 * 1024));
    }
  });

  // byte counter: how far did the SDK get before failing?
  let seen = 0;
  const tap = new Transform({
    transform(chunk, _enc, cb) { seen += chunk.length; cb(null, chunk); }
  });
  src.pipe(tap);

  try {
    await client.uploadFromStream(
      `debug/stream-${Date.now()}-${sizeMB}MB.bin`,
      tap as any,
      { contentType: 'application/octet-stream', contentLength: String(sizeMB * 1024 * 1024) } as any
    );
    console.log('OK, bytes seen:', seen);
  } catch (e) {
    console.error('FAILED, bytes seen:', seen, 'error:', (e as any)?.message || e);
  }
}

// Examples:
// await uploadSynthetic(1);  // reads 1,048,576 bytes then fails at finalize
// await uploadSynthetic(10); // fails after 4,194,304 bytes processed
```
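For completeness, the driver is nothing special, just a sequential async IIFE (a sketch; any top-level-await entry point does the same):

```ts
// Runs the two failing cases back to back (sequentially, so the
// uploads don't interleave).
(async () => {
  await uploadSynthetic(1);   // all bytes read, fails at finalize
  await uploadSynthetic(10);  // fails after exactly 4,194,304 bytes
})();
```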
Observed results
- `sizeMB=1` → `seen=1048576` (all bytes) → `StreamRequestError: Error code undefined`
- `sizeMB=10` → `seen=4194304` (exactly 4 MB) → same error
- The stack trace points into the SDK (e.g., `client.ts` around the write stream).
Workaround
I’ve implemented a feature flag:
- `OBJECT_STORAGE_STREAMING=false` → write the stream to `/tmp`, then call `uploadFromFilename` → works
- `OBJECT_STORAGE_STREAMING=true` → call `uploadFromStream` → fails as above

Downside: temp files tie the maximum upload size to available workspace disk and add extra I/O.
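For reference, a minimal sketch of the `false` path; the helper name `uploadViaTempFile` and the temp-file naming are mine, and error handling is trimmed to match the repro's style:

```ts
import { Client } from '@replit/object-storage';
import { createWriteStream } from 'fs';
import { unlink } from 'fs/promises';
import { pipeline } from 'stream/promises';
import { Readable } from 'stream';
import { tmpdir } from 'os';
import { join } from 'path';

// Spool the stream to a temp file, then use the file-based upload
// that works reliably. Costs one extra full disk write per upload.
async function uploadViaTempFile(client: Client, objectName: string, src: Readable) {
  const tmpPath = join(tmpdir(), `upload-${Date.now()}-${Math.random().toString(36).slice(2)}.bin`);
  try {
    await pipeline(src, createWriteStream(tmpPath));
    await client.uploadFromFilename(objectName, tmpPath);
  } finally {
    await unlink(tmpPath).catch(() => {}); // best-effort cleanup
  }
}
```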
Ask
- Is there a known limitation or required usage pattern for `uploadFromStream` (chunk sizes, headers, `contentLength` handling)?
- Could the SDK be failing on multi-chunk streams (~4 MB boundary)? (A chunk-size probe I can run is sketched below.)
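For that second question, this is the variant I'd run with different chunk sizes; `uploadSyntheticChunked` is a hypothetical tweak of the repro above, same Client setup:

```ts
import { Client } from '@replit/object-storage';
import { Readable } from 'stream';

// Same synthetic source as the repro, but with a configurable chunk
// size (assumes chunkKB evenly divides totalMB * 1024). If a 10 MB
// upload in 256 KB chunks still dies at 4,194,304 bytes, the boundary
// is byte-based rather than chunk-count-based.
async function uploadSyntheticChunked(totalMB: number, chunkKB: number) {
  const client = new Client();
  let remaining = (totalMB * 1024) / chunkKB; // number of chunks to emit
  const src = new Readable({
    read() {
      if (remaining-- <= 0) return this.push(null);
      this.push(Buffer.alloc(chunkKB * 1024));
    }
  });
  await client.uploadFromStream(`debug/chunked-${Date.now()}-${chunkKB}KB.bin`, src);
}

// e.g. await uploadSyntheticChunked(10, 256);
```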
Happy to run additional tests or share more logs if helpful. Thanks!