Documentation
Everything you need to know about FluxUpload - from installation to advanced usage.
Installation
Install FluxUpload using npm, yarn, or pnpm:
npm install fluxupload
yarn add fluxupload
pnpm add fluxupload
Requirements:
- Node.js >= 12.0.0
- No additional dependencies!
Quick Start
Here's a minimal example to get you started:
const http = require('http');
const FluxUpload = require('fluxupload');
const { LocalStorage } = require('fluxupload');
const uploader = new FluxUpload({
  storage: new LocalStorage({
    destination: './uploads',
    naming: 'uuid'
  })
});
const server = http.createServer(async (req, res) => {
  if (req.method === 'POST' && req.url === '/upload') {
    try {
      const result = await uploader.handle(req);
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify(result));
    } catch (error) {
      res.writeHead(400, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: error.message }));
    }
  } else {
    // Respond to anything other than POST /upload
    res.writeHead(404, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ error: 'Not found' }));
  }
});

server.listen(3000, () => {
  console.log('Server running on http://localhost:3000');
});
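To try it out, send a multipart/form-data request to the endpoint from any HTTP client. Here is a minimal browser-side sketch using fetch and FormData (the field name "file" is just an example):
// Browser-side sketch: upload a file picked from an <input type="file"> element
const input = document.querySelector('input[type="file"]');

async function upload() {
  const form = new FormData();
  form.append('file', input.files[0]); // field name is illustrative

  const response = await fetch('http://localhost:3000/upload', {
    method: 'POST',
    body: form // fetch sets the multipart boundary header automatically
  });
  console.log(await response.json());
}
If the page is served from a different origin than the upload server, you will also need to handle CORS.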
Configuration
FluxUpload accepts the following configuration options:
const uploader = new FluxUpload({
  // Request limits
  limits: {
    fileSize: 10 * 1024 * 1024,  // 10MB per file
    files: 10,                   // Max 10 files
    fields: 20,                  // Max 20 fields
    fieldSize: 1024 * 1024       // 1MB per field
  },

  // Validation plugins (run before storage)
  validators: [
    new QuotaLimiter({ maxFileSize: 10 * 1024 * 1024 }),
    new MagicByteDetector({ allowed: ['image/*'] })
  ],

  // Transform plugins (modify stream)
  transformers: [
    new StreamHasher({ algorithm: 'sha256' })
  ],

  // Storage destination(s)
  storage: new LocalStorage({ destination: './uploads' }),
  // Or multiple storages:
  // storage: [localStorage, s3Storage]

  // Callbacks
  onField: (name, value) => { },
  onFile: (fileInfo, stream) => { },
  onError: (error) => { },
  onFinish: (result) => { }
});
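The callbacks are plain functions. For example, onField can collect non-file form fields (such as a title or description) submitted alongside the upload; a short sketch:
const fields = {};

const uploader = new FluxUpload({
  storage: new LocalStorage({ destination: './uploads' }),
  onField: (name, value) => {
    fields[name] = value; // e.g. a "title" field sent with the file
  },
  onFinish: (result) => {
    console.log('Form fields received:', fields);
  }
});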
Architecture
FluxUpload uses a micro-kernel architecture:
HTTP Request (multipart/form-data)
          │
          ▼
┌───────────────────┐
│  MultipartParser  │  RFC 7578 compliant parser
└─────────┬─────────┘
          │
          ▼
┌───────────────────┐
│    Validators     │  QuotaLimiter, MagicByteDetector, etc.
└─────────┬─────────┘
          │
          ▼
┌───────────────────┐
│   Transformers    │  StreamHasher, StreamCompressor, etc.
└─────────┬─────────┘
          │
          ▼
┌───────────────────┐
│      Storage      │  LocalStorage, S3Storage, etc.
└───────────────────┘
Key principles:
- Stream-first: Data flows through pipes, never fully buffered
- Plugin-based: All functionality is modular and composable
- Error recovery: Automatic cleanup on failures
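The diagram maps directly onto the configuration object: validators run first, then transformers, then storage. A sketch wiring together the plugins mentioned above (this assumes StreamHasher is exported the same way as the other plugins):
const FluxUpload = require('fluxupload');
const {
  QuotaLimiter,
  MagicByteDetector,
  StreamHasher,
  LocalStorage
} = require('fluxupload');

const uploader = new FluxUpload({
  validators: [
    new QuotaLimiter({ maxFileSize: 10 * 1024 * 1024 }), // reject oversized files early
    new MagicByteDetector({ allowed: ['image/*'] })      // verify the real content type
  ],
  transformers: [
    new StreamHasher({ algorithm: 'sha256' })            // hash the stream as it flows
  ],
  storage: new LocalStorage({ destination: './uploads' })
});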
QuotaLimiter
Enforces file size limits with immediate stream abort:
const { QuotaLimiter } = require('fluxupload');
new QuotaLimiter({
  maxFileSize: 50 * 1024 * 1024,   // 50MB per file
  maxTotalSize: 200 * 1024 * 1024  // 200MB total per request
})
Why use QuotaLimiter?
- Aborts the stream immediately when a limit is exceeded (saves bandwidth)
- Helps mitigate denial-of-service attacks via oversized uploads
- Tracks total size across all files in a request
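Because validators run before storage, an oversized file never touches disk; the rejection surfaces as an error from handle(). A sketch of handling it in a request handler (the exact error shape is defined by the library; 413 is simply a reasonable status to return):
// Inside the request handler from the Quick Start example
try {
  const result = await uploader.handle(req);
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify(result));
} catch (error) {
  // Triggered when QuotaLimiter aborts an oversized upload (among other errors)
  res.writeHead(413, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ error: error.message }));
}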
MagicByteDetector
Verifies file types by checking binary signatures (magic bytes):
const { MagicByteDetector } = require('fluxupload');
new MagicByteDetector({
  allowed: ['image/jpeg', 'image/png', 'image/gif']
  // Or use wildcards:
  // allowed: ['image/*', 'application/pdf']
})
Security: Never trust file extensions or Content-Type headers. A malicious user can rename malware.exe to photo.jpg. MagicByteDetector checks the actual binary content.
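For example, to accept only real images regardless of filename or declared Content-Type (a sketch reusing the wildcard option shown above):
const uploader = new FluxUpload({
  validators: [
    // Files are identified by their magic bytes, not by extension or headers
    new MagicByteDetector({ allowed: ['image/*'] })
  ],
  storage: new LocalStorage({ destination: './uploads' })
});
// A file named photo.jpg that is actually an executable is rejected
// by the validator before it ever reaches storage.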
LocalStorage
Store files on the local filesystem with atomic writes:
const { LocalStorage } = require('fluxupload');
new LocalStorage({
  destination: './uploads',
  naming: 'uuid',           // 'uuid', 'original', 'timestamp', 'hash', 'slugify'
  createDirectories: true,
  fileMode: 0o644,
  dirMode: 0o755
})
Atomic Writes: Files are written to a temporary file first, then renamed into place, so a partially written file is never visible at its final path.
S3Storage
Upload directly to AWS S3 or S3-compatible services:
const { S3Storage } = require('fluxupload');
new S3Storage({
  bucket: 'my-bucket',
  region: 'us-east-1',
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  prefix: 'uploads/',
  acl: 'private',
  storageClass: 'STANDARD'
})
S3-Compatible Services:
// MinIO
new S3Storage({
  bucket: 'my-bucket',
  region: 'us-east-1',
  accessKeyId: 'minioadmin',
  secretAccessKey: 'minioadmin',
  endpoint: 'http://localhost:9000'
})

// DigitalOcean Spaces
new S3Storage({
  bucket: 'my-space',
  region: 'nyc3',
  accessKeyId: process.env.DO_ACCESS_KEY,
  secretAccessKey: process.env.DO_SECRET_KEY,
  endpoint: 'https://nyc3.digitaloceanspaces.com'
})
Observability
FluxUpload includes comprehensive built-in monitoring:
Structured Logging
const { getLogger } = require('fluxupload');
const logger = getLogger({
  level: 'info',
  format: 'json'
});
logger.info('Upload started', { fileCount: 3 });
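The logger pairs naturally with the uploader callbacks. A sketch, assuming the logger also exposes an error level alongside info:
const uploader = new FluxUpload({
  storage: new LocalStorage({ destination: './uploads' }),
  onError: (error) => {
    logger.error('Upload failed', { message: error.message });
  },
  onFinish: (result) => {
    logger.info('Upload finished', { result });
  }
});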
Prometheus Metrics
const express = require('express');
const { getCollector } = require('fluxupload');

// The endpoint examples below use an Express app
const app = express();
const metrics = getCollector();

// Expose metrics endpoint
app.get('/metrics', (req, res) => {
  res.type('text/plain');
  res.send(metrics.toPrometheus());
});
Health Checks
const { HealthCheck } = require('fluxupload');
const health = new HealthCheck();
health.registerStorageCheck('./uploads');

// Reuses the Express app from the metrics example
app.get('/health', async (req, res) => {
  const status = await health.check();
  res.json(status);
});
Custom Plugins
Create your own validators, transformers, or storage drivers:
const { Plugin } = require('fluxupload');
class MyValidator extends Plugin {
  constructor(options) {
    super('MyValidator');
    this.options = options;
  }

  async process(context) {
    const { fileInfo, stream, metadata } = context;

    // Validate file
    if (!this.isValid(fileInfo)) {
      throw new Error('Validation failed');
    }

    // Add metadata
    context.metadata.validated = true;
    return context;
  }

  // Example check (adapt to your own rules): require a non-empty filename.
  // The exact fields available on fileInfo are listed in the API reference.
  isValid(fileInfo) {
    return Boolean(fileInfo && fileInfo.filename);
  }

  async cleanup(context, error) {
    // Called on error for cleanup
  }
}
Use your plugin:
const uploader = new FluxUpload({
  validators: [
    new MyValidator({ /* options */ })
  ],
  storage: new LocalStorage({ destination: './uploads' })
});
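Transformers follow the same Plugin interface. One plausible shape, assuming a transformer may replace context.stream with a wrapped stream before returning the context (check the API reference for the exact contract), is a pass-through that counts bytes:
const { Transform } = require('stream');
const { Plugin } = require('fluxupload');

class ByteCounter extends Plugin {
  constructor() {
    super('ByteCounter');
  }

  async process(context) {
    let bytes = 0;
    const counter = new Transform({
      transform(chunk, encoding, callback) {
        bytes += chunk.length;   // count data as it streams through
        callback(null, chunk);   // pass the chunk along unchanged
      },
      flush(callback) {
        context.metadata.byteCount = bytes; // record the total in metadata
        callback();
      }
    });

    // Assumption: downstream plugins and storage read from context.stream
    context.stream = context.stream.pipe(counter);
    return context;
  }
}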
Need More Help?
Check out the complete API reference or explore working examples.