File Size Limits
How File Handling Works
TimeProof takes a fundamentally different approach to file handling than most cloud services. Your files are never uploaded to TimeProof’s servers. Instead, the SHA-256 hash is computed locally in your browser, and only that hash — a fixed-length 64-character string — is sent to the server.
This means the “file size limit” question works differently here. There’s no upload bandwidth constraint, no server storage limit, and no file size cap imposed by the API. The only practical limit is your device’s ability to read and hash the file.
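The key property is that a SHA-256 digest is always the same length, no matter how large the input is. A minimal sketch with Python's standard `hashlib` (the byte string here is just a stand-in for file contents):

```python
import hashlib

data = b"any file contents, any size"
digest = hashlib.sha256(data).hexdigest()

# Regardless of input size, the hex digest is always 64 characters.
print(len(digest))  # 64
```

This 64-character string is the only file-derived data the server ever sees.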
Practical Size Guidelines
| File Size | Hashing Time | Experience |
|---|---|---|
| Up to 10 MB | Nearly instant | No noticeable delay |
| 10–100 MB | 1–3 seconds | Brief progress indicator |
| 100 MB–1 GB | 3–15 seconds | Progress bar visible |
| 1–5 GB | 15–60 seconds | Progress bar, wait for completion |
| 5 GB+ | 1–5 minutes | Depends on device; may stress older hardware |
These times are approximate and depend on your device’s CPU, available memory, and browser. Modern devices with adequate RAM handle multi-gigabyte files without issues.
Supported File Types
TimeProof works with any file type. The hashing algorithm operates on raw bytes — it doesn’t interpret or parse the file contents. Examples of files people timestamp:
| Category | File Types |
|---|---|
| Documents | PDF, DOCX, TXT, ODT, RTF |
| Images | JPEG, PNG, TIFF, RAW, HEIC, WebP |
| Video | MP4, MOV, AVI, MKV, ProRes |
| Audio | MP3, WAV, FLAC, AAC |
| Code | Source files, repositories (as ZIP), build artifacts |
| Design | PSD, AI, Figma exports, SVG |
| Engineering | CAD files, STEP, STL, DWG |
| Archives | ZIP, TAR, 7Z, RAR |
| Data | CSV, JSON, XML, databases, spreadsheets |
| Legal | Contracts, NDAs, agreements (any format) |
If your computer can read the file, TimeProof can timestamp it.
What Gets Sent to TimeProof
To be completely clear about what data leaves your machine:
| Data | Sent to server? |
|---|---|
| File contents | No — never uploaded |
| File hash (SHA-256) | Yes — 64 characters |
| File name | Yes — for display purposes |
| File size | Yes — for metadata |
| File type/extension | No — inferred from name |
The hash is a one-way function: it is computationally infeasible to reconstruct your file from its hash. Even if someone obtained the hash, they would learn nothing about the file’s contents.
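You can see the one-way behavior directly: two inputs that differ by a single character produce unrelated digests, so a hash leaks nothing about how similar two files are, let alone their contents:

```python
import hashlib

a = hashlib.sha256(b"contract v1").hexdigest()
b = hashlib.sha256(b"contract v2").hexdigest()

# A one-character change yields a completely unrelated digest.
print(a != b)  # True
```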
Browser Memory Considerations
Because hashing happens in the browser, very large files consume browser memory temporarily. Here’s how to handle large files:
Tips for Large Files
- Close unnecessary tabs — free up browser memory before hashing large files
- Use a modern browser — Chrome and Edge handle large files better than some alternatives
- One file at a time — if hashing multiple large files, process them sequentially rather than all at once
- Monitor the progress bar — the UI shows hashing progress so you know it’s working
If Hashing Fails
If a very large file causes the browser to run out of memory:
- Try a different browser (Chrome tends to handle large allocations well)
- Close other applications to free system memory
- If the file is extremely large (10 GB+), consider using the API with a local hashing tool instead of the browser interface
Local Hashing Alternative
For very large files, you can compute the hash locally and submit it via the API:
On macOS / Linux:

```sh
sha256sum largefile.bin
```

On Windows (PowerShell):

```powershell
Get-FileHash largefile.bin -Algorithm SHA256
```

Or in Python (3.8+ for the `:=` operator), reading the file in chunks so memory use stays constant:

```python
import hashlib

h = hashlib.sha256()
with open('largefile.bin', 'rb') as f:
    while chunk := f.read(8192):
        h.update(chunk)
print(h.hexdigest())
```
Then submit the hash via the API’s POST /api/timestamps endpoint with the file hash, name, and size.
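As a sketch of that submission (the base URL, payload field names, and any auth header here are assumptions for illustration, not the documented schema — check the API reference), using only the standard library:

```python
import json
import urllib.request

# Hypothetical payload — field names are illustrative.
payload = {
    "hash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    "name": "largefile.bin",
    "size": 10_737_418_240,
}

req = urllib.request.Request(
    "https://api.example.com/api/timestamps",  # placeholder base URL
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment with the real base URL and credentials
```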
Batch File Considerations
When timestamping multiple files in a batch:
- Each file is hashed individually
- All hashes are combined into a Merkle tree
- The Merkle root is what gets anchored on-chain
- Large batches (hundreds of files) work fine — the Merkle tree construction is efficient
The total batch size doesn’t affect the blockchain cost — whether you batch 2 files or 200, the on-chain transaction is the same size (one Merkle root).
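TimeProof’s exact tree construction isn’t specified here, but the general technique can be sketched in a few lines: hash leaves pairwise level by level (duplicating the last node on odd-sized levels, one common convention) until a single root remains. Whatever the batch size, the root is one 64-character hash:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(leaf_hashes: list) -> str:
    """Fold a list of hex leaf hashes into a single root via pairwise SHA-256."""
    level = leaf_hashes[:]
    while len(level) > 1:
        if len(level) % 2:               # duplicate last node on odd-sized levels
            level.append(level[-1])
        level = [
            sha256_hex(bytes.fromhex(level[i] + level[i + 1]))
            for i in range(0, len(level), 2)
        ]
    return level[0]

# 2 files or 200 — the root anchored on-chain is the same size.
leaves = [sha256_hex(f"file-{i}".encode()) for i in range(200)]
print(len(merkle_root(leaves)))  # 64
```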
API Constraints
While the file itself isn’t uploaded, the API does have some practical limits on the request payload:
| Constraint | Limit |
|---|---|
| Request body size | Standard HTTP limits |
| Files per request | Reasonable batch sizes |
| Hash format | 64-character lowercase hex (SHA-256) |
| Filename length | Standard filesystem limits |
These are API-level constraints on the metadata, not on the actual files.
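The one strict constraint above is the hash format. A client-side check for it is a one-line regex (this helper is illustrative, not part of any SDK):

```python
import re

HASH_RE = re.compile(r"[0-9a-f]{64}")

def is_valid_sha256_hex(value: str) -> bool:
    """Check the 64-character lowercase hex format the API expects."""
    return HASH_RE.fullmatch(value) is not None

print(is_valid_sha256_hex("e3b0c442" * 8))  # True — 64 lowercase hex characters
print(is_valid_sha256_hex("E3B0C442" * 8))  # False — uppercase is rejected
```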
Related Guides
- Creating Your First Timestamp — step-by-step timestamp creation
- Privacy and Your Files — how your data is protected
- Batch Timestamping — efficiently timestamping many files
- Troubleshooting Common Issues — fixing common problems
Use the live product for timestamping and verification.