File Size Limits

How File Handling Works

TimeProof takes a fundamentally different approach to file handling than most cloud services. Your files are never uploaded to TimeProof’s servers. Instead, the SHA-256 hash is computed locally in your browser, and only that hash — a fixed-length 64-character string — is sent to the server.

This means the “file size limit” question works differently here. There’s no upload bandwidth constraint, no server storage limit, and no file size cap imposed by the API. The only practical limit is your device’s ability to read and hash the file.
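The fixed-length property is what makes this work: the digest is always 64 hex characters, whether the input is a few bytes or many gigabytes. A quick Python illustration:

```python
import hashlib

# SHA-256 always yields a 32-byte digest (64 hex characters),
# regardless of how large the input is.
small = hashlib.sha256(b"hi").hexdigest()
large = hashlib.sha256(b"x" * 10_000_000).hexdigest()  # ~10 MB of data

print(len(small), len(large))  # 64 64
```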

Practical Size Guidelines

| File Size | Hashing Time | Experience |
|---|---|---|
| Up to 10 MB | Nearly instant | No noticeable delay |
| 10–100 MB | 1–3 seconds | Brief progress indicator |
| 100 MB–1 GB | 3–15 seconds | Progress bar visible |
| 1–5 GB | 15–60 seconds | Progress bar, wait for completion |
| 5 GB+ | 1–5 minutes | Depends on device; may stress older hardware |

These times are approximate and depend on your device’s CPU, available memory, and browser. Modern devices with adequate RAM handle multi-gigabyte files without issues.

Supported File Types

TimeProof works with any file type. The hashing algorithm operates on raw bytes — it doesn’t interpret or parse the file contents. Examples of files people timestamp:

| Category | File Types |
|---|---|
| Documents | PDF, DOCX, TXT, ODT, RTF |
| Images | JPEG, PNG, TIFF, RAW, HEIC, WebP |
| Video | MP4, MOV, AVI, MKV, ProRes |
| Audio | MP3, WAV, FLAC, AAC |
| Code | Source files, repositories (as ZIP), build artifacts |
| Design | PSD, AI, FIGMA exports, SVG |
| Engineering | CAD files, STEP, STL, DWG |
| Archives | ZIP, TAR, 7Z, RAR |
| Data | CSV, JSON, XML, databases, spreadsheets |
| Legal | Contracts, NDAs, agreements (any format) |

If your computer can read the file, TimeProof can timestamp it.

What Gets Sent to TimeProof

To be completely clear about what data leaves your machine:

| Data | Sent to server? |
|---|---|
| File contents | No — never uploaded |
| File hash (SHA-256) | Yes — 64 characters |
| File name | Yes — for display purposes |
| File size | Yes — for metadata |
| File type/extension | No — inferred from name |

The hash is a one-way function: it is computationally infeasible to reconstruct your file from its hash. Even if someone obtained the hash, they would learn nothing about the file’s contents.
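The one-way property is easy to see in practice: two inputs that differ by a single character produce completely unrelated digests, so a digest reveals nothing about the input. A small Python illustration:

```python
import hashlib

a = hashlib.sha256(b"Contract v1 - final").hexdigest()
b = hashlib.sha256(b"Contract v1 - finaL").hexdigest()  # one character changed

# The two digests share no predictable structure: on average roughly
# half of the 256 bits flip when any single input bit changes.
matching = sum(x == y for x, y in zip(a, b))
print(a)
print(b)
print(f"{matching}/64 hex characters match")
```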

Browser Memory Considerations

Because hashing happens in the browser, very large files consume browser memory temporarily. Here’s how to handle large files:

Tips for Large Files

  1. Close unnecessary tabs — free up browser memory before hashing large files
  2. Use a modern browser — Chrome and Edge handle large files better than some alternatives
  3. One file at a time — if hashing multiple large files, process them sequentially rather than all at once
  4. Monitor the progress bar — the UI shows hashing progress so you know it’s working

If Hashing Fails

If a very large file causes the browser to run out of memory:

  • Try a different browser (Chrome tends to handle large allocations well)
  • Close other applications to free system memory
  • If the file is extremely large (10 GB+), consider using the API with a local hashing tool instead of the browser interface

Local Hashing Alternative

For very large files, you can compute the hash locally and submit it via the API:

# Linux (or macOS with coreutils installed)
sha256sum largefile.bin

# macOS (built-in)
shasum -a 256 largefile.bin

# Windows PowerShell (note: prints uppercase hex)
Get-FileHash largefile.bin -Algorithm SHA256

# Python
python -c "
import hashlib
with open('largefile.bin', 'rb') as f:
    h = hashlib.sha256()
    while chunk := f.read(8192):
        h.update(chunk)
    print(h.hexdigest())
"

Then submit the hash via the API’s POST /api/timestamps endpoint with the file hash, name, and size. Lowercase the digest before submitting if your tool prints uppercase hex (PowerShell’s Get-FileHash does).
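A sketch of building that request with only the standard library. The endpoint path and the hash/name/size fields come from this page; the exact JSON field names, host, and any auth headers below are assumptions, so check the API reference for the real schema:

```python
import json
import urllib.request

def build_timestamp_request(file_hash: str, filename: str, size: int):
    """Build a POST /api/timestamps request for a locally computed hash.

    The JSON field names and the host are illustrative assumptions;
    consult the API reference for the exact schema and authentication.
    """
    # The API expects 64-character lowercase hex, so normalize here
    # (PowerShell's Get-FileHash prints uppercase).
    payload = {
        "hash": file_hash.lower(),
        "filename": filename,
        "size_bytes": size,
    }
    return urllib.request.Request(
        "https://api.timeproof.example/api/timestamps",  # hypothetical host
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_timestamp_request("A3" * 32, "largefile.bin", 12_884_901_888)
```

Sending it is then a matter of `urllib.request.urlopen(req)` (or any HTTP client) with your credentials attached.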

Batch File Considerations

When timestamping multiple files in a batch:

  • Each file is hashed individually
  • All hashes are combined into a Merkle tree
  • The Merkle root is what gets anchored on-chain
  • Large batches (hundreds of files) work fine — the Merkle tree construction is efficient

The total batch size doesn’t affect the blockchain cost — whether you batch 2 files or 200, the on-chain transaction is the same size (one Merkle root).
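The batching scheme above can be sketched in a few lines. This is a generic Merkle construction (hashing concatenated child digests, duplicating the last node on odd levels), not necessarily the exact tree layout TimeProof uses internally:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold leaf hashes pairwise until a single root remains."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Whether the batch holds 2 files or 200, the root is one 32-byte value.
root = merkle_root([f"file-{i}".encode() for i in range(200)])
print(root.hex())
```

Changing any single leaf changes the root, which is why anchoring the root alone is enough to commit to every file in the batch.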

API Constraints

While the file itself isn’t uploaded, the API does have some practical limits on the request payload:

| Constraint | Limit |
|---|---|
| Request body size | Standard HTTP limits |
| Files per request | Reasonable batch sizes |
| Hash format | 64-character lowercase hex (SHA-256) |
| Filename length | Standard filesystem limits |

These are API-level constraints on the metadata, not on the actual files.
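Of these, the hash format is the one most worth validating client-side before submitting. A minimal check, assuming only the 64-character lowercase hex requirement stated above:

```python
import re

# SHA-256 digest as the API expects it: exactly 64 lowercase hex chars.
HASH_RE = re.compile(r"^[0-9a-f]{64}$")

def is_valid_sha256_hex(value: str) -> bool:
    """True if value is a well-formed lowercase SHA-256 hex digest."""
    return bool(HASH_RE.match(value))

print(is_valid_sha256_hex("ab" * 32))   # True
print(is_valid_sha256_hex("AB" * 32))   # False: uppercase is rejected
print(is_valid_sha256_hex("ab" * 31))   # False: too short
```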
