What is the recommended approach in Jenkins for uploading files larger than 50 MB via build parameters?

Summary

A Jenkins pipeline failed to upload files larger than 50 MB via standard build parameters or the File Parameter type; the build hung or failed without the file ever appearing in the workspace. The root cause is HTTP request size limits at the Jenkins controller (reverse proxy and Jetty), compounded by the inefficiency of streaming large binary data through the UI. The recommended approach is to bypass the UI upload entirely: stage the artifact from an external location within the pipeline, or pass a reference to external storage (S3/Artifactory), rather than uploading via the “Build with Parameters” form.

Root Cause

The failure stems from configuration limits at three layers of the Jenkins request lifecycle:

  • Reverse Proxy Limits (Nginx/Apache): Most Jenkins instances sit behind a proxy. These proxies default to client body size limits (often 1MB to 10MB) to prevent DoS attacks. A 50MB upload exceeds this, resulting in an immediate 413 Request Entity Too Large or a timeout.
  • Jenkins Jetty Limits: Even if the proxy allows the request through, the embedded Jetty container enforces its own limits (such as requestHeaderSize and maxFormContentSize). The defaults shipped with standard Jenkins distributions are far below 50 MB.
  • File Parameter Limitations: The legacy File Parameter is deprecated and notoriously unstable for large files. It attempts to stream data into memory or a temporary buffer before writing to the workspace, often causing OutOfMemoryError or connection timeouts before the file is persisted.
  • UI Blocking: The browser connection remains open during the upload. If any layer times out, the pipeline never triggers, and the file is never written to disk.
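One way to confirm which layer is rejecting the request is to generate a payload just over the limit and POST it at the proxy. A sketch with placeholder host, job name, and credentials (the probe itself is commented out so it runs without a live controller):

```shell
#!/bin/sh
# Create a payload just over the 50 MB mark.
dd if=/dev/zero of=/tmp/probe.bin bs=1M count=51 2>/dev/null

# An immediate 413 means the reverse proxy rejected the body before
# Jenkins ever saw it; a long hang or 502 points at the controller.
# (jenkins.example.com, my-job, and the credentials are placeholders.)
# curl -s -o /dev/null -w "%{http_code}\n" --user user:API_TOKEN \
#   -F "file0=@/tmp/probe.bin" \
#   https://jenkins.example.com/job/my-job/buildWithParameters

ls -l /tmp/probe.bin
```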

Why This Happens in Real Systems

Jenkins is architecturally designed for Source Code Management (SCM), not Binary Large Object (BLOB) storage.

  • Statelessness: Build nodes are ephemeral. Jenkins relies on the workspace being populated from SCM or small inputs.
  • Synchronous I/O: The UI upload is a synchronous HTTP request. Large files block the web server thread.
  • Configuration Drift: DevOps teams often copy Nginx configs from small web apps without adjusting client_max_body_size, creating a hard ceiling on inputs.

Real-World Impact

  • Pipeline Deadlock: The UI hangs indefinitely, requiring a browser refresh or manual build termination.
  • Developer Friction: Engineers cannot manually trigger builds with large configuration files, breaking “Build with Parameters” workflows.
  • Resource Exhaustion: If the upload barely succeeds, it can spike controller CPU/Memory usage, slowing down other concurrent builds.
  • Incompatibility with CI/CD: Manual file uploads break the automation paradigm, forcing users to bypass the pipeline by manually moving files to agents.

Example or Code

Do not attempt to upload large files through the UI. If you must use it, configure the reverse proxy and the Jenkins controller to accept larger payloads; otherwise, use the recommended pipeline approach below.

Configuration Change (Jenkins Controller / Reverse Proxy):

If you absolutely must use the UI, you must update both your proxy config and the controller’s Jetty settings (via JVM system properties or startup arguments).

Example Nginx snippet:

server {
    listen 443 ssl;
    server_name jenkins.example.com;

    # CRITICAL: must comfortably exceed the largest expected upload
    client_max_body_size 100M;

    # Increase timeout for slow uploads
    proxy_read_timeout 300s;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
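On the controller side, the Jetty limit is typically raised through a JVM system property. A sketch for a systemd-managed Jenkins install (the property name is Jetty's own; whether your bundled Winstone/Jetty version honors it should be verified against its documentation):

```
# /etc/systemd/system/jenkins.service.d/override.conf  (hypothetical path)
[Service]
# Raise Jetty's form-content ceiling, in bytes; 104857600 = 100 MB.
Environment="JAVA_OPTS=-Dorg.eclipse.jetty.server.Request.maxFormContentSize=104857600"
```

After editing, reload systemd and restart Jenkins for the setting to take effect.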

How Senior Engineers Fix It

Senior engineers avoid the UI upload problem entirely by moving the “Upload” action into the pipeline logic itself.

The Recommended Approach: Staged Artifacts

  1. Store the file externally: Upload the large file to a temporary location accessible by the build agent (S3, Artifactory, Nexus, or a shared network drive).
  2. Pass a Reference, not the File: Trigger the Jenkins build with a String Parameter containing the path or URL of the file.
  3. Download inside the Pipeline: Use standard shell tools (curl, wget, aws cli) within a sh step to download the file directly to the workspace.
  4. Use as Needed: The file is now in the workspace, verified, and ready for processing.

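The four steps above can be sketched end to end. A local directory stands in for the external store (the real flow would use curl or the aws cli against S3/Artifactory); all paths here are hypothetical:

```shell
#!/bin/sh
set -eu

STAGING_DIR=$(mktemp -d)   # stand-in for S3/Artifactory/shared drive
WORK_DIR=$(mktemp -d)      # stand-in for the Jenkins workspace

# 1. Store the file externally (here: 5 MB of test data).
dd if=/dev/zero of="$STAGING_DIR/large_file.bin" bs=1M count=5 2>/dev/null
EXPECTED_SUM=$(sha256sum "$STAGING_DIR/large_file.bin" | cut -d' ' -f1)

# 2. Pass a reference, not the file: this path is what the
#    FILE_URL string parameter would carry.
FILE_REF="$STAGING_DIR/large_file.bin"

# 3. "Download" inside the pipeline: cp stands in for curl/aws cli.
cp "$FILE_REF" "$WORK_DIR/large_file.bin"

# 4. Verify before use.
ACTUAL_SUM=$(sha256sum "$WORK_DIR/large_file.bin" | cut -d' ' -f1)
[ "$EXPECTED_SUM" = "$ACTUAL_SUM" ] && echo "checksum OK"
```

Verifying a checksum after the transfer is cheap insurance: it catches truncated downloads that curl alone may not surface.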
Jenkinsfile Example (Recommended):

pipeline {
    agent any
    parameters {
        // Only pass the URL or Path, not the binary data
        string(name: 'FILE_URL', defaultValue: '', description: 'URL of the large file to process')
    }
    stages {
        stage('Get Large File') {
            steps {
                script {
                    if (params.FILE_URL) {
                        // Download directly into the workspace; this
                        // bypasses the Jenkins HTTP limits entirely.
                        // Pass the URL through the environment rather than
                        // Groovy interpolation to avoid shell injection.
                        withEnv(["FILE_URL=${params.FILE_URL}"]) {
                            sh '''
                                curl -fSL -o large_file.bin "$FILE_URL"
                                ls -lh large_file.bin
                            '''
                        }
                    }
                }
            }
        }
        stage('Process') {
            steps {
                // Use the file in the workspace
                sh 'grep -a "something" large_file.bin'
            }
        }
    }
}
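With the Jenkinsfile above in place, the build can be started without touching the UI at all via Jenkins’ standard buildWithParameters REST endpoint. A dry-run sketch (the host, job name, and credentials are placeholders, so the command is echoed rather than executed):

```shell
#!/bin/sh
# Hypothetical values; substitute your controller URL, job name,
# and an API token generated from your Jenkins user profile.
JENKINS_URL="https://jenkins.example.com"
JOB_NAME="process-large-file"
FILE_URL="https://artifacts.example.com/bundles/input-data.tar.gz"

# buildWithParameters is Jenkins' standard endpoint for starting a
# parameterized build; FILE_URL matches the string parameter above.
CMD="curl -X POST --user admin:API_TOKEN \
  --data-urlencode FILE_URL=$FILE_URL \
  $JENKINS_URL/job/$JOB_NAME/buildWithParameters"

# Echoed instead of run, since no live controller is assumed here.
echo "$CMD"
```

This keeps the trigger fully scriptable, so upstream systems can stage the file and kick off the build in one step.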

Why Juniors Miss It

  • Expectation of File System Behavior: Juniors expect cloud-like file uploads (like Gmail attachments) and assume Jenkins handles binary streams robustly.
  • Ignoring Infrastructure Layers: They look for code solutions when the blockage is in the Nginx/Proxy configuration.
  • Over-reliance on UI: They try to “Build with Parameters” and upload directly, not realizing the UI is designed for small text inputs (keys, flags, names), not data.
  • Misunderstanding “File Parameter”: They see “File Parameter” in the job config and assume it supports enterprise-sized files, not knowing it is a legacy feature with strict size constraints.