Are we about to enter a third era of the World Wide Web, and of secure file transfer? Many technology experts and journalists believe we are, with the new era built on HTTP/3. As more server operating systems support the protocol, data and content providers are beginning to take advantage of these new tools to build more decentralized architectures.
In this post, we’ll give a brief overview of the current web and the changes on the horizon to help you prepare your file transfer solutions for what comes next.
A brief history of the internet
The first era of the web, known as Web 1.0, spans the period from Tim Berners-Lee’s 1989 proposal for what became HTTP to the early 2000s, when the internet achieved widespread adoption. This era was defined by static web pages and slow network speeds as organizations worked out how to make online data transfer more efficient and the first commercial internet service providers steadily improved their offerings. File transfer protocols and applications at the time operated in much the same fashion as they do today, although the UIs were much less refined and security was much looser (if it was used at all).
The Web 2.0 era spans roughly the mid-2000s to the early 2020s and can be thought of as the era of dynamic content and improved security. Most organizations moved as much of their operational and customer-facing activity onto the web as possible, which led to better co-creation tools and more refined user experiences built on the personalization and interactivity that were now possible. Security became a top concern as well, and the widespread adoption of SSH, TLS and HTTPS helped secure the client-server connection.
All of which brings us to today.
What is “Web 3.0”?
Significant debate exists around whether “Web 3.0” will be a real shift in how organizations and individuals use the internet, or if the term is just marketing hype. For our purposes, we’re defining Web 3.0 as the widespread adoption of several new iterations of the protocols that support the internet (a transition that is in progress as we write this):
- HTTP/3: HTTP/3 iterates on the HTTP protocol by shifting its transport from TCP to QUIC. This change allows HTTP/3 to transfer data more quickly thanks to QUIC’s faster handshakes and its ability to multiplex independent data streams.
- TLS 1.3: TLS provides the transport security for HTTP-based communications, and version 1.3 launched in 2018. The new version updates the ciphers and algorithms used for authentication, certificate signatures and key exchange, streamlines a number of other security operations and retires outdated encryption methods. TLS 1.3 has gained significant adoption, due in large part to its speed improvements from a shorter handshake.
- SSH3: Building on the new capabilities enabled by HTTP/3 and TLS 1.3, researchers are currently working on SSH3, with the goals of speeding up session establishment, offering more flexible authentication methods and improving port security (for example, by making servers harder to find with port scans).
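To make the TLS 1.3 shift concrete: most modern TLS libraries already let you require version 1.3 outright. A minimal sketch using Python’s standard ssl module (the context shown is illustrative, not a complete client):

```python
import ssl

# Build a client-side context that refuses anything older than TLS 1.3.
# Servers that only speak TLS 1.2 or earlier will fail the handshake.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

# The context still performs certificate verification and hostname
# checks as usual; only the permitted protocol versions have changed.
print(context.minimum_version == ssl.TLSVersion.TLSv1_3)  # True
```

Pinning the minimum version this way is a simple test of whether your trading partners’ endpoints are ready for the newer protocol.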
What will Web 3.0 file transfer look like?
Much of the discussion of Web 3.0 centers on the decentralization of data transmission, with enthusiasts highlighting that the security and privacy of information will be built into the structure of the new internet. Most system administrators have been using secure file transfer applications for years now, so this “revolutionary” change will, in practice, simply give us more tools that operate more efficiently.
Some of these tools will include:
- Blockchain-based file and data tracking to ensure integrity
- Decentralized storage and more peer-to-peer sharing applications that may eliminate the need for a file server
If you are a corporate system administrator, the above tools may not be of great interest to you (and may even run counter to your data security policies).
However, there is one aspect of Web 3.0 that is already gaining significant momentum: the QUIC transfer protocol.
How will QUIC affect file transfer?
QUIC’s primary advantage for file transfer is speed. TCP-based file transfers slow down when the protocol detects packet loss: when a transmission error occurs, every data stream on the connection is paused until the loss is repaired, a problem known as head-of-line blocking.
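The difference is easy to see in a toy model. The sketch below (our own simplification, not real protocol code) treats a connection as a list of packets tagged with the logical stream they belong to, and compares what each protocol can deliver to the application when one packet is lost:

```python
# Each packet: (stream_id, payload). The packet at index `lost` is dropped.
packets = [(1, "a1"), (2, "b1"), (1, "a2"), (3, "c1"), (2, "b2")]
lost = 1  # stream 2's packet "b1" is lost in transit

def tcp_deliverable(packets, lost):
    # TCP presents one ordered byte stream: everything after the lost
    # packet waits, regardless of which logical stream it belongs to.
    return packets[:lost]

def quic_deliverable(packets, lost):
    # QUIC streams are delivered independently: only later packets on
    # the *same* stream as the lost one have to wait.
    blocked = packets[lost][0]
    return [p for i, p in enumerate(packets)
            if i != lost and not (p[0] == blocked and i > lost)]

print(tcp_deliverable(packets, lost))   # [(1, 'a1')]
print(quic_deliverable(packets, lost))  # [(1, 'a1'), (1, 'a2'), (3, 'c1')]
```

In the TCP model a single loss stalls four packets; in the QUIC model it stalls only one, which is why multiplexed transfers over lossy links tend to finish faster.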
By operating over UDP, QUIC can implement several features that improve transfer speeds:
- Frontloading key exchange and protocol negotiation into the initial handshake, which reduces the amount of connection information subsequent packets must carry
- Multiplexing independent data streams, so a lost packet on one stream does not hold up the others
- Adding a connection identifier that lets a transfer continue when the underlying network changes (e.g., a session that survives a phone’s switch from Wi-Fi to cellular)
Other considerations for “Web 3.0” file transfer
QUIC and UDP transfers are already widely used by major companies like Facebook, Google and Apple, but far less so in the typical organization’s web infrastructure. Adoption is high for consumer-facing applications and content-delivery networks, but will take time on the business operations side.
Many servers supporting the internet’s infrastructure do not yet allow UDP-based connections. While HTTP/3 clients can fall back to TCP when QUIC is blocked, you will likely need to experiment with your transfers to ensure compatibility with all systems and network paths.
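The fallback pattern itself is straightforward: attempt the transfer over QUIC/UDP first, and retry over TCP if the UDP path is blocked. A hedged sketch of that logic, where `fetch_http3` and `fetch_https` are hypothetical stand-ins rather than a real client API (here the stub always simulates a blocked UDP path):

```python
class UdpBlocked(Exception):
    """Raised when the network path drops or rejects UDP traffic."""

def fetch_http3(url):
    # Stand-in: simulate a middlebox on this path blocking UDP port 443.
    raise UdpBlocked(url)

def fetch_https(url):
    # Stand-in for a conventional HTTP/1.1 or HTTP/2 transfer over TCP.
    return ("tcp", url)

def fetch_with_fallback(url):
    try:
        return fetch_http3(url)
    except UdpBlocked:
        # Real clients typically remember the failure so later requests
        # to the same host skip the doomed QUIC attempt.
        return fetch_https(url)

transport, url = fetch_with_fallback("https://example.com/report.zip")
print(transport)  # tcp
```

Production HTTP/3 clients discover and cache this per-host (for example via the Alt-Svc mechanism), but the try-then-fall-back shape is the same.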
The IETF is developing a number of additional applications built on QUIC, but these are not yet in widespread use.
We hope that the above information has given you a window into what “Web 3” file transfer will look like. If you’re interested in trying UDP-based file transfer, JSCAPE by Redwood offers a flexible managed file transfer experience with this capability.