Nov 29, 2023 EDIT: w3name is no longer being sunset January 9, 2024 given user feedback. This post has been modified to reflect this.
The web3.storage team is excited to share that we have launched our new upload API! We accomplished a ton with our old API, which initially launched as a free Filecoin data onboarding platform for developers. However, we are evolving into a standalone business to open the potential of decentralized data protocols to developers everywhere. That means pivoting to a product that takes what was so great about the old web3.storage (easy-to-use, reliable, and fast IPFS hosting, with data stored on the decentralized Filecoin network) and combines it with key features and a differentiated developer experience.
This includes, among other things:
- Seamless uploads of any size
- Lower prices competitive with traditional cloud storage providers
- The ability to let other users upload directly to web3.storage on your behalf
- web3-native application models using cryptographic, UCAN-based authorization
Read on to hear more about how it could bring game-changing tech to your application!
w3up: Our new upload API
web3.storage was launched over 2 years ago in August 2021 as a free product to showcase the abilities of IPFS and Filecoin to provide cheap, content-addressed storage. Though it started as a product focused on simplifying the developer experience of using Filecoin, we quickly saw growth from users who needed a decentralized but performant storage solution.
As we adjusted the old API based on user feedback, we realized there was only so far we could go in fulfilling users' needs before we would have to re-architect the entire API and clients. As a result, we built a new upload API called w3up.
w3up replaces the old web3.storage upload API. You can start using w3up today in just a few minutes!
Users can sign up for one of three plans. You can do so by following the Quickstart instructions in our docs.
- Starter: Free to use for the first 5GB of storage
- Lite: $10/mo for 100GB storage ($0.05/GB-mo for extra storage), 100GB egress/mo. (extra egress for $0.05/GB)
- Business: $100/mo for 2TB storage (extra storage for $0.03/GB-mo), 2TB egress/mo. (extra egress for $0.03/GB)
All users must enter payment information, even when signing up for the Starter tier. We recognize that this might be inconvenient, but it is a necessary measure to prevent users from uploading malicious content.
The new product opens a ton of doors for developers to explore using decentralized protocols like IPFS and UCAN to create interesting and improved experiences for their end users. To learn more about this, check out this blog post, our docs, and our CLI.
Even if you’re not a developer, our web console makes it easy to host even large files and directories over IPFS.
Unlocking the Data Layer with w3up
We like to say that the new web3.storage helps unlock your “data layer.” What does this mean?
The Data Layer refers to data itself, independent of where it is physically stored. It is fluid and flexible, where any entity on the web - end-users, applications, infrastructure, organizations, and more - can interact with it (store it, read it, process it, send it, and more) without having to worry about whether the data they care about is controlled by someone else.
Overview of key protocols
The Data Layer is fundamentally enabled by three decentralized protocols:
IPFS: Ability to reference data using a unique identifier specific to that data (a “content identifier,” or CID). The Data Layer references everything using IPFS CIDs.
- Any actor on the web can address the specific piece of data they intend to reference using its CID, a cryptographic hash of the data.
- They can read the data independent of where it physically is as long as it’s on the network (from cloud and decentralized storage, to peers in the network, to locally).
- Data can internally reference other CIDs, creating “pointers” to other data that roll up into the overall CID without needing the data itself (a complementary protocol called IPLD).
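To make the CID idea concrete, here is a minimal sketch of how a CIDv1 for a raw block can be constructed: hash the bytes, wrap the digest in a multihash (hash-function code plus digest length), prepend the CID version and codec, and encode the result in base32 with a multibase prefix. The function name `cid_v1_raw` is hypothetical; real applications should use an IPFS library rather than this hand-rolled sketch.

```python
import hashlib
import base64

def cid_v1_raw(data: bytes) -> str:
    """Sketch of a CIDv1 for a raw block: multibase(base32) of
    <version><codec><multihash>, where the multihash is
    <hash-fn code><digest length><digest>."""
    digest = hashlib.sha256(data).digest()
    multihash = bytes([0x12, 0x20]) + digest      # 0x12 = sha2-256, 0x20 = 32-byte digest
    cid_bytes = bytes([0x01, 0x55]) + multihash   # 0x01 = CIDv1, 0x55 = raw codec
    b32 = base64.b32encode(cid_bytes).decode("ascii").lower().rstrip("=")
    return "b" + b32                              # 'b' = base32 multibase prefix

print(cid_v1_raw(b"hello world"))                 # same bytes always yield the same CID
```

Because the identifier is derived entirely from the content, anyone can recompute it and check that the bytes they received are the bytes they asked for.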
DID: A cryptographic identifier. The types of DID we use allow the holder to prove they have control over that identity without a centralized source of truth. Every actor that needs an identity and is interacting with the Data Layer should have a DID.
UCAN: An authorization mechanism building on top of DIDs where “everything a user is allowed to do is captured directly” in a token. UCAN is a powerful auth mechanism that really magnifies the power of the Data Layer.
- The token contains verifiable cryptographic signatures validating that the true DID holder signed it.
- It is sent to authenticated APIs so the service can verify the token without checking any internal or external source-of-truth.
- A DID holder can also delegate any permissions they have to other DIDs so they can directly interact with authenticated services on the holder’s behalf.
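The delegation property above can be illustrated with a toy model of a UCAN chain. This sketch deliberately omits the cryptographic signatures (a real UCAN verifies a signature at every link); it shows only the structural checks a service performs with no external source of truth: each link must be issued by the previous link's audience, and capabilities may only be attenuated, never escalated. All DIDs and capability names here are made up for illustration.

```python
# Toy model of a UCAN delegation chain (signatures omitted for brevity).
# Each link is (issuer_did, audience_did, capabilities). A delegate may
# only pass on a subset of what it was given.

def verify_chain(chain):
    """chain is ordered from the root (resource owner) to the final invoker."""
    for prev, link in zip(chain, chain[1:]):
        _, prev_audience, prev_caps = prev
        issuer, _, caps = link
        if issuer != prev_audience:      # each link must be issued by the
            return False                 # previous link's audience
        if not caps <= prev_caps:        # capabilities may only shrink
            return False
    return True

root = ("did:key:alice", "did:key:app", {"store/add", "store/list"})
delegated = ("did:key:app", "did:key:worker", {"store/add"})
print(verify_chain([root, delegated]))   # True: valid attenuation

too_broad = ("did:key:app", "did:key:worker", {"store/add", "store/remove"})
print(verify_chain([root, too_broad]))   # False: attempts to escalate
```

Because every check depends only on the tokens themselves, any service can validate a request locally, which is what makes delegation "self-contained."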
Benefits of the data layer
So the Data Layer is built on top of all these fancy cryptographic data and identity protocols! But what does the Data Layer unlock, exactly?
- One powerful thing is verifiability.
Because everything that interacts with the Data Layer is referencing data using IPFS CIDs, actors have guarantees that the data they receive is really what they’re looking for without needing to trust others in the network. Because any authenticated interaction uses UCAN, actors can verify themselves that those making requests have permission to do so.
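In code, this trustless guarantee boils down to one step: recompute the hash of whatever bytes you received and compare it to the digest in the CID you requested. The helper below is a simplified sketch (`verify_block` is a made-up name, and it compares a bare SHA-256 digest rather than a full CID).

```python
import hashlib

def verify_block(expected_digest_hex: str, received: bytes) -> bool:
    """Trustless retrieval: recompute the hash of the received bytes and
    compare it to the digest embedded in the CID that was requested.
    Any gateway or peer can serve the data; the hash proves integrity."""
    return hashlib.sha256(received).hexdigest() == expected_digest_hex

data = b"hello world"
expected = hashlib.sha256(data).hexdigest()
print(verify_block(expected, data))         # True: bytes match the identifier
print(verify_block(expected, b"tampered"))  # False: corruption is detectable
```

This is why it doesn't matter which node serves the content: the identifier itself carries the proof.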
- Another is openness.
IPFS allows anyone plugged into the network to get data with a corresponding CID - there are no gatekeepers or silos in the Data Layer. In permissioned interactions, there is no central authority determining who has permissions to do what since UCAN validation is self-contained. And even with this openness, things like private data can be secured via encryption. Since every actor has an identity based on cryptography, user flows can be designed to enable private data use cases.
- This leads to composability, which unlocks improved efficiency and speed.
Data can be referenced and linked simply by using CIDs (hash-linked). Rather than users and applications needing to include the entirety of the data in a payload, they can just reference the CID of the data itself. Blocks of data that are the same and share a CID no longer have to be stored multiple times just because they are in two different data sets. Further, CIDs are infinitely cacheable, since CIDs are unique to their underlying data, so only one copy of the data needs to be kept for popular content.
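The deduplication point above can be sketched with a content-addressed block store: because blocks are keyed by the hash of their bytes, a chunk shared by two datasets is stored exactly once. The `BlockStore` class is hypothetical, and a plain SHA-256 hex digest stands in for a real CID.

```python
import hashlib

class BlockStore:
    """Content-addressed store: blocks are keyed by the hash of their
    bytes, so identical blocks in different datasets are stored once."""
    def __init__(self):
        self.blocks = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()  # stand-in for a real CID
        self.blocks.setdefault(key, data)       # no-op if already stored
        return key

store = BlockStore()
keys_a = [store.put(b) for b in [b"shared chunk", b"only in A"]]
keys_b = [store.put(b) for b in [b"shared chunk", b"only in B"]]
print(len(store.blocks))                        # 3 blocks stored, not 4
print(keys_a[0] == keys_b[0])                   # True: same bytes, same key
```

The same property is what makes caching safe: a key can never point at stale or different bytes, so any copy anywhere is as good as the original.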
- Verifiability, openness, and composability create portability.
Because data is referenced using IPFS CIDs, you can easily move between hosted storage providers across the Data Layer, reducing vendor lock-in. Just start writing to the new one - since you reference data by what it is, not where it's stored (as with HTTP URLs), everything continues to work seamlessly.
- Providers using DIDs create a user-centric experience.
Anything that plugs into the Data Layer has a DID, from end-users, to applications, to accounts within services, to services themselves. Any actor can take who they are and their data to any interaction with anyone else on the web. Companies no longer control a user’s identity or data that’s important to them. Because a user locally generates an account DID, they are in charge of it - companies merely execute UCAN invocations, acting on the user’s behalf.
- Serverless application structures and data flows are enabled by content addressing and DIDs, meaning any actor can interact directly with any other part of the Data Layer.
Self-sovereign identity means that a “user” is no longer defined by a service’s backend server. UCAN tokens can be delegated to actors to interact directly with permissioned services; data does not have to flow through backend servers as a proxy. With content addressing, storage can be on-device, cloud, and CDN - all without the developer leaving the web browser.
These advantages apply to anyone on the web, regardless of whether you’re used to traditional application-server models and APIs, or on the forefront of web3.
For previous web3.storage users
If you were a previous web3.storage user, there is some important information on how you can continue using web3.storage. Here is the tl;dr.
- The new upload API and client is live. Its account system is completely separate from that of the legacy web3.storage.
- The legacy web3.storage upload API and pinning service API will sunset on January 9, 2024. Until then, you will still be able to upload to your legacy account up to your free account limit.
- All uploads to the legacy web3.storage prior to January 9, 2024 will continue to be stored until further notice, though over time latency and availability may degrade.
- To continue maintaining the current level of availability of your stored data, you will be able to migrate your uploads to a new web3.storage account. Reach out to email@example.com if you’d like to be notified once the migration tool is available!
- Effective immediately, no new accounts can be created for the old web3.storage API.
- We will continue to enforce the legacy web3.storage Terms of Service for legacy web3.storage accounts. Those who violate the Terms of Service risk an account ban and removal of uploads.
Join us in building the new generation of the web!
With user-centric, verifiable protocols at the helm, there's a new generation of applications just waiting to be built. The new web3.storage introduces this new technology, and we hope to expose its power to developers while making the experience of working with it as seamless as possible.
Are you going to be in Istanbul for Devconnect or ETH Global? Join us at IPFS Devconnect on Thursday, November 16 for a talk and workshop on the new web3.storage. Otherwise, we’re always around in the #web-storage channel on IPFS Discord to chat or answer any questions!