Other alternatives:
https://github.com/deuxfleurs-org/garage
https://github.com/rustfs/rustfs
https://github.com/seaweedfs/seaweedfs
https://github.com/supabase/storage
https://github.com/scality/cloudserver
Among others
https://github.com/beep-industries/content
Although the docs only mention CORS [1] on an "exposing a website" page, which is not exactly related; the mention also strongly suggests using a reverse proxy for CORS [1], which is overkill and perhaps not needed if it's supported natively?
Also, googling the question only points to the same reverse-proxy page.
Now that I know about PutBucketCORS, it's perfectly clear, but perhaps it's not easily discoverable.
I am willing to write a cookbook article on signed browser uploads once I figure out all the details.
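In case it helps that cookbook: a minimal sketch of setting bucket CORS rules with boto3 (the endpoint URL, credentials, bucket name, and allowed origin below are illustrative placeholders, not details from this thread):

    import boto3

    # Point the client at a self-hosted S3-compatible endpoint (placeholder values).
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:3900",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # PutBucketCORS: allow a local dev frontend to talk to the bucket directly.
    s3.put_bucket_cors(
        Bucket="my-bucket",
        CORSConfiguration={
            "CORSRules": [{
                "AllowedOrigins": ["http://localhost:5173"],
                "AllowedMethods": ["GET", "PUT", "POST"],
                "AllowedHeaders": ["*"],
            }]
        },
    )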
I'm using SeaweedFS for single-machine S3-compatible storage, and it works great. Though I'm missing out on a lot of administrative nice-to-haves (like easy access controls and a good understanding of capacity vs. usage, error rates, and so on... this could be a PEBKAC issue though).
Ceph I have also used, and it seems to care a lot more about being distributed. If you have fewer than 4 hosts for storage, it feels like it scoffs at you during setup. I was also unable to get it to perform amazingly, though to be fair I was doing it via K8s/Rook atop the Flannel CNI, which is an easy-to-use CNI for toy deployments, not performance-critical systems - so that could be my bad. I would trust a Ceph deployment with data integrity though; it just gives me that feeling of "whoever worked on this really understood distributed systems"... but I can't put that feeling into any concrete data.
Overall, a great philosophy (targeting self-hosting / independence), clear and easy maintenance, nothing fancy, an easy-to-understand architecture and design, and clear operation instructions.
I expect a rugpull in the future
This was written to store many thousands of images for machine learning
I too think it would be great to have a simple project that can serve S3 from the filesystem, for local deployments that don't need balls-to-the-wall performance.
[0]: https://github.com/seaweedfs/seaweedfs/wiki/Quick-Start-with...
Yes, I'm looking for exactly that and unfortunately haven't found a solution.
Tried Garage, but they require running a proxy for CORS, which makes signed browser uploads a practical impossibility on a development machine. I had no idea that such a simple, popular scenario is in fact too exotic.
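For context, "signed browser uploads" here means presigning a URL server-side so the browser can upload directly; a minimal boto3 sketch (endpoint, credentials, bucket, and key are placeholders). Generating the URL is the easy part - it's the browser's cross-origin request that then needs CORS support from the server:

    import boto3

    # Placeholder self-hosted endpoint and credentials.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:3900",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Presign a PUT so the browser can upload without the app server proxying the bytes.
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "uploads", "Key": "photo.jpg"},
        ExpiresIn=3600,
    )
    # Browser side: fetch(url, {method: "PUT", body: file})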
A simple litmus test I like to do with S3 stores is to create two objects, one called "foo" and one called "foo/bar". If the implementation uses a filesystem as its backend, only the first of those can be created.
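A minimal sketch of that litmus test with boto3 (endpoint and credentials are placeholders); on a naive filesystem-backed implementation the second call fails, because "foo" already exists as a regular file and so can't also be a directory:

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:9000",  # placeholder endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )
    s3.create_bucket(Bucket="litmus")
    s3.put_object(Bucket="litmus", Key="foo", Body=b"x")
    s3.put_object(Bucket="litmus", Key="foo/bar", Body=b"y")  # trips up filesystem backends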
They've been active in the Ceph community for a long time.
I don't know any specifics, but I'm pretty sure their Ceph installation is pretty big and used to support critical data.
https://imgur.com/a/WN2Mr1z (UK: https://files.catbox.moe/m0lxbr.png)
I clicked settings, this appeared; clicking away hid it, but now I can't see any setting for it.
The nasty way of reading that popup - my first way of reading it - was that Filestash sends crash reports and usage data, that I have the option to have them not be shared with third parties, but that they are always sent, and that it defaults to sharing with third parties. The OK is always consenting to sending crash reports and usage data.
I'm not sure if it's actually operating that way, but if it's not, the language should probably be:
    Help make this software better by sending crash reports and anonymous usage statistics.
    Your data is never shared with a third party.
    [ ] Send crash reports & anonymous usage data.
    [ OK ]

update: done => https://github.com/mickael-kerjean/filestash/commit/d3380713...
RustFS has promise: it supports a lot of features and even allows you to bring your own secret/access keys (if you want to migrate without changing creds on clients), but it's very much still in development; and they have already prepared for a bait-and-switch in the code ( https://github.com/rustfs/rustfs/blob/main/rustfs/src/licens... )
Ceph is the closest to the actual S3 feature set, but it's a lot to set up. It pretty much wants a few local servers; you can replicate to another site, but each site on its own is pretty latency-sensitive between storage servers. It also offers many other features besides, as S3 is just built on top of their object store, which can also be used for VM storage or even a FUSE-compatible FS.
Garage is great, but it is very much "just to store stuff". It lacks features on both the S3 side (S3 has a bunch of advanced ACLs many of the alternatives don't support, and stuff for HTTP headers too) and the management side (things like "allow this access key to access only a certain path on the bucket" are impossible, for example). Also, the clustering feature is very WAN-aware, unlike Ceph, where you pretty much have to have all your storage servers in the same rack if you want a single site to have replication.
I run Ceph at work. We have some clusters spanning 20 racks in a network fabric that has over 100 racks.
In a typical Leaf-Spine network architecture, you can easily have sub-100-microsecond network latency, which would translate to sub-millisecond Ceph latencies.
We have one site that is Leaf-Spine-SuperSpine, and the difference in network latency is barely measurable between machines in the same network pod and between different network pods.
There's also a CLA with full copyright assignment, so yeah, I'd steer clear of that one: https://github.com/rustfs/rustfs/blob/main/CLA.md
1. All filenames are read.
2. All filenames are sorted.
3. Pagination is applied.

Obviously it doesn't scale, but it works OK-ish for a smaller data set. It is difficult to do this efficiently without introducing complexity. My applications don't use listing, so I prioritised simplicity over performance for the list operation.
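A minimal sketch of that read-sort-paginate approach in Python (the function shape and marker-based pagination are illustrative guesses, not the poster's actual code):

    import os

    def list_objects(root, prefix="", marker="", max_keys=1000):
        # 1. Read all filenames under the storage root.
        keys = []
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                key = os.path.relpath(os.path.join(dirpath, name), root)
                key = key.replace(os.sep, "/")  # S3-style key separators
                if key.startswith(prefix):
                    keys.append(key)
        # 2. Sort the full key list.
        keys.sort()
        # 3. Paginate: resume after the marker, return one page and a truncated flag.
        keys = [k for k in keys if k > marker]
        return keys[:max_keys], len(keys) > max_keys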
The frustrating part isn't the business decision itself. It's that every pivot creates a massive migration burden on teams who bet on the "open" part. When your object storage layer suddenly needs replacing, that's not a weekend project. You're looking at weeks of testing, data migration, updating every service that touches S3-compatible APIs, and hoping nothing breaks in production.
For anyone evaluating infrastructure dependencies right now: the license matters, but the funding model matters more. Single-vendor open source projects backed by VC are essentially on a countdown timer. Either they find a sustainable model that doesn't require closing the source, or they eventually pull the rug.
Community-governed projects under foundations (Ceph under the Linux Foundation, for example) tend to be more durable even if they're harder to set up initially. The operational complexity of Ceph vs. MinIO was always the tradeoff - but at least you're not going to wake up one morning to a "THIS REPOSITORY IS NO LONGER MAINTAINED" commit.
While I loathe the moves to closed source, you also can't fault them; the hyperscalers just outcompete them with their own software.
An alternative I've seen is "the code is proprietary for 1 year after it was written; after that it's MIT/GPL/etc.", which keeps the code entirely free(ish) but still prevents many businesses from getting rich off your product and leaving you in the dust.
You could also go for AGPL, which is to companies like Google what garlic is to vampires. That would hurt any open-core-style business you might want to build out of your project though, unless you don't accept external contributions.
Also, I'm not sure how anathema AGPL is. It's true I rarely see AGPL projects being hosted by big clouds, but AGPL is also just less popular as a license. I know AWS hosts AGPL Grafana, but iirc, they had to work out some deal with upstream.
From my experience, Ceph works well but requires a lot more hardware and dedicated cluster monitoring versus something simpler like MinIO; in my eyes, they have somewhat different target audiences. I can throw MinIO into some customer environments as a convenient add-on, which I don't think I could do with Ceph.
Hopefully one of the open-source alternatives to Minio will step in and fill that "lighter" object storage gap.
I struggle to even find an example of VC-backed OSS that didn't go "OK, closing-down time." The only ones I remember (like GitLab) started with an open-core model, not fully OSS.
Redis is the odd one out here[1]: Garantia Data, later known as Redis Labs, now known as Redis, did not create Redis, nor did it maintain Redis for most of its rise to popularity (2009–2015) nor did it employ Redis’s creator and then-maintainer 'antirez at that time. (He objected; they hired him; some years later he left; then he returned. He is apparently OK with how things ended up.) What the company did do is develop OSS Redis addons, then pull the rug on them while saying that Redis proper would “always remain BSD”[2], then prove that that was a lie too[3]. As well as do various other shady (if legal) stuff with the trademarks[4] and credits[5] too.
[1] https://www.gomomento.com/blog/rip-redis-how-garantia-data-p...
[2] https://redis.io/blog/redis-license-bsd-will-remain-bsd/
[3] https://lwn.net/Articles/966133/
On AWS S3, there is a storage class called "Infrequent Access", shortened to IA everywhere.
A few weeks ago I had to spend way too much time explaining to a customer that, no, we weren't planning on feeding their data to an AI when, on my reports, I was talking about relying on S3 IA to reduce costs...
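For anyone hitting the same confusion: IA is just a per-object storage class. A minimal boto3 illustration (the bucket and key are placeholders):

    import boto3

    s3 = boto3.client("s3")
    # "IA" = the Infrequent Access storage class - nothing to do with AI.
    s3.put_object(
        Bucket="reports",
        Key="2024/archive.csv",
        Body=b"...",
        StorageClass="STANDARD_IA",
    )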
Working for free is not fun. Having a paid offering with a free community version is not fun. Ultimately, dealing with people who don't pay for your product is not fun. I learnt this the hard way and I guess the MinIO team learnt this as well.
Just be honest from the start that your product will eventually abandon its FOSS licence. Then people can make an informed decision. Or, if you haven't done that, do the right thing and continue to stand by what you originally promised.
When you start something (a startup, a FOSS project, damn, even a marriage) you might start with the best intentions and then learn/change/lose interest. I find it unreasonable to "demand" clarity "at the start" because there is no such thing.
Turning it around, any company that adopts a FOSS project should be honest and pay for something if it does not accept the idea that at some point the project will change course (which, obviously, does not guarantee much, because even if you pay for something they can decide to shut it down).
Obviously you cannot "demand" stuff, but you can do your due diligence as the person who chooses a technical solution. Some projects have more clarity than others; for example, the Linux Foundation or the CNCF are basically groups of companies sharing costs for stuff they all benefit from, like Linux or Prometheus monitoring, and it is highly unlikely they'd do a rug pull.
On the other end of the spectrum there are companies with a "free" version of a paid product and the incentive to make the free product crappier so that people pay for the paid version. These should be avoided.
"An informed decision" is not a black or white category, and it definitely isn't when we're talking about risk pricing for B2B services and goods, like what MinIO largely was for those who paid.
Any business with financial modelling worth their salt knows that very few things which are good and free today will stay that way tomorrow. The leadership of a firm you transact with may or may not state this in words, but there are many other ways to infer the likelihood of this covertly by paying close attention.
And if you're not paying close attention, it's probably just not that important to your own product. Which risks you consider worth tracking is a direct extension of how you view the world. The primary selling point of MinIO for many businesses was "it's cheaper than AWS for our needs". That's probably still true for many businesses, and so there's money to be made, at least in the short term.
Like with software development, we often lack the information on which we have to base architectural, technical, or business decisions.
The common solution for that is to embrace this. Defer decisions. Make changing easy once you do receive the information. And build "getting information" into the fabric. We call this "Agile", "Lean", "data driven" and so on.
I think this applies here too.
There's a very big chance that the MinIO team honestly thought they'd keep it open source, but only now gathered enough "information" to make this "informed decision".
FOSS is not a moral contract. People working for free owe nothing to anyone. You got what's on the tin - the code is as open source after they stop as when they started.
The underlying assumption of your message is that you are somehow entitled to their continued labour which is absolutely not the case.
Free users certainly would like it to be a social contract like I would like to be gifted a million dollars. Sadly, I still have to work and can't infinitely rely on the generosity of others.
Free software developers are gifting you something. Expecting indefinite free work is not mutual respect. That's entitlement.
The common is still there. You have the code. Open source is not a perpetual service agreement. It is not indentured servitude to the community.
Stop trying to guilt trip people into giving you free work.
If the software developer doesn't return your cart, he betrayed the social contract.
This sounds very manipulative and narcissistic.
Maybe this is the case, but why is your presumption of entitlement to free labor of others the assumed social contract, the assumed "moral" position, rather than the immoral one?
Why isn't the unwritten social contract instead that you can have the free labor we've released to you so far, but we owe you nothing in the future?
There's too much assumption of the premise that "moral" and "social contract" are terms that make the entitled demands of free-loaders the good guys in this debate. Maybe the better "morality" is the selfless workers giving away the product of their labor for free are the actual good guys.
Why do you presume that your definition of morals is shared by everyone? Why is entitlement to others' labor the moral position, instead of the immoral position?
Nobody sensible is upset when a true FOSS “working for free” person hangs up their boots and calls it quits.
The issue here is that these are commercial products that abuse the FOSS ideals to run a bait and switch.
They look like they are open source in their growth phase then they rug pull when people start to depend on their underlying technology.
The company still exists and still makes money, but they stopped supporting their open source variant to try and push more people to pay, or they changed licenses to be more restrictive.
It has happened over and over; just look at Progress Chef, MongoDB, Elasticsearch, Redis, Terraform, etc.
Is it really though? They're replacing one product with another, and the replacement comes with a free version.
Nobody here is saying they should donate the last version of MinIO to the Apache software foundation under the Apache license. Nobody is arguing for a formalized "end of life" exit strategy for company oriented open source software or implying that such a strategy was promised and then betrayed.
The demand is always "keep doing work for me for free".
I'm saying that the open source rug pull is at this point a known business tactic - essentially a psychological dark pattern used to exploit users.
These companies know they'll get more traction and sales if they have "open source" on their marketing material. They don't - and never did - actually intend to be open source long term. They expect to move to closed-source/source-available business lines as soon as they've locked enough people into the ecosystem.
Open source maintainers/organizations like the GNU project are happy and enthusiastic about delivering their projects to “freeloaders.” They have a sincere belief that having source code freedom is beneficial to all involved. Even corporate project sponsors share this belief: Meta is happy to give away React because they know that ultimately makes their own products better and more competitive.
The core of my claim is that it’s a shady business tactic because the purpose of it is to gain all the marketing benefits of open source on the front-end (fast user growth, unpaid contributions from users, “street cred” and positive goodwill), then change to source available/business license after the end of the growth phase when users are locked in.
This is not much different than Southwest Airlines spending decades bragging about “bags fly free” and no fees only to pull the rug and dump their customer goodwill in the toilet.
Totally legal to do so, but it’s also totally legal for me to think that they’re dishonest scumbags.
Except in this case, software companies, in my opinion, have this rug pull plan in place from day 1.
I'd say it's redundant to consider any business tactic as "shady". The purpose of any business is to make a profit, in any way that's legally permissible. Using the "open source" label is just one way to success, if one plays the game well and mitigates any backlash once they "graduate" and change that license. It's up to any given user going in to be aware that a project they depend on may go in any direction, like it or not, and to always be ready to migrate if deemed necessary.
H-E-B (or just HEB) is a large, privately-held grocery chain in Texas, and they are beloved by Texans across the societal and political spectrum. They gained and keep this loyalty because they are good neighbors. In the aftermath of hurricanes or floods - of which Texas has many - HEB will be there before FEMA, with water tankers, mobile kitchens and pharmacies, power stations, and so on. They donate 5% of their earnings to local charities, food banks, and education.
It’s possible that HEB would make more money if they slashed these programs and raised prices, but I suspect that instead, people would be outraged at the rug pull, publicly shame them, and a competitor would swoop in and build out replacements.
When backed by a company, there is an ethical obligation to keep up at least maintenance. Of course, legally they can do what they wish. It isn't unfair to call it bad practice.
Ethics are not obligations; they are moral principles. Not having principles doesn't send you to prison - that is why it isn't law. It makes you lose moral credit, though.
Claiming that you’re entitled to free R&D forever because someone once gave you something of value seems like a great way to ensure that nobody does that again. You got over a decade of development by a skilled team, it’s not exactly beyond the pale that the business climate has changed since then.
However, almost every open source license actually DOES warn that support may end. See the warranty clause.
With open source it does. If an indie developer open-sources something and then has a baby or loses interest, it is understood as fair to suddenly stop maintenance.
When a company surfs the open source wave to get contributions and grow penetration, then quietly slows maintenance and announces a license change, that's gaming the open source community.
See the numerous cases of popular open source repos where the parent or new parent company took over to gain the user base without any respect for maintenance, let alone development; the community then forks and takes over.
MariaDB is one example; a more recent illustrative one is the HashiCorp drama that occurred when investors decided it was time to gear towards profit, to the detriment of the community that had largely contributed to the tools.
I use a few simple heuristics:
- Evaluate who contributes regularly to a project. The more diverse this group is, the better. If it's a handful of individuals from 1 company, see other points. This doesn't have to be a show stopper. If it's a bit niche and only a handful of people contribute, you might want to think about what happens when these people stop doing that (like is happening here).
- Look at required contributor agreements and license. A serious red flag here is if a single company can effectively decide to change the license at any point they want to. Major projects like Terraform, Redis, Elasticsearch (repeatedly), etc. have exercised that option. It can be very disruptive when that happens.
- Evaluate whether the license allows you to do what you need to do. Licenses like the AGPLv3 (which MinIO used here) can be problematic on that front and come with restrictions that corporate legal departments generally don't like. In the end, choosing to use software is a business decision. Just make sure you understand what you are getting into and that this is OK with your company and compatible with business goals.
- Permissive licenses (MIT, BSD, Apache, etc.) are popular with larger companies and widely used on GitHub. They facilitate a neutral ground for competitors to collaborate. One aspect you should be aware of is that the very feature that makes them popular also means that contributors can take the software and create modifications under a different license. They generally can't re-license existing software retroactively. But companies like Elastic have switched from Apache 2.0 to source-available licensing, and recently to AGPLv3. OpenSearch remains Apache 2.0 and has a thriving community at this point.
- Look at the wider community behind a project. Who runs it? How professional are they (e.g., a foundation)? How likely would it be to survive something happening to the main company behind it? Companies tend to be less resilient than the open source projects they create. They fail, are subject to mergers and acquisitions, and can end up in the hands of hedge funds or big consulting companies like IBM. Many decades-old OSS projects have survived multiple such events, which makes them very safe bets.
None of these points have to be decisive. If you really like a company, you might be willing to overlook their less than ideal licensing or other potential red flags. And some things are not that critical if you have to replace them. This is about assessing risk and balancing the tradeoff of value against that.
Forks are always an option when bad things happen to projects. But that only works if there's a strong community capable of supporting such a fork and a license that makes that practical. The devil is in the details. When Redis announced their license change, the creation of Valkey was a foregone conclusion. There was just no way that wasn't going to happen. I think it only took a few months for the community to get organized around that. That's a good example of a good community.
With open source, the good news is that the version you currently have will always be available to you in perpetuity - including all the bugs, missing features, and security flaws. If you're OK with that, then the community around the thing doesn't even matter.
License terms don't end there. There is a no-warranty clause in almost every open source license, and it is as important as the other parts of the license. There is no promise or guarantee of updates or future versions.
I think this is where the problem/misunderstanding is. There's no "I will do/release" in OSS unless promised explicitly. Every single release/version is "I released this version. You are free to use it". There is no implied promise for future versions.
Released software is not clawed back. Everyone is free to modify (per license) and/or use the released versions as long as they please.
Even if the source was always provided (and even if it were GPL), any bug reports/support requests etc. would be limited to paying customers.
I realize there is already a similar model where the product/source itself is always free and then they have a company behind it that charges for support... but in those cases they are almost always providing support/accepting bug reports for free as well. And maybe having the customer pay to receive the product itself in the first place, might motivate the developers to help more than if they were just paying for a support plan or something.
But a reasonable cost for the product itself, that's maybe not as high as a support contract (but comes with some support), might work better.
I always warned people that if they "buy" digital things (music, movies), it's only a license and can be taken away. And people intellectually understand that, but don't think it'll really happen. And then years go by, and it does, and then there's outrage when Amazon changes Roald Dahl's books, or they snatch 1984 right off your Kindle after you bought it.
So there's a gap between what is "allowed" and what is "expected". I find this everywhere in polite society.
Was just talking to a new engineer on my team, and he had merged some PRs, but ignored comments from reviewers. And I asked him about that, and he said "Well, they didn't block the PR with Request Changes, so I'm free to merge." So I explained that folks won't necessarily block the PR, even though they expect a response to their questions. Yes, you are allowed to merge the PR, but you'll still want to engage with the review comments.
I view open source the same way. When a company offers open source code to the community, releasing updates regularly, they are indeed allowed to just stop doing that. It's not illegal, and no one is entitled to more effort from them. But at the same time, they would be expected to engage responsibly with the community, knowing that other companies and individuals have integrated their offering, and would be left stranded. I think that's the sentiment here: you're stranding your users, and you know it. Good companies provide a nice offramp when this happens.
But FOSS means “this particular set of source files is free to use and modify”. It doesn't include “and we will keep developing and maintaining it forever for free”.
It’s only different if people, in addition to the FOSS license, promise any further updates will be under the same license and then change course.
And yes, there is a gray area where such a promise is sort-of implied, but even then, what do you prefer: the developers abandoning the project, or at least having the option of a paid-for version?
It's not a binary choice. I prefer the developers releasing the software under a permissive license. I agree that relying on freemium maintenance is naive. The community source lives on, perhaps the community should fork and run with it for the common good absorbing the real costs of maintenance.
Having a FOSS license is NOT enough. Ideally the copyright should be distributed across all contributors. That's the only way to make overall consensus a required step before relicensing (except for reimplementation).
Pick FOSS projects without CLAs that assign copyright to a single untrusted entity (a few exceptions apply, e.g. the FSF in the past).
You should be wary always. CLA or not, nothing guarantees that the project you depend on will receive updates, not even if you pay for them and the project is 100% closed source.
What you’re suggesting is perpetuating the myth that open source means updates available forever for free. This is not and never has been the case.
What I'm suggesting is that a FOSS project without CLAs and with a healthy variety of contributors does belong to the broad open source community that forms around it, while a FOSS project with such a CLA is just open to a bait-and-switch scheme, because the ownership stays in a single hand that can change course at a moment's notice.
Whether the project stops receiving updates or not, is an orthogonal matter.
What it fails to recognize is the reality that life changes. Shit happens. There's no way to predict the future when you start out building an open source project.
(Coming from having contributed to and run several open source projects myself)
It's been tough for us at https://pico.sh trying to figure out the right balance between free and paid. Our north star is: how much does it cost us to maintain and support? If the answer scales with the number of users we have, then we charge for it. We also have a litmus test for abuse: can someone abuse the service? If so, we put it behind a paywall.
I find it the other way around. I feel a bit embarrassed and stressed out working with people who have paid for a copy of software I've made (which admittedly is rather rare). When they haven't paid, every exchange is about what's best for humanity and the public in general, i.e. they're not supposed to get some special treatment at the expense of anyone else, and nobody has a right to lord over the other party.
People who don't pay are often not really invested. The link between more work and more costs doesn't exist for them. That can make them quite a pain, in my experience.
> every exchange is about what's best for humanity and the public in general
It means that they are the kind of individual who deeply cares about things working, and about relationships being good and fruitful, and thus if they made someone pay for something, they think they must listen to them and comply with their requests - because, well, they are a paying customer, the customer is always right, they gave me their money, etc.
You can care about the work and your customer while still setting healthy boundaries and accepting that wanting to do good work for them doesn't mean you are beneath them.
Business is fundamentally about partnership - transactional and moneyed partnerships, but partnership still. It's best when both suppliers and customers are aware of that; like any partnership, it is structured and can be stopped by either partner. You don't technically owe them more than what's in the contract, and that puts a hard stop which is easy to identify if needed.
Of course I realize that, rationally, but:
* They might feel highly entitled because they paid.
* I feel more anxious to satisfy than I probably should. Perhaps even guilty for having taken money. I realize that is not a rational frame of mind to be in; it would probably change if that happened frequently. I am used to two things: there is my voluntary work, which I share freely and without expecting money; and there is my 'job', where I have to bow my head to management and do not get to pursue the work as I see fit, and to which I devote most of my time - but I get paid (which also kind of happens in the background, i.e. I never see the person who actually pays me). Selling a product or a service is a weird third kind of experience which I'm not used to.
You think you need to bow your head to management in your job; while you technically are under their authority in some ways, that isn't really how I'd advise you to frame your relationship with your work. You are there to bring value, and your manager is there to help you and ensure you do that. Still, that's a framework, not a rigid guiding stick. You need to learn how to manage/bend your manager if you want to thrive in the corporate world.
Same with customers. They hire you because they need your expertise. It's a dance, not a tether, and it takes two to tango.
It seems to me you are not putting enough value on what you bring to the table. It's easier to say than to feel and believe, but I guess it's never a bad thing to tell someone.
As DHH and Jason Fried discuss in the books REWORK and It Doesn’t Have to Be Crazy at Work, and on their blog:
> The worst customer is the one you can’t afford to lose. The big whale that can crush your spirit and fray your nerves with just a hint of their dissatisfaction.
(It Doesn’t Have to Be Crazy at Work)
> First, since no one customer could pay us an outsized amount, no one customer’s demands for features or fixes or exceptions would automatically rise to the top. This left us free to make software for ourselves and on behalf of a broad base of customers, not at the behest of any single one. It’s a lot easier to do the right thing for the many when you don’t fear displeasing a few super customers could spell trouble.
(https://signalvnoise.com/svn3/why-we-never-sold-basecamp-by-...)
But this mechanism proposed by DHH and Fried only removes differences amongst paying customers, not between "paying" and "non-paying".
I'd think, however, there are some good ideas in there to manage that difference as well. For example, let all customers, paying or not, go through the exact same flow for support, features, bugs, etc., so that these aren't the distinctive "drivers" of why people pay (e.g. "you must be a paying customer to get support"). It obviously depends on the service, but if you have other distinctive features that people would pay for (e.g. a hosted version), that could work out.
I highly recommend SeaweedFS. I used it in production for a long time before partnering with Wasabi. We still have SeaweedFS for a scorching hot, 1GiB/s colocated object storage, but Wasabi is our bread and butter object storage now.
It seems like the issue may be that I have WebGL disabled. The console includes messages like "Failed to create WebGL context: WebGL creation failed: * AllowWebgl2:false restricts context creation on this system."
Oh well, guess I can't use rustfs :}
Your problems could be caused by a whiny fan. Here is the source https://github.com/rustfs/rustfs
It's used by CERN for petabyte-scale storage capable of ingesting data from particle collider experiments; they're now up to 17 clusters and 74PB, which speaks to its production stability. Apparently people use it down to 3-host Proxmox virtualisation clusters, in a similar place as VMware vSAN.
Ceph has been pretty good to us for ~1PB of scalable backup storage for many years, except that it's a non-trivial system-administration effort and needs good hardware and networking investment, and my employer wasn't fully backing that commitment. (We're moving off it to Wasabi for S3 storage.) It also leans more towards data integrity than performance: it's great at being massively parallel and not so rapid at single-threaded, high-IOPS work.
https://ceph.io/en/users/documentation/
https://docs.ceph.com/en/latest/
https://indico.cern.ch/event/1337241/contributions/5629430/a...
While there is a geo-replication feature for Ceph, I cannot keep using ZFS at the same time, and Gluster is no longer developed, so I'm currently looking for an alternative that would work for my use case, if anyone knows of a solution.
I became a Ceph admin by accident, so I wasn't involved in choosing it, and I'm not familiar with other things in that space. It's a much larger project than a clustered filesystem; you give it disks and it distributes storage over them, and on top of that you can layer things like the S3 storage layer, its own filesystem (CephFS), or block devices which can be mounted on a Linux server and formatted with a filesystem (including ZFS, I guess, but that sounds like a lot of layers).
> "While there is a geo-replication feature for Ceph"
Several: the data cluster layer can do it in two ways (stretch clusters and stretch pools), the block device layer can do it in two ways (journal-based and snapshot-based), the CephFS filesystem layer can do it with snapshot mirroring, and the S3 object layer can do it with multi-site sync.
I've not used any of them; they all have their trade-offs, and this is the kind of thing I was thinking of when saying it requires more skills and effort. For simple storage requirements, use a traditional SAN, a server with a bunch of disks, or pay a cheap S3 service to deal with it. Only if you have a strong need for scalable clusters, a team with storage/Linux skills, a pressing need to do it yourself, or a use for many of its features would I go in that direction.
https://docs.ceph.com/en/latest/rados/operations/stretch-mod...
https://docs.ceph.com/en/latest/rbd/rbd-mirroring/
I can tell you that Ceph is something I don't need to touch every month. Other things I have to baby more regularly.
That said, it doesn't need constant management; it's excellent at staying up even while damaged. As long as the cluster has enough free space it will rebuild around any hardware failure without human intervention, it doesn't need hot spares; if you plan it carefully then it has no single point of failure. (The original creator introduces the design choice of 'placement groups' and tradeoffs in this video[1]).
Most of the management time I've spent has been on ageing hardware flaking out without actually failing - old disks erroring on read, controllers failing and dropping all their disks temporarily, causing tens of seconds of read latency with knock-on effects, or the time we filled it too full and it went read-only. Other management work has been learning my way around it, upgrades, changing the way we use it for different projects, and onboarding and offboarding services that use it, all of which will vary with what you actually do with it.
I've spent less time with VMware vSAN, but vSAN does a lot less: it takes your disks and gives you a VMFS datastore and maybe an iSCSI target. There can't be many alternatives which do what Ceph does, require less skill and effort, and don't involve paying a vendor to manage it for you and give you a web interface?
> Completely different situations. None of the MinIO team worked for free. MinIO is a COSS company (commercial open source software).
MinIO is dealing with two out of the three issues, and the company is partially providing work for free - so how is that "completely different"?
You could argue that they got to the point where the benefit wasn't worth the cost, but this was their business model. They would not have gotten to the point where they could have a commercial-only operation without the adoption and demand generated by the OSS version.
Running a successful OSS project is often a thankless job. Thanks for doing it. But this isn’t that.
No, even if you are being paid, it's a thankless, painful job to deal with demanding, entitled free users. It's worse if you are not being paid, but I'm not sure why you are asserting that dealing with bullshit is just peachy if you are being paid.
I hate when people mistreat the people who provide services to them; it doesn't matter if it's a volunteer, an underpaid waitress, or a well-paid computer programmer. The mistreatment doesn't become "OK" because the person being mistreated is paid.
People are angry about MinIO, but that's because of their rugpull.
Did MinIO create the impression among its contributors that it would continue being FLOSS?
No; what was MinIO is now AIStor, closed-source proprietary software. Tell me how to fork it and I will.
> they wanted to be the only commercial source of the software
The choice of AGPL tells me nothing more than what is stated in the license. And I definitely don't intend to close the source of any of my AGPL-licensed projects.
https://github.com/minio/minio/fork
The fact that new versions aren't available does nothing to stop you from forking the versions that are. Or were - they'll be available somewhere, especially if they got packaged for OS distribution.
> You may not modify, reverse engineer, decompile, disassemble, or create derivative works of the Software.
Most projects won't ever reach that level, though.
OP sure makes it sound like it's about the money.
> for someone that just wants to put code out there that is very draining and unpleasant.
I never understood this. Then why publish the code in the first place? If the goal is to help others, then the decent thing would be to add documentation and support the people who care enough to use your project. This doesn't mean bending to all their wishes and doing work you don't enjoy, but a certain level of communication and collaboration is core to the idea of open source. Throwing some code over the fence and forgetting about it is only marginally better than releasing proprietary software. I can only interpret this behavior as self-serving for some reason (self-promotion, branding, etc.).
Then the third user shows up. They have an odd edge case and the code isn't working. Fixing it will take some back and forth but it still can be done in a respectable amount of time. All is good. A few more users might show up, but most open source projects will maintain a small audience. Everyone is happy.
Sometimes, projects keep gaining popularity. Slowly at first, but the growth in interest is there. More bug reports, more discussions, more pull requests. The author didn't expect it. What was doable before takes more effort now. Even if the author adds contributors, they are now a project and a community manager. It requires different skills and a certain mindset. Not everyone is cut out for this. They might even handle a small community pretty well, but at a certain size it gets difficult.
The level of communication and collaboration required can only grow. Not everyone can deal with this and that's ok.
First of all, when a project grows, its core team of maintainers can also grow, so that the maintenance burden can be shared. This is up to the original author(s) to address if they think their workload is a problem.
Secondly, and coming back to the post that started this thread, the comment was "working for free is not fun", implying that if people paid for their work, then it would be "fun". They didn't complain about the amount of work, but about the fact that they weren't financially compensated for it. These are just skewed incentives to have when working on an open source project. It means that they would prioritize support of paying customers over non-paying users, which indirectly also guides the direction of the project, and eventually leads to enshittification and rugpulls, as in MinIO's case.
The approach that actually makes open source projects thrive is to see it as an opportunity to build a community of people who are passionate about a common topic, and deal with the good and the bad aspects as they come. This does mean that you will have annoying and entitled users, which is the case for any project regardless of its license, but it also means that your project will be improved by the community itself, and that the maintenance burden doesn't have to be entirely on your shoulders. Any successful OSS project in history has been managed this way, while those that aren't remain a footnote in some person's GitHub profile, or are forked by people who actually understand open source.
Fundamentally your post boils down to this: All contributions should be self funded by the person making them.
This might seem acceptable at first glance, but it has some really perverse implications that are far worse than making a product customers are willing to pay for.
To be granted the right to work on an open source project, you must have a day job that isn't affiliated with the project. You must first work eight hours a day to ensure your existence; only after those eight hours are up are you allowed to work on the open source project.
Every other form of labor is allowed to charge money - even the street cleaner, or the elderly janitor stocking up his pension. Everyone except the open source developer. And that "everyone" includes people who work on behalf of a company that directly earns money off the open source project, including software developers hired by said company, even if those developers work full-time on the open source project. This means you can run into absurd scenarios like SF salaries being paid to contributors while the maintainer, who might be happy with an average Polish developer salary, doesn't even get the little amount he would need to live a hermit's life doing nothing but working on the project. No, that maintainer is expected - I mean obligated - to keep working his day job to then be granted the privilege of working for free.
Somehow the maintainer is the selfish one for wanting his desire to exist be equally as important as other people's desire for the project to exist. The idea that people value the project but not the process that brings about the project sounds deeply suspect.
Your complaint that prioritizing paid features is bad is disturbing, because of the above paragraph. The maintainer is expected to donate his resources for the greater good, but in instances where the maintainer could acquire resources to donate to the public at large through the project itself, he must not do so, because he must acquire the donation resources through his day job. To be allowed to prioritize the project, he must deprioritize the project.
The strangest part by far though is that if you are a company that produces and sells proprietary software, you're the good guy. As I said in the beginning. This feels like a very anti OSS stance since open source software is only allowed to exist in the shadow of proprietary software that makes money. The argument is always that certain types of software should not exist and that the things that are supposedly being withheld are more important than the things being created.
I personally think this type of subtractive thinking is very insidious. You can have the best intentions in the world and still be branded the devil. Meanwhile the devil can do whatever he wants. There is always this implicit demand that you ought to be an actual devil for the good of everyone.
Because these things take entirely different skill sets and the latter might be a huge burden for someone who is good at the former.
There is an obligation to a given user only if it's explicitly specified in a license or in some other communication to which the user is privy.
A little side project might grow and become a chore / untenable, especially with some from the community expecting handouts without respect.
Case in point: Reticulum. Also, Nolan Lawson has a very good blog post on it.
I don't think your position is reasonable, even if I believe you just want to say that writing open source shouldn't be a main source of income. I think it's perfectly okay to be rewarded for time, skill, effort, and the software itself.
Here’s my take on “the open source philosophy,” having benefited from it since the 90s. Note, I am not nearly as much of a zealot as RMS, and have no strong opinion on GPL vs BSD style licensing; use whichever meets your needs and future plans.
If I had needed to pay for a Linux distribution as a kid, it’s unlikely I would have been able to explore it.
If I was unable to figure out software behavior by studying its source code, I would have many unanswered questions today, and Debian’s vixie-cron would likely still have an obscure bug [1].
I, like practically all people in the tech industry, owe a great deal to people who have given their time to various projects. Some of those people make a living out of it (Daniel Stenberg, for example), but also still offer their software gratis. Therefore, I feel a moral obligation to do so in return.
0: https://www.gnu.org/philosophy/selling.html
1: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1019716
Users of open source often feel entitled: they open issues like they would open a support ticket for a product they actually paid for, and they don't hesitate to show their frustration.
Of course that's not all the users, but the maintainers only see those (the happy users are usually quiet).
I have open-sourced a few libraries under a weak copyleft licence, and every single time, some "people from the community" have put a lot of pressure on me, e.g. claiming everywhere that the project was unmaintained/dead (it wasn't; I was just working on it in my free time on a best-effort basis) or that anything not permissive had "strings attached" and was therefore "not viable", etc.
The only times I'm not getting those is when nobody uses my project or when I don't open source it. I have been open sourcing less of my stuff, and it's a net positive: I get less stress, and anyway I wasn't getting anything from the happy, quiet users.
This is also true to some extent when it's a project you started. I don't think you should, e.g., be able to point to the typical liability disclaimer in free software licenses when you add features that intentionally harm your users.
No. If it's free and open source, all it says is what you can do with the code. There is no obligation towards the users whatsoever.
If you choose to depend on something, it's your problem. The professional way to do it is either to contractually make sure that the project doesn't "fuck you over" (using your words), or to make sure that you are able to fork the project if necessary.
If you base your business on the fact that someone will be working for you, for free, forever, then it's your problem.
It's a trade off, we made it collectively.
Good luck with the back pain.
Open source can be very fun if you genuinely enjoy it.
The problem is dealing with people that have wrong expectations, those need to be ignored.
So far I've switched to RustFS, which seems like a very nice project, though <24 hrs is hardly an evaluation period.
Object storage has advantages over regular block storage if it is managed by a cloud provider and has a proven record on durability, availability, and "infinite" storage space at low cost, such as S3 at Amazon or GCS at Google.
Object storage has zero advantages over regular block storage if you run it yourself:
- It doesn't provide "infinite" storage space - you need to regularly monitor and manually add new physical storage to the object storage.
- It doesn't provide high durability and availability. It has lower availability compared to regular locally attached block storage, because of the complicated coordination of object storage state between storage nodes over the network. It usually has lower durability than object storage provided by cloud hosting. If some data is corrupted or lost on the underlying hardware storage, there are low chances it is properly and automatically recovered by a DIY object storage.
- It is more expensive because of higher overhead (and, probably, half-baked replication) compared to locally attached block storage.
- It is slower than locally attached block storage because of much higher network latency compared to the latency of accessing local storage. The latency difference is 1000x: ~100ms for object storage vs. ~0.1ms for local block storage.
- It is much harder to configure, operate and troubleshoot than block storage.
So I'd recommend taking a look at other databases for logs, which do not require object storage for large-scale production setups. For example, VictoriaLogs. It scales to hundreds of terabytes of logs on a single node, and it can scale to petabytes of logs in cluster mode. Both modes are open source and free to use.
Disclaimer: I'm the core developer of VictoriaLogs.
While I try to avoid complexity, idiomatic approaches have their advantages; it's always a trade-off.
That said, my first instinct when I saw MinIO's status was to use file storage, but the RustFS setup has been pretty painless so far. I might still remove it; we'll see.
Worth adding: this depends on what's using your block storage / object storage. For Loki specifically, there are known edge cases with large object counts on block storage (this isn't related to object size or disk space) - this obviously isn't something I've encountered, and I probably never will, but they are documented.
For an application I had written myself, I can see clearly that block storage is going to trump object storage for all self-hosted use cases, but for third-party software I'm merely administering, I have less control over its quirks, and those pros vs. cons are much less clear-cut.
Self-hosting, or just using git itself, is the only solution.
Even plain terminals are now "agentic orchestrators": https://www.warp.dev
A Ponzi can be a good investment too (for a certain definition of "good") as long as you get out before it collapses. The whole tech market right now is a big Ponzi with everyone hoping to get out before it crashes. Worse, dissent risks crashing it early so no talks of AI limitations or the lack of actual, sustainable productivity improvements are allowed, even if those concerns do absolutely happen behind closed doors.
Yes.
https://www.theguardian.com/technology/2017/dec/21/us-soft-d...
"Long Island Iced Tea Corp [...] In 2017, the corporation rebranded as Long Blockchain Corp [...] Its stock price spiked as much as 380% after the announcement."
They could just archive it there and then, at least it would be honest. What a bunch of clowns.
Unfortunately, I don't know of any other open projects that can obviously scale to the same degree. I built up around 100PiB of storage under MinIO with a former employer. It's very robust in the face of drive and server failure, and it is simple to manage on bare hardware with Ansible. We got 180Gbps sustained writes out of it, with some part-time hardware maintenance.
I don't know if there's an opportunity here for larger users of MinIO to band together and fund some continued maintenance?
I definitely had a wishlist and some hardware management scripts around it that could be integrated into it.
If there is a real community around it, forking and maintaining an open edition will be a no-brainer.
Is AIStor Free really free like they claim here (https://www.min.io/pricing)? I.e.:

> Free
> For developers, researchers, enthusiasts, small organizations, and anyone comfortable with a standalone deployment.
> Full-featured, single-node deployment architecture
> Self-service community Slack and documentation support
> Free of charge

I could use that if it didn't have hidden costs or obligations. I will have to migrate; the cost of "self-hosting", what a pain!
Looks like I'm gonna give SeaweedFS a whirl instead of hunting down the Docker image and SHA of the last pre-enshittified version of MinIO.
In the Ruby on Rails space, we had this happen recently with the prawn_plus gem, where the original author yanked all published copies and deleted the GitHub repository.
On GitHub, when a private repo is deleted, its forks are deleted. But for public repos, the policy is different. See https://docs.github.com/en/pull-requests/collaborating-with-....
This is the latest in a series of sunset traps set for those of us who use MinIO for local testing but not production use.
1. RustFS and SeaweedFS are the fastest in the object storage field.
2. The installation for Garage and SeaweedFS is more complex compared to RustFS.
3. The RustFS console is the most convenient and user-friendly.
4. Ceph is too difficult to use; I wouldn't dare deploy it without a deep understanding of the source code.
Although many people criticize RustFS, suggesting its CLA might be "bait," I don't think such a requirement is excessive for open source software, as it helps mitigate their own legal risks.
Furthermore, Milvus gave RustFS a very high official evaluation. Based on technical benchmarks and other aspects, I believe RustFS will ultimately win.
https://milvus.io/blog/evaluating-rustfs-as-a-viable-s3-comp...
That's an odd take... open source is a software licensing model, not a business model.
Unless you have some knowledge that I don't, MinIO never asked for nor accepted donations from users of their open source offerings. All of their funding came from sales and support of their enterprise products, not their open source one. They are shutting down their own contributions to the open source code in order to focus on their closed enterprise products, not due to lack of community engagement or (as already mentioned) community funding.
You mean like Linux, Python, PostgreSQL, Apache HTTP Server, Node.js, MariaDB, GNU Bash, GNU Coreutils, SQLite, VLC, LibreOffice, OpenSSH?
Not many open source projects are Linux-sized. Linux is worth billions of dollars and enabled Google and Red Hat to exist, so they can give back millions, without compulsion, and in a self-interested way.
Random library-maintainer dude should not expect their (very replaceable) library to print money. The cool open source tool/utility could be a 10-person company, maybe 100 tops, but people see dollar signs in their eyes based on the number of installs/GitHub stars, and get VC funding to take a swing at billions in ARR.
I remember when (small-scale) open source was about scratching your own itch without making it a startup via user coercion. It feels like "open source as a growth hack" has metastasized into "now that they are hooked, the entire user base is morally obligated to give me money". I would have no issue if a project included this before it got popular - but that might prevent popular adoption. So it rubs me the wrong way when folks want to have their cake and eat it too.
Uh, no, OpenAI didn't pivot from being open in order to survive.
They survived for 7 years before ChatGPT was released. When it was, they pivoted the _instant_ it became obvious that AI was about to be a trillion-dollar industry and they weren't going to miss the boat of commercialization. Yachts don't buy themselves, you know!
Yes, open-source is a software license model, not a business model. It is also not a software support model.
This change is them essentially declaring that MinIO is EOL and will not have any further updates.
For comparison, Windows 10, paid software released in 2015, the same year as the first MinIO release, is already EOL.
Just fork it!
Maintainer of Milvus here. A few thoughts from someone who lives this every day:
1. The free user problem is real, and AI makes it worse. We serve a massive community of free Milvus users, and we're grateful for them; they make the project what it is. But we also feel the tension MinIO is describing. You invest serious engineering effort into stability and bug fixes, and most users will never become paying customers. In the AI era this ratio only gets harder: copying a project with AI is easier than ever.
2. We need better object storage options. As a heavy consumer of object storage, Milvus needs a reliable, performant, and truly open foundation. RustFS is a solid candidate — we've been evaluating it seriously. But we'd love to see more good options emerge. If the ecosystem can't meet our needs long-term, we may have to invest in building our own.
3. Open source licensing deserves a serious conversation. The Apache 2.0 / Hadoop-era model served us well, but cracks are showing. Cloud vendors and AI companies consume enormous amounts of open-source infrastructure, and the incentives to contribute back are weaker than ever. I don't think the answer is closing the source, but I also don't think "hope enterprises pay for support" scales forever. We need the community to have an honest conversation about what sustainable open source looks like in the AI era. MinIO's move is a symptom worth paying attention to.

It's been amazing to watch Milvus grow from its roots in China to gaining global trust and major VC backing. You've really nailed the commercialization, open-source governance, and international credibility aspects.
Regarding RustFS, I think that—much like Milvus in the early days—it just needs time to earn global trust. With storage and databases, trust is built over years; users are naturally hesitant to do large-scale replacements without that long track record.
Haha, maybe Milvus should just acquire RustFS? That would certainly make us feel a lot safer using it!
1. Download or build the single binary and install it on your system (e.g. as `/usr/local/sbin/garage`)
2. Create a file `/etc/garage.toml`:
    metadata_dir = "/data/garage/meta"
    data_dir = "/data/garage/data"
    db_engine = "sqlite"

    replication_factor = 1

    rpc_bind_addr = "[::]:3901"
    rpc_public_addr = "127.0.0.1:3901"
    rpc_secret = "[your rpc secret]"

    [s3_api]
    s3_region = "garage"
    api_bind_addr = "[::]:3900"
    root_domain = ".s3.garage.localhost"

    [s3_web]
    bind_addr = "[::]:3902"
    root_domain = ".web.garage.localhost"
    index = "index.html"

    [k2v_api]
    api_bind_addr = "[::]:3904"

    [admin]
    api_bind_addr = "[::]:3903"
    admin_token = "woG4Czw6957vNTXNfLABdCzI13NTP94M+qWENXUBThw="
    metrics_token = "3dRhgCRQQSxfplmYD+g1UTEZWT9qJBIsI56jDFy0VQU="
3. Start it with `garage server`, or just have an AI write an init script or unit file for you. (You can `pkill -f /usr/local/sbin/garage` to shut it down.) Note that the S3 endpoint won't accept requests until you assign a cluster layout and create a key and bucket; a sketch of those one-time commands is at the end of this comment.

Also, NVIDIA has a phenomenal S3-compatible system that nobody seems to know about, named AIStore: https://aistore.nvidia.com/ It's a bit more complex, but very powerful and fast (faster than MinIO, though slightly less space efficient, because it maintains a complete copy of each object on a single node so that the object doesn't have to be reconstituted as it would on MinIO). It can also act as a proxy in front of other S3 systems, including AWS S3 or GCS, and offer a single unified namespace to your clients.
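As for the one-time setup mentioned in step 3, here's a hedged sketch from memory of the Garage quick-start; the exact flags (especially for `layout assign` and `key create`) vary between Garage versions, so check `garage --help` against your build. The bucket and key names are just examples:

    # find this node's ID
    garage status

    # assign the node to the cluster layout: zone "dc1", 1 GB capacity (adjust to taste)
    garage layout assign -z dc1 -c 1G <node-id-from-status>
    garage layout apply --version 1

    # create a bucket and an access key, then grant the key access
    garage bucket create test-bucket
    garage key create test-key
    garage bucket allow --read --write test-bucket --key test-key

After that, `garage key info test-key` should give you the credentials to plug into any S3 client pointed at port 3900.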
IMO, SeaweedFS is still too much of a personal project; it's fast for small files, but keep good and frequent backups in a different system if you choose it.
I personally will avoid RustFS. Even if it were totally amazing, the Contributor License Agreement makes me feel like we're getting into the whole MinIO rug-pull situation all over again, and you know what they say about doing the same thing and expecting a different result...
I could see running AIStore in single-binary mode for small deployments, but for anything large and production-grade I would not touch AIStore. Ceph is going to be the better option IMO; it is a truly collaborative open source project developed by multiple companies with a long track record.
What legal risks does it help mitigate?
MinIO was more for the "mini" use case (or more like "anything not large scale", with a very broad definition of large scale). Here "works out of the box" is paramount.
And Ceph is more for the maxi use case. Here in-depth fine-tuning, highly complex setups, distributed deployments, and the like are the norm, so the out-of-the-box small-scale setup experience is barely relevant.
So they really don't fill the same space, even though their functionality overlaps.
I'm not sure SeaweedFS is comparable. It's based on Facebook's Haystack design, which addresses a very specific use case: minimizing the IOs, in particular the metadata lookups, needed to access individual objects. This leads to many trade-offs. For instance, its main unit of operation is the volume: data is appended to a volume, erasure coding is done per volume, updates happen at the volume level, and so on.
On the other hand, a general object store goes beyond needle-in-a-haystack type of operations. In particular, people use an object store as the backend for analytics, which requires high-throughput scans.
Claude Code is amazing at managing Ceph, restoring, fixing CRUSH maps, etc. It's got all the Ceph motions down to a tee.
With the tools at our disposal nowadays, saying "I wouldn't dare deploy it without a deep understanding of the source code" seems like an exaggeration!
I encourage folks to try out Ceph if it supports their use case.
In my experience, SeaweedFS has at least 3–5× better performance than MinIO. I used MinIO to host 100 TB of images to serve millions of users daily.
For me, my only use for MinIO was to simulate AWS S3 in docker compose so that my applications were fully testable locally. I never used it in production or as middleware. Alternative strategies like Ruby on Rails' local file storage for testing have never sat well with me, since they behave differently than the deployed app. And using actual cloud services creates its own hurdles: either credential sharing among developers, or losing the "docker magic" of being able to run a single setup script and then change code and run the full test suite.
My use case is that any developer on the team can do a Git clone, run the setup script, and be fully up and running locally within minutes, without any special configuration on their part.
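For what it's worth, the smoke test that workflow enables looks something like this; a hedged sketch assuming a MinIO container from docker compose exposing the default API port 9000 with MinIO's default minioadmin credentials (the bucket and file names are made up):

    # point the AWS CLI at the local container instead of real S3
    export AWS_ACCESS_KEY_ID=minioadmin
    export AWS_SECRET_ACCESS_KEY=minioadmin
    aws --endpoint-url http://localhost:9000 s3 mb s3://test-bucket
    aws --endpoint-url http://localhost:9000 s3 cp ./fixture.bin s3://test-bucket/
    aws --endpoint-url http://localhost:9000 s3 ls s3://test-bucket

The application under test gets the same endpoint URL via its environment, so the suite never touches real AWS.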
New standards and features are emerging constantly—such as S3 over RDMA, S3 Append, cold storage tiers, and S3 vector buckets.
In at most two or three years, relying on an unmaintained version of MinIO will likely become a liability that drags down your project as your production environment evolves. Finding an actively maintained open-source alternative is a must.
Disclosure: I'm a SWE at LocalStack.
> As a result of this shift, we cannot commit to releasing regular updates to the Community edition of LocalStack for AWS.
https://blog.localstack.cloud/the-road-ahead-for-localstack/
But IMO, LocalStack community’s S3 service is pretty stable, so I’m doubtful there’ll be much parity drift in the short to medium term.
- Obviously, when your selling point against competitors and alternative services was that you were open source, and you do a rug pull once you've got enough traction, that is not great.
- But they also switched targets. The big initial added value of MinIO was that it was totally easy to run: you could have an S3 server going in a minute, on a single instance. That made it the perfect solution for rapid tests, local setups, and automated testing. Then, again once they started to get enough traction, they didn't just add more "scaling" options to MinIO; they twisted it completely into a complex, scalable deployment solution like any other on the market, without that much added value on that count, to be honest.
With application dependencies you can swap a library in a day. With object storage that's holding your data, you're looking at a migration measured in weeks or months. The S3 API compatibility helps, but anyone who's actually migrated between S3-compatible stores knows there are subtle behavioral differences that only surface under load.
I wonder how many MinIO deployments had a documented migration runbook before today.
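For reference, one step of such a runbook might look like the following; a hedged sketch using rclone, with `oldminio` and `newstore` as hypothetical remotes already configured in rclone.conf:

    # copy everything, verifying checksums rather than just size/modtime
    rclone sync oldminio:prod-bucket newstore:prod-bucket --checksum -P

    # independently verify both sides match before cutover
    rclone check oldminio:prod-bucket newstore:prod-bucket

The copy is the easy part; the hard part is the cutover window and confirming the new store handles whatever edge cases (multipart semantics, presigned URLs, listing behavior) your application actually relies on.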
SeaweedFS was started as a learning project and has evolved along the way, picking up ideas from the Facebook Haystack, Google Colossus, and Facebook Tectonic papers. With its distributed append-only storage, it naturally fits an object store. Sorry to see MinIO go away; SeaweedFS learned a lot from it. Some S3 interface code was copied from MinIO when it was still under the Apache 2.0 license. AWS S3 APIs are fairly complicated, and I am trying to replicate as much as possible.
Some recent developments:
* Run "weed mini -dir=xxx", it will just work. Nothing else to setup.
* Added Table Bucket and Iceberg Catalog.
* Added an admin UI.
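To make that concrete, here's a hedged example of trying the `weed mini` mode, assuming the embedded S3 gateway listens on SeaweedFS's usual S3 port 8333 (check `weed mini -h` for the actual flags and port on your version):

    # start a self-contained SeaweedFS instance backed by a local directory
    weed mini -dir=/tmp/seaweedfs

    # in another shell, talk to it with any S3 client
    # (the AWS CLI wants some dummy credentials exported even for a local endpoint)
    export AWS_ACCESS_KEY_ID=any AWS_SECRET_ACCESS_KEY=any
    aws --endpoint-url http://localhost:8333 s3 mb s3://demo
    aws --endpoint-url http://localhost:8333 s3 ls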
That said, we also can't blame people for using open source without paying or donating.
I can absolutely take issue with people demanding things of open source projects. They can contribute or pay if they want to be demanding around bug fixes and support.
I've been a big proponent of open source for many years - learning from, contributing to, maintaining, sharing my own projects for free as open-source. I don't expect anything in return.
In fact, open-source projects benefit from contributors, so to me it's a bit incompatible with taking money. Money for what? For whom? If it supports the project I'm OK with that, but I've also seen it line the pockets of original authors. I've seen original authors turn other people's hard work and contributions into a business.
There is a very fine line between a funded community project and getting free labor for a business.
I take serious issue with open source projects magically turning into a business one day, built on the backs of others' free work. I'm not saying that about MinIO or any other project in particular; I'm just saying it happens.
They stopped maintaining it, but they forked it into a proprietary product.
It makes sense for a corporation. Still, MinIO is there to fork, maintain, and enhance in a different direction.
Kudos to them.
They maintain the Docker image, so it works great in a k8s environment, for both local and remote development.
Big thanks to MinIO for providing this option for so many years. I genuinely wish them the best.
It was pretty clear they pivoted to their closed source repo back then.