You can't actually read real values from Parameters/exports (you get a token placeholder), so you can't store JSON and then read it back and decode it (unless you're in the same stack, which mostly defeats the purpose). You can do some hacks with Fn:: intrinsics, though.
Deploying certain resources that have names specified (vs. generated) often breaks, because it has to create the new resource before destroying the old one, which it can't do, because the name conflicts (it's the same name... because it's the same construct).
It's wildly powerful though, which is great. But we have basically had to create our own internal library to solve what should be non-problems in an IaC system.
Would be hilarious if my coworker stumbled upon this. I know he reads hn and this has been my absolute crusade this quarter.
I’m a little puzzled. How are you getting dependency deadlocks if you’re not creating circular dependencies?
Also, exports in CloudFormation are explicit. I don’t see how this automatic pruning would occur.
> Deploying certain resources that have names specified (vs generated) often breaks
CDK tries to prevent this antipattern from happening by default. You have to explicitly make it name something. The best practice is to use tags to name things, not resource names.
This is a tricky issue. Here is how we fixed it:
Assume you have a stack with the ConstructID of `foo-bar`, and that uses resources exported to `charlie`.
Update the Stack ConstructID to a new value, e.g. `foo-bar-2`. Then, at the very end of your CI, add a `cdk destroy foo-bar` to delete the original stack. This forces a fresh deployment of your stack with new exports; `charlie` updates to reference the new stack, and once that update succeeds, the original `foo-bar` stack can be safely destroyed.
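A rough sketch of what the tail end of that CI run can look like (stack names from the example above; the flags and ordering here are just one reasonable choice):

```bash
# Deploy the renamed stack first, let the consumer re-point to its new exports,
# and only then tear down the old stack (per the workaround described above).
cdk deploy foo-bar-2 --require-approval never
cdk deploy charlie --require-approval never   # charlie now imports from foo-bar-2's exports
cdk destroy foo-bar --force                   # old exports are unreferenced, so this succeeds
```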
The real conundrum is with data - you typically want any data stacks (Dynamo, RDS, etc) to be in their own stack at the very beginning of your dependency tree. That way any revised stacks can be cleanly destroyed and recreated without impacting your data.
Linux powers the world in this area and bash is the glue which executes all these commands on servers.
Any program or language you write to try and 'revolutionise CI' and be this glue will ultimately make a child process call out to bash/sh anyway, and you still need to read stdout, stderr, and exit codes to figure out the next steps.
Or you can just use bash.
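To make the point concrete, a minimal sketch of that glue in plain bash (script names are placeholders):

```bash
#!/usr/bin/env bash
# Run a step, look at its exit code and combined stdout/stderr, decide what to do next.
set -euo pipefail

if output=$(./run-tests.sh 2>&1); then
  echo "tests passed, deploying"
  ./deploy.sh
else
  status=$?
  echo "tests failed with exit code ${status}" >&2
  printf '%s\n' "${output}" | tail -n 50 >&2   # surface the last lines of the failure
  exit "${status}"
fi
```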
Why? We've spent years upon years upon years of building systems that enshittify processes. We've spent years losing talent in the industry and the trends aren't going to reverse. We are our own worst enemy, and are directly responsible for the state of the industry, and to an extent, the world.
To not call out bullshit where one sees it, is violence.
> But if you’re running a real production system, if you have a monorepo, if your builds take more than five minutes, if you care about supply chain security, if you want to actually own your CI: look at Buildkite.
Goes in line with exactly what I said in 2020 [0] about GitHub vs self-hosting. Not a big deal for individuals, but for large businesses it's a problem when you can't push that critical change because your CI is down every week.
I get it's quirky, but I'm at a low energy state and just wanted to know what it does...
Right before I churned out, I happened to click "[E] Exit to classic Buildkite" and get sent to their original homepage: https://buildkite.com/platform/
It just tells you what Buildkite does! Sure, it looks like default B2B SaaS, but more importantly it's clear. "The fastest CI platform" instead of some LinkedIn-slop manifesto.
If I want to know why it's fast, I scroll down and learn it scales to lots of build agents and has unlimited parallelism!
And if I wonder if it plays nice with my stack, I scroll and there's logos for a bunch of well known testing frameworks!
And if I want to know if this isn't v0.0001 pre-alpha software by a pre-seed company spending runway on science-fair home pages, this one has social proof that isn't buried in a pseudo-intellectual rant!
-
I went down the rabbit hole of what led to this and it's... interesting, to say the least.
https://medium.com/design-bootcamp/nothing-works-until-you-m...
https://www.reddit.com/r/branding/comments/1pi6b8g/nothing_w...
https://www.reddit.com/r/devops/comments/1petsis/comment/nsm...
Glad that the classic site hit the mark, but there's a lot of work to do to make that clearer than it is; we're working on the next iteration, which will sunset the CLI homepage into an easter egg.
Happy to take more critique, either on the execution or the rabbit hole.
You brought up Planetscale's markdown homepage rework in one of those posts and I actually think it's great... but it's also clear, direct, and has no hidden information.
I'd love to see what happens to conversions once you retire this to an Easter Egg.
I say that not because we wanted the CLI homepage to be 'legit'; the light context there is that we needed a way to quickly change direction from a previous failed initiative that added stark category marketing across the classic site... so we took the opportunity to purposefully do something very different from convention, rightly or wrongly.
Over the years CI tools have gone from specialist to generalist. Jenkins was originally very good at building Java projects and not much else, Travis had explicit steps for Rails projects, CircleCI was similarly like this back in the day.
This was a dead end. CI is not special. We realised as a community that in fact CI jobs were varied, that encoding knowledge of the web framework or even language into the CI system was a bad idea, and CI systems became _general workflow orchestrators_, with some logging and pass/fail UI slapped on top. This was a good thing!
I orchestrated a move off CircleCI 2 to GitHub Actions, precisely because CircleCI botched the migration from the specialist to generalist model, and we were unable to express a performant and correct CI system in their model at the time. We could express it with GHA.
GHA is not without its faults by any stretch, but... the log browser? So what, just download the file, at least the CI works. The YAML? So it's not-quite-yaml, they weren't the first or last to put additional semantics on a config format, all CI systems have idiosyncrasies. Plugins being Docker images? Maybe heavyweight, but honestly this isn't a bad UX.
What does matter? Owning your compute? Yeah! This is an important one, but you can do that on all the major CI systems, it's not a differentiator. Dynamic pipelines? That's really neat, and a good reason to pick Buildkite.
My takeaway from my experience with these platforms is that Actions is _pretty good_ in the ways that truly matter, and not a problem in most other ways. If I were starting a company I'd probably choose Buildkite, sure, but for my open source projects, Actions is good.
The systems I like to design that use GHA usually only use the good parts. GitHub is a fine events dispatcher, for instance, but a very bad workflow orchestrator. So delegate that to a system that is good at it instead.
They answer your "so what" quite directly:
>> Build logs look like terminal output, because they are terminal output. ANSI colors work. Your test framework’s fancy formatting comes through intact. You’re not squinting at a web UI that has eaten your escape codes and rendered them as mojibake. This sounds minor. It is not minor. You are reading build logs dozens of times a day. The experience of reading them matters in the way that a comfortable chair matters. You only notice how much it matters after you’ve been sitting in a bad one for six hours and your back has filed a formal complaint.
Having to mentally ignore ANSI escape codes in raw logs (let alone being unable to search for text through them) is annoying as hell, to put it mildly.
And how do you expect people to even know about this workaround, and how to search for text with it? It's not like the GitHub UI even tells you. Not everyone is a Linux pro.
Nobody is saying it's impossible to get past the ANSI escape codes. People eventually figure out ways to do it. The question is how much of your time you want to lose to friction in that process, which you have to repeat frequently. It's insane for it to be this hard.
You have a tool here, which is noted elsewhere: it's "less --raw". Also there's another tool which analyzes your logs and color codes them: "lnav".
lnav is incredibly powerful and helps understanding what's happening, when, where. It can also tail logs. Recommended usage is "your_command 2>&1 | lnav -t".
In game development we care a lot about build systems- and annoyingly, we have vanishingly few companies coming to throw money at our problems.
The few that do charge a king's ransom (Incredibuild). Our build times are pretty long, and minimising them is ideal.
If, then, your build system does not understand your build-graph then you’re waiting even longer for builds or you’re keeping around incremental state and dirty workspaces (which introduces transient bugs, as now the compiler has to do the hard job of incrementally building anyway).
So our build systems need to be acutely aware of the intricacies of how the game is built (leading to things like UnrealEngine Horde and UBA).
If we used a “general purpose” approach we’d be waiting in some cases over a day for a build, even with crazy good hardware.
WRT GitHub Actions... I agree with OOP, they leave much to be desired, especially for high-velocity work. My CI/CD runs locally first, and then GHA is a (slower), low-noise verification step.
Game dev has a serious case of NIH - sometimes for good reasons, but in lots of cases it's because things have been set up in a way that makes changing that impractical. Using UBA as an example - FastBuild, Incredibuild, SN-DBS, and sccache all exist as either caching or distribution systems. Compiling a game engine isn't much different to compiling a web browser (which ninja was written for).
I've worked at two game studios where we've used general purpose CI systems and been able to push out builds in < 15 minutes. Horde and UBA exist to handle how Epic does things internally, rather than as an inherent requirement for using the tools effectively. If you don't have the same constraints as developing Unreal Engine (and Fortnite), then you don't have the same needs.
(I worked for Epic when Horde came online, but don't any more).
Except for GitHub charging you monthly to run your own CI jobs on your own hardware.
- Intermediate tasks are cached in a Docker-like manner (content-addressed by filesystem and environment). Tasks in a CI pipeline build on previous ones by applying the filesystem of dependent tasks (AFAIU via overlayfs), so you don't execute the same task twice. The most prominent example of this is that a feature branch that is up-to-date with main passes CI on main as soon as it's merged, because every task on main is a cache hit from the CI run on the feature branch.
- Failures: the UI surfaces failures to the top, and because of the caching semantics, you can re-run just the failed tasks without having to re-run their dependencies.
- Debugging: they expose a breakpoint (https://www.rwx.com/docs/rwx/remote-debugging) command that stops execution during a task and allows you to shell into the remote container for debugging, so you can debug interactively rather than pushing `env` and other debugging tasks again and again. And when you do need to push to test a fix, the caching semantics again mean you skip all the setup.
There's a whole lot of other stuff. You can generate tasks to execute in a CI pipeline via any programming language of your choice, the concurrency control supports multiple modes, no need for `actions/cache` because of the caching semantics and the incremental caching feature (https://www.rwx.com/docs/rwx/tool-caches).
And I've never had a problem with the logs.
If you have a Dockerfile where a small change to your source results in one particular very large layer having to be rebuilt, and you then want to fan out and run many parallel tests using that image, what actually happens when you try to run that new fat layer on a bunch of compute, and how is it better than the implied naive solution? That fat layer exists on a storage system somewhere, and a bunch of compute nodes need to read it - what happens?
1. We don't gzip layers like Docker does. Gzip is really slow, and it's much slower than the network. Storage is cheap. So it's much faster to transmit uncompressed layers than to transmit compressed layers and decompress them.
2. We've heavily tuned our agents for pulling layers fast. Disk throughput and IOPS are really important, so we provision those higher than you typically would for running workloads in the cloud. When pulling layers we adjust kernel parameters like dirty_ratio to values we've empirically found work well for layer pulls (a sketch of that kind of knob follows after this list). We make sure we completely exhaust our network bandwidth and throughput when pulling layers. And so on.
3. This third one is experimental and something we're actively working on improving, but we have our own underlying filesystem which lazily loads the files from a layer instead of pulling tons of (potentially unneeded) files up front. This is similar to AWS's [Seekable OCI](https://github.com/awslabs/soci-snapshotter) but tuned for our particular needs.
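For illustration only, this is the kind of sysctl being referred to in point 2; the values below are placeholders, not RWX's actual settings:

```bash
# Raise the dirty-page thresholds so large layer pulls aren't throttled by writeback.
sysctl -w vm.dirty_ratio=60              # allow more dirty pages in RAM before writers are throttled
sysctl -w vm.dirty_background_ratio=30   # start asynchronous background writeback later
```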
I've been slowly working on improving our documentation to explain these kinds of differentiators that our architecture and container runtime provide, but most of it is unpublished so far. We definitely need to do a much better job of explaining _how_ we are faster and better rather than just stating it :).
The other side of this is that we also made _building_ those layers much much faster. We blogged a little bit about it at https://www.rwx.com/blog/we-deleted-our-dockerfiles but just to hit some quick notes: in RWX you can vary the compute by task, and it turns out throwing a big machine at (e.g.) `npm install` is quite effective. Plus we make using an incremental cache very easy, and layers generated from an incremental cache are only the incremental parts, so they tend to be smaller. And we're a DAG, so you can parallelize your setup in a way that is very painful to do with Docker, even when using multi-stage builds. And our cache registry is global and very hard to mess up, whereas a lot of people misconfigure their Docker caches and have cache misses all over their docker builds. And we have miss-then-hit semantics for caching. Okay, I'm rambling now! But happy to go into more depth on any of this!
Once it was updated to the latest version and all the bad old manually created jobs were removed, it was decent.
There are numerous ways to shoot yourself in the foot, though, and everything must be configured properly to get to feature parity with GHA (mail server, plugins, credentials, sso, https, port forwarding, webhooks, GitHub app, ...).
But once those are out of the way, it's the most flexible and fastest CI system I have ever used.
In the past week I have seen:
- actions/checkout inexplicably failing, sometimes succeeding on 3rd retry (of the built-in retry logic)
- release ci jobs scheduling _twice_, causing failures, because ofc the release already exists
- jobs just not scheduling. Sometimes for 40m.
I have been using it actively for a few years and putting aside everything the author is saying, just the base reliability is going downhill.
I guess Zig was right. Too bad they missed Buildkite; Codeberg hasn't been that reliable or fast in my experience.
And fixing the pyro-radio bug will bring other issues, for sure, so they won't, because someone's workflow will rely on the fact that turning on the radio sets the car on fire: https://xkcd.com/1172/
All of my customers are on bitbucket.
One of them does not even use a CI. We run tests locally and we deploy from a self-hosted TeamCity instance. It's a Django app with server-side HTML generation, so the deploy is copying files to the server and a restart. We implemented a Capistrano-like system in bash and it's been working since before Covid. No problems.
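For flavour, a minimal sketch of what a Capistrano-like bash deploy of that sort can look like (host, paths, and service name are hypothetical):

```bash
#!/usr/bin/env bash
# Copy a timestamped release, flip a "current" symlink, restart the service.
set -euo pipefail

host="deploy@app.example.com"
release="/srv/app/releases/$(date +%Y%m%d%H%M%S)"

ssh "${host}" "mkdir -p '${release}'"
rsync -az --exclude '.git' ./ "${host}:${release}/"
ssh "${host}" "ln -sfn '${release}' /srv/app/current && sudo systemctl restart app"
```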
The other one uses Bitbucket Pipelines to run tests after git pushes on the branches for preproduction and production, and to deploy to those systems. They use Capistrano because it's a Rails app (with a Vue frontend). For some reason the integration tests don't run reliably on either the CI instances or Macs, so we run them only on my Linux laptop. It's been in production since 2021.
A customer I'm not working with anymore did use Travis and another CI I don't remember. They also ran a build on there because they were using Elixir with Phoenix, so we were creating a release and deploying it. No mere file copying. That was the most unpleasant deploy system of the bunch. A lot of wasted time from a push to a deploy.
In all of those cases logs are inevitably long but they don't crash the browser.
If your CI invocations are anything more than running a script or a target on a build tool (make, etc.) where the real build/test steps exist and can be run locally on a dev workstation, you're making the CI system much more complex than it needs to be.
CI jobs should at most provide an environment and configuration (credentials, endpoints, etc.), as a dev would do locally.
This also makes your code CI agnostic - going between systems is fairly trivial as they contain minimal logic, just command invocations.
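As a sketch of that split, the CI job can boil down to exporting credentials and running a single script that developers also run locally (the file name and variable below are illustrative):

```bash
#!/usr/bin/env bash
# ci/test.sh - the CI job only supplies environment and credentials, then runs this
# one script; developers run the exact same script on their workstations.
set -euo pipefail

: "${DATABASE_URL:?set by the CI job or by your local environment}"

make lint
make test
```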
It's correct to design CI pipelines to offload much of the logic to subsystems, but pipelines will eventually grow in complexity, and the CI config system should be designed not to get in the way. I don't know Buildkite, but GitLab CI is the best I know. Template and job composition works brilliantly, the top-level object being the job rather than the stage results in flat, easier-to-read config files, and the packed-in features are really good. But it's hard to debug, the conditional logic sometimes fails in unexpected ways, it's exhausting to use the predefined variables reference, and the permission system for multi-project pipelines is abysmal.
I'd argue that this also dovetails very nicely with having common, shared invocations - if you can run "make test" in any repo and have it work, that makes CI code reuse even easier.
As for the complexity comments, that complexity has to go somewhere, and you should look for how to best factor the system so it's debuggable. Sometimes this may mean restructuring how your code is factored or deployed or has failure tolerance so it's easier to test, and this should be thought of as an architecture task early on.
My pet peeve with GitHub Actions was that if I want to do simple things like make a "release", I have to Google for and install packages from internet randos. Yes, it is possible this rando1234 is a founding GitHub employee and it is all safe. But why does something so basic need external JS packages?
nit: no, it was made by a group of engineers that loved git and wanted to make a distributed remote git repository. But it was acquired/bought out then subsequently enshittified by the richest/worst company on earth.
Otherwise the rest of this piece vibes with me.
All of that on top of a rock-solid system for bringing your own runner pools which lets you use totally different machine types and configurations for each type of CI job.
Highly, highly recommend.
But yes, Groovy is a much better language for defining pipelines than YAML. Honestly pretty much any programming language at all is better than YAML. YAML is fine for config files, but not for something as complex as defining a CI pipeline.
Jenkins is probably a bit like Java: technically it is fine. The problem is really where/who typically uses it, and since there is so much freedom it is really easy to make a monster. Whereas with Go it is a lot harder to write terrible, unmaintainable code compared to Java.
There are a bunch of failures of a build that have nothing to do with how your build itself works. Asking teams to rebuild all that orchestration logic into their builds is madness. We shouldn’t ask teams to have to replicate tests for features that are in the CI they use.
Integration of code quality gates, documentation checks, linting, cross architecture builds, etc.
Most of this can be solved by doing the builds in a docker image that we also maintain ourselves. Then what remains is the interaction between the ci config for matrices, the tasks/actions to report back quality metrics, the integration with keyvaults to obtain deploy time secrets, etc.
Then there are the soft failures, missing a cache key causing many packages to be downloaded over and over again, or the same for the docker base images, etc.
We fix this for our 1000+ microservices, across hundreds of teams by maintaining a template that all services are mandated to use. It removes whole classes of errors and introduces whatever shenanigans we introduce. But it works for us.
If GHA, Azure Pipelines, etc. provided a way of running builds locally, that would speed up our development greatly.
Until then we have created linting based on CUE to parse the various yamls, resolving references to keystores, key ids, templates, etc., and making sure they exist. I think this is generic enough to open source even.
I haven't used as many CI systems as the author, but I've used GH Actions, GitLab CI, CodeBuild, and spent a lot of time with Jenkins.
I've only touched Buildkite briefly 6 years ago, at the time it seemed a little underwhelming.
The CI system I enjoyed the most was TeamCity, sadly I've only used it at one job for about a year, but it felt like something built by a competent team.
I'm curious what people who have used it over a longer time period think of it.
I feel like it should be more popular.
But I don't know about competent people; reading their release notes always got me thinking "how can anyone write code where these bugs are even possible?". But I guess that's why many companies just write nonsense release notes today, to hide their incompetence ;)
Why do you consider TeamCity legacy? The latest release was just 2 months ago: https://www.jetbrains.com/help/teamcity/what-s-new-in-teamci...
>To make TeamCity more approachable for everyone, we’ve launched the pipelines initiative, and are investing heavily in reimagining the familiar UX. Complementing these efforts, we are excited to introduce the TeamCity AI Assistant.
Looks like it's under active development.
However, there are very real things LLMs can do that greatly reduce the pain here. Understanding 800 lines of bash is simply not the boogie man it used to be a few years ago. It completely fits in context. LLMs are excellent at bash. With a bit of critical thinking when it hits a wall, LLM agents are even great at GitHub actions.
The scariest thing about this article is the number of things it's right about. Yet my uncharacteristic response to that is one big shrug, because frankly I'm not afraid of it anymore. This stuff has never been hard, or maybe it has. Maybe it still is for people/companies who have super complex needs. I guess we're not them. LLMs are not solving my most complex problems, but they're killing the pain of glue left and right.
It’s hard to remember, sometimes, that Microsoft was one of the little gadflies that buzzed around annoying the Big Guys.
That aside, GH Actions doesn’t seem any worse than GitLab. I forget why I stopped using CircleCI. Price maybe? I do remember liking the feature where you could enter the console of the CI job and run commands. That was awesome.
I agree though that yaml is not ideal.
webhooks to an external system was such a better way to do it, and somehow we got away from that, because they don't want us to leave.
webhooks are to podcasts as github actions are to the things that spotify calls podcasts.
I've never used Nix or NixOS, but a quick search led me to NixOps, and then I realized v4 is being entirely rewritten in Rust.
I'm surprised they chose rust for glue code, and not a more dynamic and expressive language that could make things less rigid and easier to amend.
In the clojure world BigConfig [0], which I never used, would be my next stop in the build/integrate/deploy story, regardless of tech stack. It integrates workflow and templating with the full power of a dynamic language to compose various setups, from dot/yaml/tf/etc files to ops control planes (see their blog).
You might face that many times using GitLab CI. Random things don't work the way you think they should, and the worst part is you must learn their stupid custom DSL.
Not only that, there’s no way to debug the maze of CI pipelines but I imagine it’s a hard thing to achieve. How would I be able to locally run CI that also interacts with other projects CI like calling downstream pipelines?
Microsoft being Microsoft, I guess. Making computing progressively less and less delightful, because your boss sees that their buggy crap is right there, so why don't you use it?
Back in... I don't know, 2010, we used Jenkins. Yes, that Java thingy. It was kind of terrible (like every CI), but it had a "Warnings Plugin". It parsed the log output with regular expressions and presented new warnings and errors in a nice table. You could click on them and it would jump to the source. You could configure your own regular expressions (yes, then you have two problems, I know, but it still worked).
Then I had to switch to GitLab CI. Everyone was gushing how great GitLab CI was compared to Jenkins. I tried to find out: how do I extract warnings and errors from the log - no chance. To this day, I cannot understand how everyone just settled on "Yeah, we just open thousands of lines of log output and scroll until we see the error". Like an animal. So of course, I did what anyone would do: write a little script that parses the logs and generates an HTML artifact. It's still not as good as the Warnings Plugin from Jenkins, but hey, it's something...
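Something in the spirit of that script, as a rough sketch (the regex and file names are just examples):

```bash
#!/usr/bin/env bash
# Pull warnings and errors out of a build log and publish them as a small HTML artifact.
set -euo pipefail

log="${1:-build.log}"
out="warnings.html"

{
  echo "<html><body><h1>Warnings and errors</h1><pre>"
  grep -nE '(warning|error):' "$log" | sed 's/&/\&amp;/g; s/</\&lt;/g; s/>/\&gt;/g' || true
  echo "</pre></body></html>"
} > "$out"
```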
I'm sure, eventually someone/AI will figure this out again and everyone will gush how great that new thing is that actually parses the logs and lets you jump directly to the source...
Don't get me wrong: Jenkins was and probably still is horrible. I don't want to go back. However, it had some pretty good features I still miss to this day.
My browser can handle tens of thousands of lines of logs, and has Ctrl-F that's useful for 99% of the searches I need. A better runner could just dump the logs and let the user take care of them.
Why most web development devolved into a React-like "you can't search for what you can't see" is a mystery.
I start with a Makefile. The Makefile drives everything. Docker (compose), CI build steps, linting, and more. Sometimes a project outgrows it; other times it does not.
But it starts with one unitary tool for triggering work.
I had this fight for some years in my present work and was really nagging in the beginning about the path we were heading down by not letting developers run the full pipeline (or most of it) on their local machines… the project decided otherwise, and now we spend a lot of time and resources on a behemoth of a CI infrastructure, because each MR takes about 10 trial-and-error builds in the pipeline to be properly tested.
Yes, there will always be special exemptions: they suck, and we suffer as developers because we cannot replicate a prod-like environment in our local dev environment.
But I laugh when I join teams and they say that "our CI servers" can run it but our shitty laptops cannot, and I wonder why they can't just... spend more money on dev machines? Or perhaps spend some engineering effort so they work on both?
In my experience at work, anything that demands too much thought, collaboration between teams, and enforcement of hard development rules is always an unachievable dream in a medium-to-big project.
Note, that I don't think it's technically unachievable (at all). I just accepted that it's culturally (as in work culture) unachievable.
My experience has been that the problems in CI systems come from exactly these differences “works on my machine” followed by “oops, I guess the build machine doesn’t have access to that random DB”, or “docker push fails in our CI environment because credentials/permissions, but it works when I run it just on my machine”
It's just that management don't see it as worth it, in terms of development cost and limitations it would introduce in the current workflow, to enable the developers to do that.
What you're saying is essentially "Just Write Bash Scripts", but with an extra layer of insanity on top. I hate it when I encounter a project like this.
The only sane use for Makefiles is running a few simple commands in independent targets, but do you really need make then?
(The argument that "everyone has it installed" is moot to me. I don't.)
I use Fastlane extensively on mobile, as it reduces boilerplate and gives enough structure that the inherent risk of depending on a 3rd-party is worth it. If all else fails, it's just Ruby, so can break out of it.
I see the appeal of GitHub for sharing open source - the interface is so much cleaner and easier to find all you are looking for (GitLab could improve there).
But for CI/CD GitHub doesn’t even come close to GitLab in the usability department, and that’s before we even talk about pricing and the free tiers. People need to give it a try and see what they are missing.
#git --clone [URL]
These tool failures are a consequence of a failure of proper policy.
Tooling and Methodology!
Here’s the thing: build it first, then optimize it. Same goes for compile/release versus compile/debug/test/hack/compile/debug/test/test/code cycles.
That there is not a big enough distinction between a development build and a release build is a policy mistake, not a tooling ‘issue’.
Set things up properly and anyone pushing through git into the tooling pipeline is going to get their fingers bent soon enough anyway, and learn how the machine mangles digits.
You can adopt this policy of environment isolation with any tool - it’s a method.
Tooling and Methodology!
For what boils down to a personal take, light on technicalities, this reads like an uncannily impersonal, prolonged attempt at dramatic writing.
If you believe the dates in this blog, it's totally different in tone, style, and wording to a safely distant 2021 post (https://www.iankduncan.com/personal/2021-10-04-garbage-in-ne...).
It made me feel paranoid in just about three paragraphs. I apologize to the author if I'm wrong, but we all understand what my gut tells me.
It's fantastic for simple jobs, I use it for my hobbyist projects because I just need 20 to 30 lines to build and deploy a web build.
Just because a bike isn't good for traveling in freezing weather doesn't mean no one should own a bike.
Pick the right tool for the job.
Plus CI/CD is the boring part. I always imagined GH Actions as a quick and somewhat sloppy solution for hobbyist projects.
Not for anything serious.
> Every CI system eventually becomes “a bunch of YAML.” I’ve been through the five stages of grief about it and emerged on the other side, diminished but functional.
> I understand the appeal. I have felt it myself, late at night, after the fourth failed workflow run in a row. The desire to burn down the YAML temple and return to the simple honest earth of #!/bin/bash and set -euo pipefail. To cast off the chains of marketplace actions and reusable workflows and just write the damn commands. It feels like liberation. It is not.
Ah yes, misery loves company! There's nothing like a good rant (preferably about a technology you have to use too, although you hate its guts) to brighten up your Friday...
What's the accepted way to copy these into your own repo so you can make sure attackers won't update the script to leak my private repo and steal my `GITHUB_TOKEN`?
One thing people will say is to pin the commit SHA, so don't do "uses: randomAuthor/some-normal-action@v1", instead do "uses: randomAuthor/some-normal-action@e20fd1d81c3f403df57f5f06e2aa9653a6a60763". Alternatively, just fork the action into your own GitHub account and import that instead.
However, neither of these "solutions" work, because they do not pin the transitive dependencies.
Suppose I pin the action at a SHA or fork it, but that action still imports "tj-actions/changed-files". In that case, you would have still been pwned in the "tj-actions/changed-files" incident [2].
The only way to be sure is to manually traverse the dependency hierarchy, forking each action as you go down the "tree" and updating every action to only depend on code you control.
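A rough way to start that audit is to list every `uses:` reference in a fork and repeat the exercise for each dependency (paths below are illustrative; composite actions can nest further):

```bash
# List the transitive `uses:` references of a vendored/forked action so those
# can be forked and pinned too.
grep -rnE '^[[:space:]]*(-[[:space:]]+)?uses:' action.yml .github/ 2>/dev/null | sort -u
```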
In other package managers, this is solved with a lockfile - go.sum, yarn.lock, ...
[1] https://nesbitt.io/2025/12/06/github-actions-package-manager...
[2] https://unit42.paloaltonetworks.com/github-actions-supply-ch...
* Workflows are only registered once pushed to main, so it's impossible to test the first runs from a branch.
* MS/GH don't care as much about GHES as they do about github.com; I think they'd like to see it just die. Massive lack of feature parity.
* Labels: If any of your workflows trigger from a label, they ALL DO. You can't target labels only to certain workflows, they all run and then cancel, polluting your checks.
* Deployments: What is a deployment even doing? There is no management to deploy.
* Statefulness: No native way to store state between runs in the same workflow or PR, you would think you could save some sort of state somewhere but you have to manage it all yourself with manifests or something else.
I can go on
We're running GitHub Actions. It's good. All the real logic is in Nix, and we mostly use our own runners. The rest of the UI that GitHub Actions provides is very nice.
We previously used a CI vendor which specialised in building Nix projects. We wanted to like it, but it was really clunky. GitHub Actions was a significant quality of life improvement for us.
None of my colleagues have died. GitHub Actions is not killing my engineering team at any rate.
- Ubuntu useradd command causes 30s+ hang [1]
- Ubuntu: sudo -u some-user unexpectedly ends up with environment variables for the runner [2]
They told you why it takes so long, no? The runners come by default with loads of programming languages installed (Rust, Haskell, Node, Python, .NET, etc.), so it sets all of that up per user add.
I would also question why you're adding users on an ephemeral runner.
We use runners for things that aren't quite "CI for software source code" and that do some "weird" stuff.
For instance, we require that new developer system setup be automated - so we have a set of scripts to do that, and a CI runner that runs on those scripts.
Don't know exactly what you're doing, but others (myself included) are using Mise or Nix on a per-project basis to automate the development environment setup, and that works well on GitHub Actions.
But I don't think useradd taking 30s on GitHub Actions is a bug or something they need to fix; they've explained why. Unsure about the sudo issues, did not read it carefully.
Oh we don't even run it in applications' CI, the environment automation is an entirely separate CI workflow. The intention isn't consistency between dev/CI, the environment automation CI effectively just serves to ensure that the automations actually run without error, and adds some explicit responsibility for anyone who's adding a new dependency.
> But I don't think useradd taking 30s on GitHub Actions is a bug or something they need to fix; they've explained why. Unsure about the sudo issues, did not read it carefully.
Yeah, agreed. Tangential, our dev setup CI is fairly slow, which tends to be fine - it runs a couple orders of magnitude less frequently than our app CI.
I have one job that runs a shell script that runs tests, a second one that builds and pushes the docker image, and a third one that triggers CD.
Could it be faster? Yes. Could the log viewer be better? Yes. Could the configuration file format be better? Yes. Could the credentials work better? Yes.
However they're well integrated with GitHub (including GHCR), work well and are affordable.
But also, CI should be the last line of defense, not the first line.
If your system is not byzantine, you should be able to run almost all your tests locally and not need to boot a cloud machine that has to be setup from scratch and deal with all the overhead in your core loop.
Having a build system that knows what tests need to be run helps here since you're no longer just throwing compute at the problem.
Our scenario: relatively simple monorepo, lots of docker, just enough bash, trunk-based dev strategy. It's great for that.
We're running a self-hosted GitLab -> hosted GitHub migration at my company (which to me feels like a downgrade), and without LLMs I would have spent weeks just researching syntax for how to implement the requirements I had.
I asked Claude to simply "translate these GL templates to GH actions, I want 1 flow for this, 1 flow for that, etc" and it mostly worked. Then in the repos I link the template and ask Claude to write the workflow that uses the template with the correct inputs. I think I saved maybe 3 months worth of coding and debugging workflows. Besides maybe picking slightly outdated actions (e.g. action@v4 instead of action@v6), 95% of the work was ok, and I had to tweak a couple things afterwards.
Commit with one character YAML difference? Check.
Commit with 2-3 YAML lines just to add the right logging? Check.
Wait 5+ minutes for a YAML diff to propagate through our test pipeline for the nth time today? .. sigh .. check
BUT, after ironing all these things out (and running our own beefy self-hosted runner which is triggered to wake up when there's a test process to snack on), it's .. uh.. not so bad? For now?
Currently evaluating using moonrepo.dev to attempt to efficiently build our code. What I've noticed is (aside from Bazel) it seems a lot of monorepo tools only support a subset of languages nicely. So it's hard to evaluate fairly as language support limits one's options. I found https://monorepo.tools to be helpful in learning about a lot of projects I didn't know about.
That said - every CI sucks one way or another; GitHub Actions is just good enough to fire up a simple job/automation, which seems to be the majority of use cases anyway?
I think fully production CI pipelines will always be complicated in one way or another (proper caching alone is a challenge on its own); I really need to check out Woodpecker CI (the Drone CI fork) though, as I have good memories of Drone CI, but possibly it's just because I was younger back then xd
I have to admit, I have limited experience with GitHub Actions though. My benchmark is GitLab mainly.
> With Buildkite, the agent is a single binary that runs on your machines.
Yes, and so it is for most other established CI systems with differing variance in orchestrator tooling to spawn agents on demand on cloud providers or Kubernetes. Isn't that the default? Am I spoiled?
> Buildkite has YAML too, but the difference is that Buildkite’s YAML is just describing a pipeline. Steps, commands, plugins. It’s a data structure, not a programming language cosplaying as a config format. When you need actual logic? You write a script. In a real language. That you can run locally. Like a human being with dignity and a will to live.
Again, isn't that the default with modern CI tools? The YAML definition is a declarative data structure that lets me represent which steps to execute under which conditions. That's what I want from my CI tooling, right? That's why declarative pipelines are what everyone's doing right now, and I haven't really heard a lot of people wanting to implement the orchestration of their entire pipeline imperatively instead and run it on a single machine.
But that's where you'll run into limitations pretty soon with Buildkite. You have `if` conditionals, but they're quite limited. You've finally had `if_changed` for a few months now, which you can use to run steps only if the commit / PR / tag contains changes to certain file globs, but it's again quite rudimentary. Also, you can't combine it with `if` conditionals, so you can't implement a full rebuild independent of file changes - which should be a valid feature, e.g. nightly or on main branches.
The recommended solution to all that:
> Dynamic Pipelines > In Buildkite, pipeline steps are just data. You can generate them.
To me, that's the cursed thing about Buildkite. You start your pipeline declaratively, but as soon as you branch out of the most trivial pipelines, you'll have to upload your next steps imperatively if a certain condition is met. Suddenly you'll end up with a Frankensteinian mess that looks like a declarative pipeline declaration initially, but when you look deeper you'll find a bunch of 20+ bash scripts that upload more pipeline fragments from Heredocs or other YAML files conditionally and even run templating logic on top of them. You want to have a mental model on what's happening in your pipeline upfront? You want to model dependencies between steps that are uploaded under different conditions somewhere scattered through bash scripts? Good luck with that.
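For anyone who hasn't seen it, a small sketch of what that pattern looks like in practice (the script, paths, and step labels are made up):

```bash
#!/usr/bin/env bash
# Decide at runtime which steps exist, then pipe generated YAML to the agent.
set -euo pipefail

if [ -n "$(git diff --name-only origin/main...HEAD -- frontend/)" ]; then
  buildkite-agent pipeline upload <<'YAML'
steps:
  - label: "frontend tests"
    command: "./scripts/test-frontend.sh"
YAML
fi
```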
I really don't see how you can market it as a feature, that you make me re-implement CI basics that other tools just have and even make me pay for it.
And I also don't see how that is more testable locally than a pipeline that's completely declared in YAML. Especially when your scripts need to interact with the buildkite-agent CLI to download and upload artifacts and metadata, and to upload yet more pipelines.
> I’ll be honest: Buildkite’s plugin system is structurally pretty similar to the GitHub Actions Marketplace. You’re still pulling in third-party code from a repo. You’re still trusting someone else’s work. I won’t pretend there’s some magic architectural difference that makes this safe.
Yep it is and I don't like either. I prefer GitLab's approach of sharing functionality and logic via references to other YAML files checked into a VCS. It's way easier to find out what's actually happening instead of tracing down third-party code in a certain version from an opaque market place.
But yes, the log experience and the possibility to upload annotations to the pipeline is quite nice compared to other tools I've used. Doesn't outweigh the disadvantages and headaches I had with it so far though.
---
I think many of the critique points the author had on GitHub Actions can be avoided by just using common sense when implementing your CI pipelines. No one forces you to use every feature you can declare in your pipelines. You can still declare larger groups of work as steps in your pipeline and implement the details imperatively in a language of your choice. But to me, it's nice to not have to implement most pipeline orchestration features myself and to just use them - resulting in a clear separation of concerns between orchestration logic and actual CI work logic.
Very helpful for a monster repo with giant task graph
The actual problem is using a bunch of unportable vendor YAML for literally anything.
Define your entire build + artifact publishing pipeline in something like Bazel, Nix, etc and completely decouple everything from the runner. This allows running it locally and also switching runners extremely easily if one of them is no longer to your liking.
Don't fall prey to the vendor YAML trap.
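As a sketch of that decoupling, assuming a Bazel workspace (the target names are placeholders), the CI step reduces to calling one script that also runs unchanged on a laptop or on any other vendor's runner:

```bash
#!/usr/bin/env bash
# ci.sh - the runner's only job is to call this script.
set -euo pipefail

bazel test //...             # build and test everything; Bazel caches what hasn't changed
bazel run //tools:publish    # publish artifacts from the build graph, not from vendor YAML
```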