On that last one, there's a potential bug in the deployment pipeline here – deploys could run simultaneously, or with some bad luck on runner speed an older version of the code could even go out after a newer one. Combined with the automated database migrations, this could be quite a big problem!
Actions thankfully solved this recently with the `concurrency` key that lets you form a serial queue by a given key such as the branch name.
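For illustration, a deploy workflow serialized per branch might look something like this (a minimal sketch; the group name is made up):

```yaml
# Serialize deploys per branch: at most one run at a time for a given group.
concurrency:
  group: deploy-${{ github.ref }}
  cancel-in-progress: false
```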
What happens if there are conflicting migrations on two "parallel" branches?
What happens in your bad-luck situation, when commits are pushed in rapid succession on the same branch?
GitHub Actions isn't focused solely on CI, which makes it much more broadly useful.
Like...the last time I checked, workflows had no runtime way to limit execution to the default branch other than naming that branch explicitly. The closest you could get to "whatever the default branch is called right now" was either a template workflow that bakes the name in as static text at creation time (which breaks if the default branch is later renamed), or a song and dance: query the API in one workflow step, set an environment variable, and gate every subsequent step on the result. This was a long time after they made default branch names editable, and it seems like such an obvious oversight.
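For the record, the "song and dance" workaround looks roughly like this (a sketch assuming the gh CLI is on the runner, as it is on GitHub-hosted ones; the deploy script is a placeholder):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Ask the API what the default branch is called right now
        run: |
          DEFAULT=$(gh api "repos/${GITHUB_REPOSITORY}" --jq .default_branch)
          echo "IS_DEFAULT_BRANCH=$([ "${GITHUB_REF_NAME}" = "$DEFAULT" ] && echo true || echo false)" >> "$GITHUB_ENV"
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      # ...and every subsequent step has to be gated on the result:
      - name: Deploy
        if: env.IS_DEFAULT_BRANCH == 'true'
        run: ./deploy.sh   # hypothetical deploy script
```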
Then there are weird quirks, like the file-system permissions inside subshells that force you to use sudo just to move files around within your own repo clone from an invoked shell script.
Does that address your need?
Edit with more context: We use CI for deploying to GitLab.com and use resource_group to prevent multiple jobs from running concurrently. What we lack is the ability to prevent multiple pipelines from running concurrently (resource_group is at the job level). It looks like concurrency for actions https://docs.github.com/en/actions/reference/workflow-syntax... can work on groups which is a bit nicer. There is some discussion about making this better for GitLab in https://gitlab.com/gitlab-org/gitlab/-/issues/217522
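For anyone who hasn't seen it, resource_group is a single line on the job, which is exactly why it can't help at the pipeline level (sketch; the script name is a placeholder):

```yaml
# GitLab CI: only one 'deploy' job runs at a time across pipelines,
# but the rest of two concurrent pipelines can still interleave.
deploy:
  script: ./deploy.sh
  resource_group: production
```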
They also offer triggering workflows from other workflows with 'workflow_run', but that only fires on the default branch. We auto-build testing environments on each PR, and I'd love to be able to have better workflow management based on branches.
It's cost me hundreds to thousands of dollars to implement nontrivial workflows, because of how the YAML is parsed (for example, a secret that has been renamed or removed silently expands to an empty string) and the lack of introspection or debuggability when something goes wrong.
It's gotten to the point where any new workflows I write are thin wrappers around a single script, and I don't import any actions besides actions/checkout (even that has historically been bug-prone).
All that said, it's not like other platforms are better. But they certainly are cheaper and don't have dumb breakages when you need cross-platform builds (has upload-artifact been fixed for executables on macOS yet?)
Not being able to execute it locally has also wasted a lot of time, and led to people making 50+ changes to the master branch until they get it right.
I've been looking for a way to post regularly without actually having to do it. TweetDeck is nice, but it's still more hassle than filling a folder with Markdown files.
CI is a script, and the YAML configs for those various services configure the machine type, OS and toolchain. Everything else is contained within the script. Sometimes even toolchain setup is handled by the script.
Not following this model has wasted so much time when migrating services or trying to tweak what CI does.
With a script you can run it locally to ensure it performs the steps desired, leaving the CI “setup” to minimal environment/toolchain debugging.
It's super flexible and you can do literally anything.
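To make the model concrete, a workflow under this approach can shrink to little more than the following (a sketch; ci.sh is a made-up script name):

```yaml
name: ci
on: [push, pull_request]
jobs:
  ci:
    runs-on: ubuntu-latest        # the YAML only picks the machine/OS
    steps:
      - uses: actions/checkout@v4
      - run: ./ci.sh              # everything else lives in the script, which also runs on a laptop
```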
However, the workflow referencing the custom action would still use the Actions YAML syntax.
(disclaimer: I work at GH, but not on Actions)
You might also want to take a look at act, which lets you run GitHub Actions locally; this is typically how I do Actions development:
I'm curious what prevents you from writing your own actions in typescript now?
Even git's early days were well after the last ARPANET days.
Edit: I suppose they just mean 'since time immemorial', but in internets. Perhaps I'm just tired/'whooshed' but I think that was more confusing than they were going for!
Compared to what? I.e. Jenkins, CircleCI, GitLab CI/CD... none of them are "focused on CI", and they can more or less do anything you want them to. I'm not trying to be argumentative, but I'm having trouble thinking of a CI/CD system that is more (or less) focused on CI than GitHub Actions is - do you have some examples?
I don't use the web editor, but more importantly it can't catch logical errors (missing required with: arguments, secrets that don't exist, environment variable names, etc).
> I'm curious what prevents you from writing your own actions in typescript now?
When I say "I want to write actions in typescript" I mean that I want to specify the entirety of my CI using a typescript program, without any YAML configuration. In particular, the jobs of a workflow themselves.
I have many jobs shared between build/test/release with slightly different triggers and configurations, but the only way to handle this in Actions (especially when using imported actions) is by copying/pasting YAML. That wound up being untenable, and it's why I stripped out all action dependencies and wrote the automation not to use them, so that every workflow is just a thin wrapper around the same script.
I've also had use cases for recursive workflows.
Act doesn't cover any of my use cases.
End-to-end TypeScript, from IaC to FaaS.
Workflows are statically typed in that they have event triggers - ideally with an event schema (which we generate for you) - and each "action" has static typing of inputs/outputs (plus additional workflow configuration for reusability). They're defined as code, and can be viewed or edited visually.
It also bundles an event hub, so you can automatically run workflows when events happen in real time. For example, if you want to run a churn flow on signup, create a workflow with a `signup.new` event trigger. The workflows can also coordinate between events, too, so in the churn workflow you can wait for an "interactivity" event from the user for up to 1 day, then time out and run some other flow/logic.
It's workflows, generalised. As if you put GitHub Workflows, Lambda, Segment, and Zapier in a blender.
If you want early access, you can always reach me at tony [at] inngest.com. I'm rolling out invites every week.
https://docs.gitlab.com/ee/ci/pipelines/schedules.html
Similarly, you can trigger pipelines from various angles https://docs.gitlab.com/ee/ci/triggers/ using the API.
If you are looking to combine it with events on-demand, the webhooks may come in handy. https://docs.gitlab.com/ee/user/project/integrations/webhook...
Agreed, some ad hoc actions are project-specific, though you can programmatically walk through them in API client code, for example searching for a group and triggering all of its projects' pipelines.
https://python-gitlab.readthedocs.io/en/stable/gl_objects/gr... https://python-gitlab.readthedocs.io/en/stable/gl_objects/pi...
A good exercise is to look at a simple but nontrivial build/test/release automation:
- on pull requests, build & test; on tags, build, test, and release; and once a day, release a nightly build.
- cache dependencies globally and build artifacts on each branch for incremental builds
- once daily, clear out artifacts on deleted branches or merged PRs
A script to do this would be around 100 lines and be readable/maintainable (if build/test or release change, the logic is shared across the different cases!). The script can have a main entrypoint that dispatches to the runners as needed (see the sketch below).
I know it sounds a lot like Jenkins, but Node has a much better ecosystem than Groovy.
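Here's roughly what the YAML side can shrink to when the logic lives in one script (a sketch; the dispatcher name, cache path, and schedule are made up):

```yaml
name: automation
on:
  pull_request:
  push:
    tags: ['v*']
  schedule:
    - cron: '0 3 * * *'                              # nightly build + daily artifact cleanup
jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: ~/.npm                               # hypothetical dependency cache location
          key: deps-${{ hashFiles('package-lock.json') }}
      - run: node ci.js "${{ github.event_name }}"   # single entrypoint dispatches on the trigger
```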
Why can't I have a separate interface where I just say "build this GitHub project, and put the content on this on-prem server/kube cluster/VM/whatever."
Another trick that works well is putting GitHub Actions in an entirely separate repository. There's nothing to stop actions in one repo from checking out code from another - I use that trick quite frequently.
You do have to jump through a few extra hoops to set it up so that code in your actions repo starts running automatically on commits to your main repo, but you can do that with a small action in the main repo that triggers a build in the actions repo.
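That small action can be little more than a repository_dispatch call - roughly this (the actions repo name and PAT secret are placeholders):

```yaml
name: trigger-actions-repo
on:
  push:
    branches: [main]
jobs:
  trigger:
    runs-on: ubuntu-latest
    steps:
      - run: gh api "repos/${{ github.repository_owner }}/actions-repo/dispatches" -f event_type=main-repo-push
        env:
          GH_TOKEN: ${{ secrets.ACTIONS_REPO_TOKEN }}   # PAT with access to the actions repo
```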
CI isn't _necessarily_ our current target. As of yet, there's no concept of "failed runs" in the classic CI sense. We highlight failed workflows but will retry actions by default. If things fail, we allow you to retry/edit the data, debug inline, etc.
It's definitely possible to use the failed workflow runs as a CI pipeline, but the UX would need an overhaul. Right now, we do user-event attribution to show you which workflows users are running in your system, their version, and the steps - which is needed for actual, live system workflows vs CI workflows.
Best example of that is here: https://github.com/simonw/covid-19-datasette/blob/main/.gith...
- You can't pull in private dependencies published from other repos (for example, packages published on repo A used as a dependency on repo B) without using a personal access token (see the sketch after this list).
- You can't use git pulls from other repos (for example, repo B using `orgname/repoA#123456` as a dependency in package.json) without using a personal access token, and it's a pain in the ass to make it work across workflow steps.
- You can't allow Dependabot to run as a trusted user, which makes it impossible to actually use any of the workarounds for the above issues with it.
- You can't create PRs to publish changes across repos (such as automatically keeping some set of files in sync) without using a personal access token.
There are other complications, but those are the biggest ones.
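For the first two items, the token workaround ends up looking something like this (a sketch; the secret name is a placeholder, and the same token has to be wired into every step that needs it):

```yaml
# Check out (or pull packages from) another private repo using a personal access token.
- uses: actions/checkout@v4
  with:
    repository: orgname/repoA
    token: ${{ secrets.CROSS_REPO_PAT }}   # hypothetical PAT with read access to repoA
    path: vendor/repoA
```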
Does GitLab have a response planned?
(There's a workaround for the dependabot issue though, use pull_request_target instead and explicitly check out the sha of the branch. Then the run can access the secrets.)
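Roughly like this - with the usual caveat that pull_request_target hands repo secrets to whatever you run, so be careful executing the PR's code (sketch; the build script is a placeholder):

```yaml
on: pull_request_target
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}   # explicitly check out the PR head
      - run: ./build.sh                                    # runs with access to secrets
```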
I would also add "you can't rerun single jobs" and "actions can't call other actions" to the list of grievances.
Do you think it would be difficult? GitHub Actions can be boiled down to running shell commands (with a bunch of other handy features), so it's quite versatile. At times it does require you to make workflows that are a bit convoluted, but all in all I think it's pretty good.
Testing a pipeline that depends on a merge to a branch, or a specific tag, is troublesome. Easier to just iterate in the mainline until you're ready.
For example, recently when they had to reduce access to GitHub Actions because miners were abusing it.
Whereas my local configuration management (Ansible, Puppet etc.) script can always run anywhere, and I can even run it on my own build VM if I need to.
IME I prefer the YAML files because it forces the "DevOps" guy to put most of the stuff in outside scripts, which can then be tested and run on a local dev machine.
Nothing worse than having to debug a freaking CI runner. I never got why a job should be more complicated than "fetch -> build -> test -> deploy" with maybe a bit of intelligence to handle build triggers and artifact management.
I don't want to deal with a web of Groovy scripts with sed hacks everywhere, involving 10 other jobs that do the same kind of unholy, blasphemous hacks and constantly break one another.
A graphical development environment and TS library set - something like what UML wanted to be for OOP, only for everything we need and want to do with our code, in any development environment: from the enterprise VS group wallet-grab setup we still run to a "redmode" for working from Emacs.
It's not all as bad as XKCD says about creating new standards: the world of code writers is big enough now for many equivalent standards to thrive in coexistence. We're probably still too emotionally scarred from the very recent years when anything that smacked of making standards and arguing between them had much more serious consequences than is possible today.
I think platforms like Nextcloud try to build some workflow engines, but I don't know how far along that is or in which format they are implemented. I don't like Zapier, but many people seem to like it.
As for running migrations on the same database from multiple branches: not a good idea. It's probably best to decide on the branch that is the release branch (maybe main/master) and only deploy from that.
GitHub Actions', Semaphore's, or even Jenkins's concurrency primitives are pretty capable, so I'd probably go with one of those.
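Put together, that advice boils down to something like this (sketch):

```yaml
# Deploy (and run migrations) only from the release branch, one run at a time.
on:
  push:
    branches: [main]
concurrency:
  group: production-deploy
  cancel-in-progress: false
```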
Apart from feeling like they were in that sort of transition, GitLab's docs were fine - better than Circle's - but I find GitHub Actions' docs much clearer.
This gives me ideas :-)