With love, rye is all vision, philosophy and duct tape. uv is built by a full-time team.
EDIT: I still like how hatch allows for defining multiple envs for a given project, with the ability to switch between them / run in them by name.
I wonder if you plan to extend the functionality for building and publishing packages - for example, support for dynamic versions (e.g. derived from GitHub tags) and trusted publishers.
I like the idea of that single-file Python script with inline dependency info construct, but it's probably going to be a bummer in terms of editor experience. I doubt the typical LSP servers will be aware of dependencies specified in that way and so won't offer autocompletion etc for those libraries.
The script dependency metadata _is_ standardized[2], so other LSP servers could support a good experience here (at least in theory).
[1] The Ruff Language Server: https://astral.sh/blog/ruff-v0.4.5
[2] Inline script metadata: https://peps.python.org/pep-0723/
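For reference, the PEP 723 format is just a TOML block inside a leading comment; a minimal sketch (the `requests` dependency is only an example):

    # /// script
    # requires-python = ">=3.12"
    # dependencies = ["requests"]
    # ///
    import requests

    print(requests.get("https://example.com").status_code)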
Plus the potential for supply chain attacks. I know the official Python releases are about as good as I can expect from a free open source project, while a third-party Python distribution is probably being built by some random person in Nebraska.
If you have some other tool manager on your system (e.g. mise) then you can likely install uv through that.
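For example (assuming mise ships uv in its tool registry; the exact syntax may differ by version):

    mise use -g uv@latest
    uv --version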
I have been working with python for over 10 years and have standardized my workflow to only use pip & setuptools for all dependency management and packaging [1]. It works great as a vanilla setup and is 100% standards based. I think uv and similar tools mostly shine when you have massive codebases.
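For illustration, a minimal sketch of such a vanilla, standards-based setup - a pyproject.toml using the setuptools backend (names and versions here are placeholders):

    [build-system]
    requires = ["setuptools>=64"]
    build-backend = "setuptools.build_meta"

    [project]
    name = "example-package"     # placeholder
    version = "0.1.0"
    dependencies = ["requests"]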
Now, since uv is in Rust, we'll need a dependency manager manager manager for it to compile on any OS that's not rolling-release, since what rustc is changes every 3 months and breaks forward compatibility.
I'll check back in three years (3x longer than Astral has existed so far) and see if uv still exists and has become stable enough to use.
I can't wait to set this up, I'm very confident they'll iron out any remaining bugs or gotchas (e.g. CUDA) quickly.
We've invested quite a bit of effort into finding system Python interpreters, though, and support for bringing your own Python versions isn't going anywhere.
Anyways: software supply chain security, signing Python and package builds, and then signing containers too.
Conda-forge's builds are probably faster than the official CPython builds. conda-forge/python-feedstock/recipe/meta.yaml: https://github.com/conda-forge/python-feedstock/blob/main/re...
Conda-forge also has OpenBLAS, BLIS, Accelerate, Netlib, and Intel MKL; conda-forge docs > switching BLAS implementation: https://conda-forge.org/docs/maintainer/knowledge_base/#swit...
From "Building optimized packages for conda-forge and PyPI" at EuroSciPy 2024: https://pretalx.com/euroscipy-2024/talk/JXB79J/ :
> Since some time, conda-forge defines multiple "cpu-levels". These are defined for sse, avx2, avx512 or ARM Neon. On the client-side the maximum CPU level is detected and the best available package is then installed. This opens the doors for highly optimized packages on conda-forge that support the latest CPU features.
> We will show how to use this in practice with `rattler-build`
> For GPUs, conda-forge has supported different CUDA levels for a long time, and we'll look at how that is used as well.
> Lastly, we also take a look at PyPI. There are ongoing discussions on how to improve support for wheels with CUDA support. We are going to discuss how the (pre-)PEP works and synergy possibilities of rattler-build and cibuildwheel
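As a rough illustration of the client-side detection mentioned above: conda's CPU virtual packages are built on the archspec library, so detecting the host's CPU level looks something like this (a sketch, not conda-forge's actual package-selection logic):

    # Report the host microarchitecture and check a CPU feature,
    # using archspec (the library behind conda's __archspec virtual package).
    import archspec.cpu

    host = archspec.cpu.host()
    print(host.name)                # e.g. "skylake_avx512"
    print("avx2" in host.features)  # True on an AVX2-capable CPU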
Linux distros build and sign Python and python3-* packages with GPG keys or similar, and the package manager then optionally checks the per-repo keys for each downloaded package. Packages should include a manifest of files to be installed, with per-file checksums. Package manifests, and/or the package containing the manifest, should be signed, so that tools like debsums and rpm --verify can detect changes to disk-resident executables, scripts, data assets, and configuration files.
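A hedged sketch of the per-file-checksum idea (not the actual debsums/rpm implementation; the manifest format here is made up):

    # Verify files against a manifest of "sha256-hex<space>path" lines,
    # roughly what debsums / rpm --verify do for installed packages.
    import hashlib
    import sys

    def verify(manifest_path: str) -> bool:
        ok = True
        with open(manifest_path) as manifest:
            for line in manifest:
                digest, path = line.strip().split(maxsplit=1)
                with open(path, "rb") as f:
                    actual = hashlib.sha256(f.read()).hexdigest()
                if actual != digest:
                    print(f"MODIFIED: {path}")
                    ok = False
        return ok

    if __name__ == "__main__":
        sys.exit(0 if verify(sys.argv[1]) else 1)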
virtualenvs can be mounted as a volume at build time with -v with some container image builders, or copied into a container image with the ADD or COPY instructions in a Containerfile. What is added to the virtualenv should have a signature and a version.
ostree native containers are bootable host images that can also be built and signed with a SLSA provenance attestation; https://coreos.github.io/rpm-ostree/container/ :
rpm-ostree rebase ostree-image-signed:registry:<oci image>
rpm-ostree rebase ostree-image-signed:docker://<oci image>
> Fetch a container image and verify that the container image is signed according to the policy set in /etc/containers/policy.json (see containers-policy.json(5)).

So, when you sign a container full of packages, you should check the package signatures, and verify that all package dependencies are identified by the SBOM tool you plan to use to keep dependencies upgraded when there are security upgrades.
e.g. Dependabot - if working - will regularly run and send a pull request when it detects that the version strings in e.g. a requirements.txt or environment.yml file are out of date and need to be changed because of security vulnerabilities reported in ossf/osv-schema format.
Is there already a way to, as a developer, sign Python packages built with cibuildwheel with Twine and TUF or sigstore to be https://SLSA.dev/ compliant?
Fwiw I’m building a thing [1] that does this. Current docs suggest Rye but will s/rye/uv/ shortly. It’s basically just some CLI commands and a Hatch/PDM plugin that injects needed stuff at build-time.
Most people not on the bleeding edge use conda, not poetry? And people who are hip use rye and uv? Or rather, that was up until today - now they'll just use uv where possible?
I'm actually building a system around user-installed plugins, where there is a UI to search for and install plugins on the fly.
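A minimal sketch of the install-on-the-fly part, assuming plugins are ordinary PyPI packages (the package and module names here are hypothetical):

    # Install a plugin package into the current environment, then import it.
    import importlib
    import subprocess
    import sys

    def install_plugin(package_name: str, module_name: str):
        subprocess.check_call([sys.executable, "-m", "pip", "install", package_name])
        importlib.invalidate_caches()  # make the newly installed package importable
        return importlib.import_module(module_name)

    # Hypothetical usage:
    # plugin = install_plugin("someplugin", "someplugin")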
Also, one other thing just to double check: is it now very uncool, or considered bad practice, to use dynamic or flexible types in Python?
The exception is if they have specific dependencies outside the CPython ecosystem - in which case they’ll probably be using conda. Examples of such dependencies include nodejs/cuda/cublas/specific versions of gcc. Webdev generally doesn’t have as many of these dependencies compared to the data world, which is why conda is less popular there.
Speaking in sweeping generalities here: you probably don’t need poetry, uv, or kin - at all. But there’s nothing wrong with choosing to use them if you prefer to do so either.
In the same way, none of the “fancy-pip-replacement” projects will outright obsolete pip or conda. They're just tools that can work a bit more intuitively for new users and provide a bit of polish/UX value - but they fill the exact same niche as pip/conda: managing the set of binaries on your PATH.
https://docs.astral.sh/uv/concepts/python-versions/#discover...
https://rye.astral.sh/guide/toolchains/#registering-toolchai...
Given this is a recently accepted standard (PEP 723), why would language servers not start to support features based on it?
Consider an editor feature, e.g. goto-definition. When working in a normal Python environment (global or virtual) the code of your dependencies actually exists on the filesystem. With one of these scripts with inline dependency information, that dependency code perhaps doesn't exist on the filesystem at all, and possibly won't ever until after the script has been run in its special way (e.g. with a `pipx run` shebang?).
https://prefix.dev/blog/uv_in_pixi
Reasons for liking pixi, over e.g. poetry:
- Like poetry, it keeps everything in a project-local definition file and environment
But unlike poetry, it also:
- Can install python itself, even ancient versions
- Can install conda packages
- Is extremely fast
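For example, a typical flow looks something like this (assuming current pixi CLI verbs):

    pixi init myproject
    cd myproject
    pixi add python=3.9 numpy    # conda packages, including Python itself
    pixi run python -c "import numpy; print(numpy.__version__)"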