https://about.scarf.sh/post/python-wheels-vs-eggs
The metadata problem is related to the fact that pip had an unsound resolution algorithm: "resolve something optimistically and hope it works; when you get stuck, try to backtrack."
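To make the SAT/SMT framing concrete: treat each (package, version) pair as a boolean, encode "depends on" as implications and "at most one version" as mutual exclusion, and let a solver pick an assignment. A toy sketch with the third-party z3-solver package; the package names and constraints are invented, and real resolvers (pip, uv) use their own algorithms rather than Z3 — this only illustrates the encoding:

```python
from z3 import And, Bool, Implies, Not, Or, Solver, is_true, sat

# Hypothetical (package, version) pairs as booleans: True means "install it".
a1, b1, b2, c1 = Bool("A-1.0"), Bool("B-1.0"), Bool("B-2.0"), Bool("C-1.0")

s = Solver()
s.add(a1)                        # the user asked for A 1.0
s.add(Implies(a1, Or(b1, b2)))   # A 1.0 depends on B (1.0 or 2.0)
s.add(Implies(b2, c1))           # B 2.0 depends on C 1.0
s.add(Not(And(b1, b2)))          # at most one version of B may be installed

if s.check() == sat:
    model = s.model()
    chosen = [str(v) for v in (a1, b1, b2, c1) if is_true(model.evaluate(v))]
    print(chosen)  # one consistent install set; exact choice depends on the solver
```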
I did a lot of research along the lines that led to uv 5 years ago and came to the conclusion that, when installing from wheels, you can set up an SMT problem the same way Maven does and solve it right the first time. There was a PEP to publish metadata files for wheels on PyPI, but before that I'd built something that could pull the metadata out of a wheel with just 3 HTTP range requests. I assumed that any given project might depend on a legacy egg; in those cases you can build that egg into a wheel via a special process and store it in a private repo (a must for the perfect Python build system).
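The range-request trick works because a wheel is just a zip: you can fetch the end-of-central-directory record, then the central directory, then the compressed `*.dist-info/METADATA` entry, each with its own Range request. A minimal sketch of the idea (not the commenter's actual tool, and it makes a handful of requests rather than exactly three); `HttpRangeReader` and `remote_wheel_metadata` are names invented here:

```python
import urllib.request
import zipfile


class HttpRangeReader:
    """A read-only, seekable file-like object backed by HTTP Range requests."""

    def __init__(self, url):
        self.url = url
        self.pos = 0
        # One HEAD request to learn the total size; most wheel hosts
        # (e.g. files.pythonhosted.org) return Content-Length here.
        head = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(head) as resp:
            self.size = int(resp.headers["Content-Length"])

    def seekable(self):
        return True

    def tell(self):
        return self.pos

    def seek(self, offset, whence=0):
        if whence == 0:      # absolute
            self.pos = offset
        elif whence == 1:    # relative to current position
            self.pos += offset
        elif whence == 2:    # relative to end of file
            self.pos = self.size + offset
        return self.pos

    def read(self, n=-1):
        if n is None or n < 0:
            n = self.size - self.pos
        if n <= 0 or self.pos >= self.size:
            return b""
        end = min(self.pos + n, self.size) - 1
        req = urllib.request.Request(
            self.url, headers={"Range": f"bytes={self.pos}-{end}"}
        )
        with urllib.request.urlopen(req) as resp:  # expect 206 Partial Content
            data = resp.read()
        self.pos += len(data)
        return data


def remote_wheel_metadata(wheel_url):
    """Return the METADATA text of a remote wheel without downloading it all."""
    with zipfile.ZipFile(HttpRangeReader(wheel_url)) as zf:
        name = next(n for n in zf.namelist()
                    if n.endswith(".dist-info/METADATA"))
        return zf.read(name).decode("utf-8")
```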
I get that Python is, strictly speaking, an older language. But it isn't as if these are new considerations.
But that is by choice. I, as a user, am forced to debug this pile of garbage whenever things go wrong, so in a way it's even worse for users. It's a running joke in the machine learning community that the hard part about machine learning is dealing with Python packages.
Range requests are used by both uv and pip if the index supports it, but they have to make educated guesses about how reliable that metadata is.
The main problems are local packages during development and source distributions.
There is a need for a complete answer for dev and private builds, I'll grant that. Private repos like the ones we're used to with Maven would help.
python -m pip install --user <package_name>
and I now have a local installation that I can use for testing.
It’s also a step not needed by most other ecosystems.
Potentially, perhaps. But it's certainly not for the cases where I use it: a pure python package, whose dependencies are already installed and are not changing (only the package itself is). Under those conditions, the command line I gave takes a couple of seconds to run.
From what I can gather, most other ecosystems don't even have the problem under discussion.
These days you need a TOML parser to read pyproject.toml; before Python 3.11 there was no TOML parser in the standard library (hence third-party packages like toml and tomli), and the `tomllib` module added in 3.11 is read-only: https://packaging.python.org/en/latest/guides/writing-pyproj...
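For reference, a minimal read of pyproject.toml using the stdlib parser on 3.11+ and falling back to the tomli backport on older interpreters:

```python
import sys

# tomllib landed in the standard library in Python 3.11; it is read-only.
if sys.version_info >= (3, 11):
    import tomllib
else:
    import tomli as tomllib  # third-party backport with the same API

with open("pyproject.toml", "rb") as f:  # the parser requires a binary file
    pyproject = tomllib.load(f)

project = pyproject.get("project", {})
print(project.get("name"), project.get("version"))
print(project.get("dependencies", []))
```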
pip's docs strongly prefer pyproject.toml: https://pip.pypa.io/en/stable/reference/build-system/pyproje...
Over setup.py's setup(setup_requires=[], install_requires=[]): https://pip.pypa.io/en/stable/reference/build-system/setup-p...
Blaze and Bazel have Skylark/Starlark to support procedural build configuration with maintainable conditionals
Bazel docs > Starlark > Differences with Python: https://bazel.build/rules/language
cibuildwheel: https://github.com/pypa/cibuildwheel ;
> Builds manylinux, musllinux, macOS 10.9+ (10.13+ for Python 3.12+), and Windows wheels for CPython and PyPy;
manylinux used to specify a minimum libc version per named build tag, like manylinux2010 or manylinux2014; pypa/manylinux: https://github.com/pypa/manylinux#manylinux
A manylinux_x_y wheel requires glibc>=x.y. A musllinux_x_y wheel requires musl libc>=x.y; per PEP 600: https://github.com/mayeut/pep600_compliance#distro-compatibi...
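If you want to see which of those platform tags your own interpreter accepts (and therefore which wheels an installer would consider compatible), the third-party `packaging` library exposes them; a small illustrative snippet:

```python
from packaging.tags import sys_tags

# sys_tags() yields compatible tags in priority order, most specific first.
for tag in sys_tags():
    # e.g. cp312-cp312-manylinux_2_17_x86_64 on a typical glibc-based Linux
    if "manylinux" in tag.platform or "musllinux" in tag.platform:
        print(f"{tag.interpreter}-{tag.abi}-{tag.platform}")
```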
> Works on GitHub Actions, Azure Pipelines, Travis CI, AppVeyor, CircleCI, GitLab CI, and Cirrus CI;
Further software supply chain security controls: SLSA.dev provenance, Sigstore, and the new PyPI attestations storage too
> Bundles shared library dependencies on Linux and macOS through `auditwheel` and `delocate`
delvewheel (Windows) is similar to auditwheel (Linux) and delocate (Mac) in that it copies DLL files into the wheel: https://github.com/adang1345/delvewheel
> Runs your library's tests against the wheel-installed version of your library
Conda (via conda-build) runs tests against the installed package;
Conda docs > Defining metadata (meta.yaml) https://docs.conda.io/projects/conda-build/en/latest/resourc... :
> If this section exists or if there is a `run_test.[py,pl,sh,bat,r]` file in the recipe, the package is installed into a test environment after the build is finished and the tests are run there.
Things that support conda meta.yaml declarative package metadata: conda and anaconda, mamba and mambaforge, picomamba and emscripten-forge, pixi / uv, repo2docker REES, and probably repo2jupyterlite (because jupyterlite's jupyterlite-xeus docs mention mamba but not yet picomamba) https://jupyterlite.readthedocs.io/en/latest/howto/configure...
The `setup.py test` command has been removed: https://github.com/pypa/setuptools/issues/1684
`pip install -e .[tests]` expects extras_require['tests'] to include the same packages as the tests_require argument to setup.py: https://github.com/pypa/setuptools/issues/267
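For concreteness, a minimal setup.py following that convention (the package names here are placeholders):

```python
from setuptools import setup, find_packages

TESTS_REQUIRE = ["pytest"]  # placeholder test dependencies

setup(
    name="example-package",          # placeholder project name
    version="0.1.0",
    packages=find_packages(),
    install_requires=["requests"],   # placeholder runtime dependency
    # tests_require fed the removed `setup.py test` command and is deprecated;
    # the "tests" extra is what `pip install -e .[tests]` actually reads,
    # so keep the two lists identical.
    tests_require=TESTS_REQUIRE,
    extras_require={"tests": TESTS_REQUIRE},
)
```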
TODO: is there a new single command to run tests, like `setup.py test` was?
`make test` works with my editor. A devcontainer.json can reference a Dockerfile that runs something like this:
python -m ensurepip && python -m pip install -U pip setuptools
But then I still want to run the software's tests with one command.

Are you telling me there's a way to do an HTTPS Content-Range request for the TOML file in a wheel to get the package dependency version constraints and/or package hashes (but not GPG pubkey fingerprints to match an .asc manifest signature) and the build & test commands, yet you still need an additional file besides the TOML-syntax pyproject.toml, like Pipfile.lock or poetry.lock, to store the hashes for each ~bdist wheel on each platform? And while there's now a -c / PIP_CONSTRAINT option to specify an additional requirements.txt, that doesn't solve for Windows- or Mac-only requirements in a declarative requirements.txt? https://pip.pypa.io/en/stable/user_guide/#constraints-files
conda supports putting `[win]` at the end of a YAML list item if it's for Windows only.
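For what it's worth, the pip-side analogue of that selector is a PEP 508 environment marker on the requirement line (e.g. `pywin32; sys_platform == "win32"` in requirements.txt). A small illustrative check of how such markers evaluate, using the third-party `packaging` library:

```python
from packaging.markers import Marker

# Environment markers let a single declarative requirement line apply
# only on some platforms, much like conda's [win] selector.
win_only = Marker('sys_platform == "win32"')
mac_only = Marker('sys_platform == "darwin" and python_version >= "3.9"')

print(win_only.evaluate())  # True only when run on Windows
print(mac_only.evaluate())  # True only on macOS with Python 3.9+
```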
Re: optimizing builds for conda-forge (and for PyPI, though PyPI doesn't build packages when there's a new PR, or sign each build for each platform) >>41306658
Maybe the solution will be for tools like uv or poetry to warn if dynamic metadata is used and strongly discourage it. Then over time the users of packages that use dynamic metadata will start to urge the package authors to stop using it.
I wouldn’t bet on this one. I know a lot of Python package maintainers who would likely rather kill their project than adapt to a standard they don’t like. For example, see flake8’s stance on even supporting pyproject.toml files, which have been the standard for years: https://github.com/PyCQA/flake8/issues/234#issuecomment-8128...

I know because I’m the one that added pyproject.toml support to mypy 3.5 years ago. Python package developers can rival Linux kernel maintainers for resistance to change.
Which is much longer than the "couple of seconds" I gave for my use case. Yes, if it takes that long, I can see how you would want some alternative.
> Also more painful to debug because the filenames in the stack trace no longer match to what you have open in your editor.
Why not? If you do a fresh install, everything should match up. It seems like this problem would be more likely with an editable install, if things aren't kept in sync properly.
Absolutely not. The file names in stack traces will be from the site-packages folder in the venv instead of the local checkout.