Every Python developer is challenged by the size and velocity of the Python ecosystem.
This post provides clarity with a Hypermodern Python Toolbox - tools that are setting the standard for Python in 2025.
Python 3.11 and 3.12 both brought performance improvements to Python.
We choose 3.11 as the version to use in the Hypermodern Toolbox, as 3.12 is still a bit unstable with some popular data science libraries.
Python 3.11 added better tracebacks - the exact location of the error is pointed out in the traceback. This improves the information available to you during development and debugging.
The code below has a mistake. We want to assign a value to the first element of `data`, but the code refers to a non-existent variable `datas`:
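A minimal sketch of the example (variable names follow the text; the try/except is only here so the snippet runs end-to-end):

```python
data = [1, 2, 3]

try:
    # mistake: `datas` was never defined - we meant `data`
    datas[0] = 100
except NameError as err:
    message = str(err)
    print(message)
```

Python 3.11's full traceback additionally underlines the failing expression and suggests `data`.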
With pre-3.10 versions of Python, this results in an error traceback that points out that the variable `datas` doesn't exist:
Python 3.11 takes its diagnosis two steps further - it suggests that the variable should be called `data` instead, and points out where on the line the error occurred:
So much of programming is reading and responding to error messages - these improved tracebacks are a great quality-of-life upgrade for Python developers in 2025.
The hardest thing about learning Python is learning to install & manage Python. Even senior developers can struggle with the complexity of managing Python, especially if it is not their main language.
uv is a tool for managing different versions of Python. It’s an alternative to using pyenv, miniconda or installing Python from a downloaded installer.
uv can be used to run Python commands and scripts with the Python version specified - uv will download the Python version if it needs to. This massively simplifies managing different versions of Python locally.
The command below runs a `hello world` program with Python 3.13:
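Assuming uv is installed, a one-liner like this would do it (uv downloads the 3.13 interpreter on first use):

```shell
uv run --python 3.13 python -c "print('hello world')"
```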
uv is also a tool for managing virtual environments in Python. It’s an alternative to venv or miniconda. Virtual environments allow separate installations of Python to live side-by-side, which makes working on different projects possible locally.
The command below creates a virtual environment with Python 3.11:
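A sketch of the command, assuming uv is installed (`.venv` is uv's default location):

```shell
uv venv --python 3.11
```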
You will need to activate the virtual environment to use it with `$ source .venv/bin/activate`.
uv is also a tool for managing Python dependencies and packages. It’s an alternative to pip. Pip, Poetry and uv can all be used to install and upgrade Python packages.
Below is an example `pyproject.toml` for a uv managed project:
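A minimal sketch (the project name and dependencies are assumptions):

```toml
[project]
name = "demo"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "pandas>=2.0",
    "rich",
]

[dependency-groups]
dev = [
    "pytest",
]
```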
Installing a project can be done by pointing `uv pip install` at our `pyproject.toml`:
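Something like:

```shell
uv pip install -r pyproject.toml
```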
Like Poetry, uv can lock the dependencies into `uv.lock`:
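Generating or updating the lockfile is a single command:

```shell
uv lock
```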
uv can also be used to add tools, which are globally available Python programs. The command below installs `pytest` as a tool we can use anywhere:
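The tool subcommand looks like this:

```shell
uv tool install pytest
```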
This will add programs that are available outside of a virtual environment:
Tip - add the direnv tool with a `.envrc` to automatically switch to the correct Python version when you enter a directory.
Ruff is a tool to lint and format Python code - it is an alternative to tools like Black or autopep8.
Ruff’s big thing is being written in Rust - this makes it fast. When used with Black to ensure consistent code style, Ruff covers much of the Flake8 rule set, along with other rules such as isort.
A great way to use Ruff is to run it with the defaults and check everything.
The code below has three problems - it uses an undefined variable `datas`, it has imports in the wrong place, and it imports something we don't use:
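A sketch of such a file (names assumed; the comments show the Ruff rule each line trips):

```python
# demo.py
data = datas[0]  # F821 - undefined name `datas`

import os  # E402 - module level import not at top of file

import collections  # F401 - `collections` imported but unused

print(os.getcwd())
```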
Running Ruff in the same directory points out the issues:
Tip - Ruff is quick enough to run on file save during development - most text editors can be configured to do this.
mypy is a tool for enforcing type safety in Python - it's an alternative to leaving type hints as unexecuted documentation.
Recently Python has undergone a transition similar to JavaScript's shift to TypeScript, with static typing improving in the standard library and in third-party tooling. Statically typed Python is the standard for many teams developing Python in 2025.
`mypy_error.py` has a problem - we attempt to divide a string by `10`:
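A sketch of what `mypy_error.py` could look like (structure assumed from the text; running it would raise a `TypeError`):

```python
# mypy_error.py
def process(user):
    # `user["id"]` is a string - dividing it by 10 is a bug
    print(user["name"], user["id"] / 10)

process({"name": "alpha", "id": "100"})
```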
We can catch this error by running mypy - catching the error without actually executing the Python code:
These first errors are because our code has no typing - let's add two type annotations:

- `user: dict[str, str]` - `user` is a dictionary with strings as keys and values,
- `-> None` - the `process` function returns `None`.

Running mypy on `mypy_intermediate.py`, mypy points out the error in our code:
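A sketch of `mypy_intermediate.py` with those two annotations added:

```python
# mypy_intermediate.py
def process(user: dict[str, str]) -> None:
    # mypy now knows `user["id"]` is a str and flags the division
    print(user["name"], user["id"] / 10)

process({"name": "alpha", "id": "100"})
```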
This is a test we can run without writing any specific test logic - very cool!
Static type checking will catch some bugs that many unit test suites won’t. Static typing will check more paths than a single unit test often does - catching edge cases that would otherwise only occur in production.
Tip - use `reveal_type(variable)` in your code when debugging type issues. mypy will show you what type it thinks a variable has.
pydantic is a tool for organizing and validating data in Python - it’s an alternative to using dictionaries or dataclasses.
pydantic is part of Python’s typing revolution - pydantic’s ability to create custom types makes writing typed Python a joy.
pydantic uses Python type hints to define data types. Imagine we want a user with a `name` and `id`:
We could model this with pydantic - introducing a class that inherits from `pydantic.BaseModel`:
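A minimal sketch (the field types and example values are assumptions):

```python
import pydantic

class User(pydantic.BaseModel):
    name: str
    id: str

users = [
    User(name="alpha", id="1"),
    User(name="omega", id="2"),
]
print(users)
```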
A strength of pydantic is validation - we can introduce some validation of our user ids - below checking that the `id` is a valid GUID, otherwise setting it to `None`:
Running the code above, our pydantic model has rejected one of our ids - our `omega` has had its original ID of `invalid` rejected and ends up with an `id=None`:
These pydantic types can become the primitive data structures in your Python programs (instead of dictionaries) - making it easier for other developers to understand what is going on.
Tip - you can generate Typescript types from pydantic models - making it possible to share the same data structures with your Typescript frontend and Python backend.
Typer is a tool for building command line interfaces (CLIs) using type hints in Python - it’s an alternative to sys.argv or argparse.
We can build a Python CLI with uv and Typer by first creating a Python package with uv, adding `typer` as a dependency.
Here we use `$ uv init` to create a new project from scratch:
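With uv installed, something like the following (`--package` gives a `src/` layout with a `[project.scripts]` entry; the project name `demo` is an assumption):

```shell
uv init --package demo
cd demo
```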
We then modify the Python file `src/demo/__init__.py` to include a simple CLI:
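A sketch of what `src/demo/__init__.py` could contain (the greeting and the `--name` option are assumptions):

```python
import typer

app = typer.Typer()

@app.command()
def main(name: str = "world") -> None:
    """Greet someone by name."""
    print(f"hello {name}")
```

This assumes a `[project.scripts]` entry such as `demo = "demo:app"` in `pyproject.toml`.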
Because we have included a `[project.scripts]` entry in our `pyproject.toml`, we can run this CLI with `uv run`:
Typer gives us a `--help` flag for free:
Tip - you can create nested CLI groups using commands and command groups.
Rich is a tool for printing pretty text to a terminal - it’s an alternative to the monotone terminal output of most Python programs.
One of Rich’s most useful features is pretty printing of color and emojis:
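A small sketch (the message content is an assumption):

```python
import rich

# Rich markup tags add color; emoji short codes render as emojis
message = "[bold green]success[/bold green] :thumbs_up: build [blue]complete[/blue]"
rich.print(message)
```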
If you are happy with Rich you can simplify your code by replacing the built-in print with the Rich print:
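One way to do this (a sketch) is to shadow the builtin at import time:

```python
from rich import print  # shadows the builtin print

print("[bold]hello[/bold] :smiley:")
```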
Tip - Rich offers much more than color and emojis - including displaying tabular data and better tracebacks of Python errors.
Polars is a tool for tabular data manipulation in Python - it’s an alternative to Pandas.
Polars can offer exceptional performance through query optimization. It can also work with larger-than-memory datasets, and it has a syntax that many prefer to Pandas.
In eager-execution frameworks like Pandas, each data transformation is run without knowledge of what came before and after. By allowing data to be evaluated lazily, Polars can optimize across a series of data transformations.
The example below demonstrates query optimization. Let’s start with a dataset of three columns:
We can then chain operations together and run them in a single optimized query. Below we chain together column creation and aggregation into one query:
Tip - you can use pl.DataFrame.to_pandas()
to convert a Polars DataFrame to a Pandas DataFrame. This can be useful to slowly refactor a Pandas based pipeline into a Polars based pipeline.
Pandera is a tool for data quality of DataFrames - it's an alternative to Great Expectations or assert statements.
Pandera allows you to define schemas for your data, which can then be used to validate, clean, and transform your data. By defining schemas upfront, Pandera can catch data issues before they propagate through your analysis pipeline.
Let’s create a schema for some sales data. We define column names, types, and data quality checks like if null values are acceptable, numeric upper and lower bound constraints and check accepted values for categorical data:
We can now validate data using this schema:
When we have bad data, Pandera will raise an exception:
Tip - Check decorators can enable custom validation logic beyond simple range and type checks. Custom checks can validate complex business rules or statistical properties of your data.
DuckDB is a database for analytical SQL queries - it's an alternative to SQLite, Polars and Pandas.
Like SQLite, DuckDB is a single-file database. While SQLite is optimized for transactional workloads, DuckDB is specifically designed for analytical queries on structured data.
Let’s create some sample data using both CSV and Parquet formats:
Below we run a SQL query across both formats:
DuckDB shines when working with larger than memory datasets. It can efficiently query Parquet files directly without loading them into memory first.
Tip - Use DuckDB’s EXPLAIN command to understand query execution plans and optimize your queries.
Loguru is a logger - it's an alternative to the standard library's logging module and structlog. Loguru builds on top of the standard library `logging` module. It's not a complete rethinking, but a few tweaks here and there to make logging from Python programs less painful.
A central Loguru idea is that there is only one `logger`. This is a great example of the unintuitive value of constraints - having fewer `logger` objects is actually better than being able to create many. Because Loguru builds on top of the Python `logging` module, it's easy to swap in a Loguru logger for a Python standard library `logging.Logger`.
Let’s see how Loguru simplifies logging. First, a basic example:
Logging with Loguru is as simple as `from loguru import logger`:
Configuring the logger is all done through a single `logger.add` call.
We can configure how we log to `stdout`:
The code below configures logging to a file:
Tip - Loguru supports structured logging of records to JSON via the `logger.add("log.txt", serialize=True)` argument.
Marimo is a Python notebook editor and format - it’s an alternative to Jupyter Lab and the JSON Jupyter notebook file format.
Marimo offers multiple improvements over older ways of writing Python notebooks. One is reactivity - cells can be re-executed when their inputs change. This can be a double-edged sword for some notebooks, where changing a cell can cause side effects like querying APIs or databases.
Marimo notebooks are stored as pure Python files, which means that Git diffs are meaningful and the notebook can be executed as a script.
Below is an example of the Marimo notebook format:
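A sketch of the on-disk format (simplified - real files generated by `marimo edit` also record the marimo version):

```python
import marimo

app = marimo.App()

@app.cell
def _():
    x = 1
    return (x,)

@app.cell
def _(x):
    y = x + 1
    print(y)
    return (y,)

if __name__ == "__main__":
    app.run()
```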
Tip - Marimo integrates with GitHub Copilot and Ruff code formatting.
The 2025 Hypermodern Python Toolbox is:

- Python 3.11 for better tracebacks,
- uv for managing Python versions, virtual environments and dependencies,
- Ruff for linting and formatting,
- mypy for static type checking,
- pydantic for organizing and validating data,
- Typer for building CLIs,
- Rich for pretty terminal output,
- Polars for tabular data manipulation,
- Pandera for DataFrame data quality,
- DuckDB for analytical SQL queries,
- Loguru for logging,
- Marimo for notebooks.