Python's dynamic typing is one of its core strengths. Its low friction allows for rapid development, which makes it a popular choice for new developers. However, as projects grow and evolve, the lack of type annotations makes code difficult to understand and maintain. This can lead to unexpected bugs that are hard to track down.
That Python is a dynamically typed language doesn't mean that Python has no types; it means that types are not known until the code runs. In recent years, however, static type checking has become increasingly popular. Type annotations provide a Matrix-like view into the code: once you learn to read type signatures, you'll feel like you have superpowers!
This blog post will explore type annotations in Python, a feature that can improve code readability and catch potential bugs early. We'll show how to use static type checkers and annotate your code with type hints, covering basic type annotations, type narrowing, structural sub-typing, and callables. In Part 2, we will cover more advanced topics such as generics, variadic generics, ParamSpec, and overloads.
Table of contents
- What are Type Annotations?
- Why do we need Type Annotations?
- Static Type Checkers
- Installing and Setup
- Basic Type Annotations
- Type Inference
- Type Narrowing
- Static Duck Typing and Protocols
- Functions and Callables
- References
What are Type Annotations?
Type annotations, also known as type hints, are a way to specify the type of a variable, function, or argument in Python.
For example, in the following function, we specify that the function `add` takes two arguments `x` and `y` of type `float` as input and returns a value of type `float` as output.

```python
def add(x: float, y: float) -> float:
    return x + y
```
Why do we need type annotations?
Python type annotations offer significant advantages in modern software development:
Early Bug Detection. Type annotations help catch potential type-related issues early in the development process. By identifying errors in the IDE or CI pipeline, they prevent bugs from reaching production, saving time and reducing risks. This is especially crucial for industrial-scale applications where downtime is expensive.
Safer refactoring. Immediate feedback on type mismatches makes code changes smoother and more reliable. Developers can confidently modify code with real-time type checking support.
Reduce the need for unit testing. Type annotations minimize the need for extensive type-checking tests, allowing developers to focus on writing tests that validate core business logic rather than basic type compatibility.
Better code clarity. Unlike traditional docstrings, type annotations are compiler-checked and always in sync with the code. They provide clear, immediate insights into function inputs and outputs, making code more readable and self-documenting.
Example: Type safety in coordinate handling
Let's see how type annotations can prevent common errors using a simple geographic coordinate formatting function.
1. Starting Point: Untyped Code
Consider the following function that formats latitude and longitude:
```python
def get_location_untyped(lat, lon):  # Error: Type of parameter "lat" is unknown
    return f"Latitude: {lat:.6f}, Longitude: {lon:.6f}"
```
This code can lead to multiple issues:
- We can call it with incorrect types (e.g., `str` instead of `float`):

```python
get_location_untyped("59.32", "18.06")
```

- We can accidentally swap latitude and longitude values:

```python
get_location_untyped(59.32, 18.06)
get_location_untyped(18.06, 59.32)
```

- Neither issue will be caught until runtime.
2. Adding Basic Type Annotations
We can add basic type annotations to the function signature to specify the expected input types and return type.
```python
def get_location_typed(lat: float, lon: float) -> str:
    return f"Latitude: {lat:.6f}, Longitude: {lon:.6f}"
```
The IDE and type checkers can now detect type-related errors (e.g., passing `str` instead of `float`) at development time, before the code runs:

```python
get_location_typed("59.32", "18.06")  # Error: "Literal['18.06']" is not assignable to "float"
```
However, this still doesn't prevent accidentally swapping parameters:
```python
get_location_typed(59.32, 18.06)
get_location_typed(18.06, 59.32)
```
3. Domain-Driven Type Safety
One feature of domain-driven design is to create distinct types for values that are semantically different. For example, instead of using a generic `float` type for both `lat` and `lon`, you would create specific types like `Lat` and `Lon` to represent these values. This makes the code more expressive and reduces the possibility of errors.

A lesser-known feature in Python is `NewType` from the `typing` module. `NewType` is a great option for creating distinct types for values that are semantically different but share the same underlying type. While subclassing is an alternative, `NewType` offers a more lightweight approach to type safety with zero runtime overhead.
```python
from typing import NewType  # noqa: E402

Lat = NewType("Lat", float)
Lon = NewType("Lon", float)

def get_location(lat: Lat, lon: Lon) -> str:
    return f"Latitude: {lat:.6f}, Longitude: {lon:.6f}"
```
Swapping the parameters will now be flagged as a type error by the static type checker.
```python
get_location(Lat(59.32), Lon(18.06))
get_location(Lon(18.06), Lat(59.32))  # Error: "Lon" is not assignable to "Lat", "Lat" is not assignable to "Lon"
```
Static Type Checking
Static type checking occurs without running the program. This is a standard feature in languages like Java and C#, where it is integrated into the compilation phase, and is required for identifying type mismatches and potential errors before an executable can be produced.
In Python, static type checking is optional. You can add type hints to your code, and use a static type checker to catch type errors before running the code. You can think of it as debugging your code up-front.
Static Type Checkers
There are several static type checkers available for Python:
- Pyright by Microsoft is a full-featured, standards-based static type checker for Python.
- Mypy by Dropbox et al. is an optional static type checker for Python that aims to combine the benefits of dynamic (or "duck") typing and static typing.
- Pyre by Facebook is a performant type checker for Python compliant with PEP 484. Pyre can analyze codebases with millions of lines of code incrementally.
- Pytype by Google checks and infers types for your Python code without requiring type annotations.
In this blog post, we will use Pyright, a fast type checker for Python that is written in TypeScript and integrated with Visual Studio Code through the Pylance extension.
Installing and Setup
- Enable Pylance in Visual Studio Code. If you are using the Python extension for Visual Studio Code, it will automatically install Pylance.
- Configure the type checking level in your `pyproject.toml` file:

```toml
[tool.pyright]
typeCheckingMode = "strict"  # Options: "off", "basic", "standard", "strict"
```
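Pyright can also be run from the command line, which is handy in CI pipelines. A minimal sketch, assuming you install the `pyright` wrapper package from PyPI (the file and directory names below are placeholders):

```shell
# Install the Pyright CLI (a wrapper distributed on PyPI)
pip install pyright

# Type-check a single file or a whole project directory
pyright app.py
pyright src/
```

Pyright exits with a non-zero status when it finds type errors, so the command can gate a CI job directly.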
Basic Type Annotations
For basic type annotations, we will cover annotating primitive types, container types and simple functions.
Primitive types
These types represent single values rather than collections. Below are examples of the most common primitive type annotations:
```python
from typing import Literal

speakers: int = 2
talk: str = "Python type annotations"
active: bool = True
pi: float = 3.14159
number: int | None = 42

status: Literal["active", "inactive"] = "active"
# ... which is the same as
status: Literal["active"] | Literal["inactive"] = "active"
```
Container types
A container is a type that can hold multiple values, such as `list`, `tuple`, `set`, and `dict`. The inner type of the container must also be annotated. For example, a list of integers can be annotated as `list[int]`.
Below we can see that attempting to add incompatible values to our containers results in errors:
```python
xs: list[int] = [1, 2, 3]
xs.append("2")  # Error: "Literal['2']" is not assignable to "int"

ts: tuple[int, str] = (1, "test")
us: set[int | str] = {1, 2, 3, "test"}

vs: dict[str, int] = {"one": 1, "two": 2}
vs["one"] = 2
vs["three"] = "three"  # Error: "Literal['three']" is not assignable to "int"
```
Annotating Functions
Function annotations in Python allow us to specify both parameter types and return types. The syntax uses a colon `:` for parameter annotations and an arrow `->` for the return type annotation.
Here's an example with a `divide` function that accepts two `float` parameters and returns a union type of `float` or `None`; a union with `None` is also called an optional type:
```python
def divide(a: float, b: float) -> float | None:
    if b == 0:
        return None
    return a / b
```
When working with functions that return union types, the type checker will prevent us from directly using the result, since it might be `None`. For example, the following code will flag an error:
```python
from typing import reveal_type  # noqa: E402

result = divide(10, 20)
reveal_type(result)  # Type of "result" is "float | None"

d = result + 1  # Error: Operator "+" not supported for "None"
```
To safely use the result, we need to perform type narrowing through conditional checks. A later section covers type narrowing in more detail, showing various ways to safely work with union types. For now, we ensure that the result is a `float` before using it in an arithmetic operation:
```python
if result is not None:
    d = result + 1  # OK
```
Note: In this post we use the function `reveal_type` to ask the static type checker to reveal the inferred type of its argument. Most type checkers support `reveal_type()` even if the name is not imported from `typing`. However, to avoid runtime errors it should be imported from the `typing` module. At runtime, this function prints the runtime type of its argument and returns the argument unchanged.
Type Inference
Type inference is a powerful feature that allows programming languages to automatically determine variable types without explicit type annotations.
Mypy and Pyright will try to infer the type of unannotated variables and parameters based on context. Pyright will also try to infer unannotated return types, while Mypy interprets unannotated return types as `Any`.
For example, when a variable is assigned a specific value, type checkers can infer a precise type:
```python
from typing import reveal_type  # noqa: E402

a = 10
reveal_type(a)  # Type of "a" is "Literal[10]"
```
Complementing Type Inference
Despite its convenience, type inference has limitations. In some cases, the inferred type is not the one we intended, and developers need to provide additional type information. Common scenarios include:
- Giving a broader type to an object

The type checker assumes that if we initialize a list with a homogeneous set of objects, that's probably what we intended.

```python
xs = [1, 2, 3]
reveal_type(xs)  # Type of "xs" is "list[int]"
xs.append("2")  # Error: "Literal['2']" is not assignable to "int"

# Allow a broader type than the one inferred
xz: list[int | str] = [1, 2, 3]
xz.append("2")
```
- Restrict new types assigned to a variable

To enforce type consistency throughout an application, type annotations can ensure that subsequent value assignments are compatible with the initially declared type.

```python
# Without an annotation, the variable's type changes dynamically
x = 10
reveal_type(x)  # Type of "x" is "Literal[10]"

# Redeclare the variable with a different type
x = "10"
reveal_type(x)  # Type of "x" is "Literal['10']"

# Annotate a variable to restrict the type
xx: int = 10
reveal_type(xx)  # Type of "xx" is "Literal[10]"
xx = "10"  # Error: Type "Literal['10']" is not assignable to declared type "int"
```
- Initialize empty containers

Empty containers are initially typed as `Any`, requiring explicit type specification:

```python
lst_ = []
reveal_type(lst_)  # Type of "lst_" is "Any"
lst_.append(1)  # Error: Type of "append" is partially unknown

# Specify the expected type
lst: list[int] = []
lst.append(1)
```
- Validating function output types

Type annotations serve as a powerful mechanism for ensuring type correctness, particularly when dealing with external or untyped code. By explicitly defining expected return types, developers can intercept type mismatches during refactoring, prevent silent type-related errors from spreading through the application, and create a robust type verification layer for third-party or legacy functions.

```python
# Third-party function without type annotations
def get_status():
    return "active"

# Enforcing type constraints within the application
type Status = Literal["active", "inactive"]

status: Status = get_status()
```
Type Narrowing
Type narrowing is the act of making the type of a variable more narrow, or specific: e.g. that an `Animal` is actually a `Cat`, or that an `Optional[int]` (i.e. `int | None`) must in some cases be `int` (or just `None`). This can be done by using special type narrowing expressions, statements, and type guards.
While these checks are performed at runtime, static type checkers also use these constructs for type narrowing during static analysis. By narrowing the type of a variable, we can write more type-safe code and catch errors at compile time rather than at runtime.
In this section, we will also discuss type casting and why it should be avoided.
The following examples will use the `divide` function from the previous section:

```python
def divide(a: float, b: float) -> float | None:
    if b == 0:
        return None
    return a / b

result = divide(10, 20)
```
Type Narrowing Expressions
There are several built-in expressions in Python that can be used to narrow the type of a variable. These include:
`isinstance` - Check if an object is an instance of a class.

```python
from typing import reveal_type  # noqa: E402

if isinstance(result, float):
    reveal_type(result)  # Type of "result" is "float"
else:
    reveal_type(result)  # Type of "result" is "None"
```
`issubclass` - Check if a class is a subclass of another class.

```python
# Pyright does not support giving expressions to isinstance
# (https://github.com/microsoft/pyright/issues/3565), so
# issubclass(type(result), str) will not narrow the type of result
result_type = type(result)
reveal_type(result_type)  # Type of "result_type" is "type[float] | type[None]"

if issubclass(result_type, float):
    reveal_type(result_type)  # Type of "result_type" is "type[float]"
else:
    reveal_type(result_type)  # Type of "result_type" is "type[None]"
```
`callable` - Check if an object is callable.

```python
from collections.abc import Callable  # noqa: E402

def factory() -> Callable[[float, float], float | None] | None:
    return divide

divide_ = factory()
reveal_type(divide_)  # Type of "divide_" is "((float, float) -> (float | None)) | None"

if callable(divide_):
    reveal_type(divide_)  # Type of "divide_" is "(float, float) -> (float | None)"
else:
    reveal_type(divide_)  # Type of "divide_" is "None"
```
`is` - Check if two objects are the same object.

```python
if result is not None:
    reveal_type(result)  # Type of "result" is "float"
else:
    reveal_type(result)  # Type of "result" is "None"
```
Type Narrowing Statements
There are several built-in statements in Python that can be used to narrow the type of a variable. These include:
`if` - Check if a value is truthy.

In the previous example, we used an `if` statement to check that `result` is not `None`. This narrows the type of `result` within the `if` block to `float`, allowing us to safely perform operations that require a `float`. If `result` is `None`, the `else` block handles that case separately.

`assert` - Assert that a value is truthy.

An `assert` statement is executed at runtime, and if the condition fails, an `AssertionError` is raised.

```python
assert result
reveal_type(result)  # Type of "result" is "float"
```

`match` - Check a value against a series of patterns.

The `match` statement, introduced in Python 3.10, allows us to match a value against a series of patterns and execute the corresponding block of code. It can be used to narrow the type of a variable based on the pattern that matches the value. Using `match` is essentially a more concise way of writing `isinstance` checks, but it can be more readable and ensures that the check is exhaustive so that no case is left unchecked.

```python
result = divide(10, 20)

match result:
    case float(f) | int(f):
        reveal_type(f)  # Type of "f" is "float | int"
    case None:
        reveal_type(result)  # Type of "result" is "None"
```
User Defined Type Guards
We can add our own type guards for custom types as well. Type guards are special
functions that help the type checker narrow down the type of a variable based on runtime
checks. They return a boolean value and have a special return type TypeGuard
, which
indicates to the type checker that the function is a type guard and can be used to
narrow the type of a variable to the specified type.
Type guards are functional at runtime, affecting the program's flow based on their checks, but they are also used by static type checkers to narrow the types during static analysis.
In the following example, we define a type guard `all_int` that checks if all elements in a list are of type `int`. This allows the type checker to narrow the type of the list based on the result of the check.
```python
from typing import Any, TypeGuard  # noqa: E402

def all_int(xs: list[Any]) -> TypeGuard[list[int]]:
    return all(isinstance(x, int) for x in xs)

xs = [1, 2, 3, "test"]  # list[int | str]

if all_int(xs):
    reveal_type(xs)  # Type of "xs" is "list[int]"
else:
    reveal_type(xs)  # Type of "xs" is "list[int | str]"
```
Type Casting
Type casting is used to explicitly specify a different type for a variable. Using `cast` doesn't actually change the type of a variable at runtime; it only informs the type checker to treat the variable as a different type, without performing any runtime type conversion.
```python
from typing import cast  # noqa: E402

def process_data(data: object) -> str:
    # We might know this is actually a string, but the type checker doesn't
    return cast(str, data).upper()

# Usage
result = process_data("hello")
reveal_type(result)  # Type of "result" is "str"
```
It should be avoided if possible, since it can hide mistakes and lead to runtime errors.
```python
a = 1
b = cast(str, a)
reveal_type(b)  # Type of "b" is "str"

# cast is a no-op at runtime, so this will raise a TypeError
print(b + "c")  # TypeError: unsupported operand type(s) for +: 'int' and 'str'
```
Static Duck Typing and Protocols
Python is well-known for its duck typing, a programming approach that focuses on what an object can do rather than what it is. If an object implements the methods and attributes we need, we can work with it regardless of its class hierarchy. This flexibility is great, but it raises an interesting challenge:
How do we combine duck typing's dynamic nature with static type annotations?
Let's explore this concept with an example:
We will define two classes, `Rabbit` and `Fox`, which share a common method called `feed`:
```python
class Rabbit:
    def run(self) -> str:
        return "Rabbit is running"

    def feed(self) -> str:
        return "Rabbit is eating"

class Fox:
    def say(self) -> str:
        return "Ring-ding-ding-ding!"

    def feed(self) -> str:
        return "Fox is eating"
```
And a function that takes an animal parameter and calls its `feed` method:

```python
def care_for_animal_untyped(animal):  # Error: Type of parameter "animal" is unknown
    animal.feed()  # Error: Type of "feed" is unknown
    ...

care_for_animal_untyped(Rabbit())
care_for_animal_untyped(Fox())
```
How can we type the `animal` parameter?

We have several options, but each has drawbacks:

- Creating an `Animal` base class would limit the flexibility of duck typing.
- Using `Union[Rabbit, Fox]` would restrict us to specific types.
- Using `Any` would disable type checking entirely, allowing objects without a `feed` method.
A better approach is to use Protocols, which enable static duck typing (also known as structural subtyping). Protocols let us define a set of methods that a class must implement without requiring inheritance from a base class.
To create a protocol, we use the `Protocol` base class from the `typing` module. In the following example, we define a `CareFor` protocol that requires the implementation of a `feed` method:
```python
from typing import Protocol, runtime_checkable  # noqa: E402

@runtime_checkable
class CareFor(Protocol):
    def feed(self) -> str: ...

def care_for_animal(animal: CareFor):
    animal.feed()
    ...

care_for_animal(Rabbit())
care_for_animal(Fox())
```
The type checker ensures that the `animal` parameter of the `care_for_animal` function accepts any class that has a `feed` method. Trying to use a class without the required methods will raise a type error:
```python
# This would fail type checking
class Dog:
    """A class that doesn't implement the CareFor protocol."""

    def bark(self) -> str:
        return "Woof!"

care_for_animal(Dog())  # Error: Dog doesn't implement CareFor protocol
```
Note: The `@runtime_checkable` decorator is required to enable runtime type checking using `isinstance`. Without it, the `CareFor` protocol would not be recognized as a valid type at runtime:

```python
assert isinstance(Rabbit(), CareFor)
```
You can use Protocol types just like any other type annotation:
```python
animal: CareFor = Rabbit()
animals: list[CareFor] = [Rabbit(), Fox()]
```
Protocols are meant to define a set of methods and attributes that a class must implement, but they are not meant to be instantiated themselves:
```python
animal = CareFor()  # Error: Cannot instantiate protocol class "CareFor"
```
Built-in Protocols
The `typing` module includes several protocol classes that represent common Python interfaces. Let's take a look at a few of them:
`Iterable[T]`: implements the `__iter__` method.

In the example below, the class `BookCollection` contains an `__iter__` method, which means it adheres to the iterable protocol and can be used wherever `Iterable[T]` is expected, such as in a `for` loop. `Iterable[T]` is a generic type; generics are covered in more detail in Part 2 of this series.
```python
class BookCollection:
    def __init__(self):
        self.books: list[tuple[str, str]] = []

    def add_book(self, title: str, author: str):
        self.books.append((title, author))

    def __iter__(self):
        return iter(self.books)

library = BookCollection()
library.add_book("1984", "George Orwell")

for title, author in library:
    print(f"{title} by {author}")
```
`Container[T]`: implements the `__contains__` method.

In the example below, the class `TagCollection` contains a `__contains__` method, which means it adheres to the container protocol and can be used wherever `Container[T]` is expected, such as with the `in` operator. `Container[T]` is a generic type; generics are covered in more detail in Part 2 of this series.
```python
class TagCollection:
    def __init__(self):
        self.tags: set[str] = set()

    def add_tag(self, tag: str):
        self.tags.add(tag.lower())

    def __contains__(self, item: str):
        return item.lower() in self.tags

tags = TagCollection()
tags.add_tag("Python")
tags.add_tag("Programming")

print("python" in tags)  # True
print("Java" in tags)  # False
```
`SupportsFloat`: implements the `__float__` method.

In the example below, the class `Temperature` contains a `__float__` method, which means it adheres to the `SupportsFloat` protocol and can be used wherever `SupportsFloat` is expected, such as in the `float()` function.
```python
from typing import SupportsFloat  # noqa: E402

class Temperature:
    def __init__(self, celsius: float):
        self._celsius = celsius

    def __float__(self) -> float:
        return self._celsius

def celsius_to_kelvin(celsius: SupportsFloat) -> float:
    return float(celsius) + 273.15

temp = Temperature(25.0)
kelvin = celsius_to_kelvin(temp)
```
Protocol Inheritance
Protocols can be extended like regular classes, but with an important distinction:

- Just inheriting from an existing `Protocol` creates a regular class that implements the `Protocol`.
- To create a new `Protocol`, you must explicitly include `Protocol` in the inheritance list.
```python
class AdvancedCare(CareFor, Protocol):
    """Protocol for animals requiring advanced care."""

    def groom(self) -> str: ...
    def exercise(self) -> str: ...
```
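As a sketch of how the extended protocol is satisfied, here is a hypothetical `Horse` class (our own example, with the protocols repeated so the snippet is self-contained). It never inherits from either protocol, yet satisfies `AdvancedCare` purely structurally:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class CareFor(Protocol):
    def feed(self) -> str: ...

@runtime_checkable
class AdvancedCare(CareFor, Protocol):
    """Protocol for animals requiring advanced care."""

    def groom(self) -> str: ...
    def exercise(self) -> str: ...

class Horse:
    # No inheritance needed: implementing all three methods is enough
    def feed(self) -> str:
        return "Horse is eating"

    def groom(self) -> str:
        return "Horse is being groomed"

    def exercise(self) -> str:
        return "Horse is trotting"

def advanced_care(animal: AdvancedCare) -> None:
    animal.feed()
    animal.groom()
    animal.exercise()

advanced_care(Horse())  # OK: Horse structurally satisfies AdvancedCare
assert isinstance(Horse(), AdvancedCare)
```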
Functions and Callables
We have already seen that we can annotate functions with type hints. These annotations ensure both proper function usage and correct handling of the return value.
Python functions can take many forms and perform a variety of tasks. You can create functions that accept other functions as inputs, functions that produce new functions as outputs, and functions that handle a flexible number of arguments, including default parameters, variadic arguments (`*args`), and keyword arguments (`**kwargs`). Functions can also act as generators using `yield` or run asynchronously using `async` and `await`.
In addition, Python also includes a broader concept called "callables": essentially anything you can execute using parentheses. This category encompasses regular functions, class methods, and classes that implement the `__call__` method. For more complex scenarios, you can define callable protocols that specify precise signatures for these callable objects.
Basic Functions
Let's review how to annotate basic functions.
Below is a simple function that adds two integers. The type hints `a: int` and `b: int` specify that both parameters must be integers, while `-> int` indicates that the function returns an integer.
```python
from typing import reveal_type

def sum_two(a: int, b: int) -> int:
    return a + b

result = sum_two(10, 20)
reveal_type(result)  # Type of "result" is "int"
```
Variadic Arguments
Variadic arguments are variable-length arguments, often referred to as `*args` in Python, which allow us to pass a variable number of positional arguments to a function.
Annotate *args with the same type
Let's take a look at how we can annotate variadic arguments of the same type. In the example below, we have a function `sum_all` that takes a variable number of integers and returns their sum:
```python
def sum_all(*args: int) -> int:
    reveal_type(args)  # Type of "args" is "tuple[int, ...]"
    return sum(args)

sum_all(1, 2, 3)
sum_all(1, 2, 3, 4, 5)
```
The type of the `args` variable is a tuple with an ellipsis `...` at the end. A tuple annotated as `tuple[T, ...]` means that all the elements are of the same type; in this case the `args` variable is a tuple of `int`s.
Annotate *args with different types
To annotate variadic arguments of different types, we can define a tuple type that contains the different types and unpack it with the star `*` operator. In this example, calling `foo` with only one argument will raise an error:
```python
type Args = tuple[int, str, float]

def foo(*args: *Args) -> None:
    reveal_type(args)  # Type of "args" is "tuple[int, str, float]"
    ...

foo(42, "test", 10.3)
foo(42)  # Error: Arguments missing for parameters "args[1]", "args[2]"
```
Note: Defining a predefined tuple will restrict the number of arguments that can be passed to the function. To allow for a variable number of arguments of different types, we can use Variadic Generics. We will cover this in more detail in Part 2.
Keyword Arguments
Keyword arguments are also variable-length arguments, often referred to as `**kwargs` in Python, which allow us to pass a variable number of arguments by name, in any order, and to specify default values for the arguments.
Annotate **kwargs with the same type
When annotating `kwargs` with a single type, for example `int`, the real type of the `kwargs` parameter is `dict[str, int]`, where the keys are always `str` values representing the parameter names:
```python
def sum_integer_kwargs(**kwargs: int) -> int:
    reveal_type(kwargs)  # Type of "kwargs" is "dict[str, int]"
    return sum(kwargs.values())

sum_integer_kwargs(a=1, b=2, c=3)
```
Annotate **kwargs with different types
To annotate `kwargs` with different types, we can use `TypedDict` from the `typing` module to specify the name and type of each keyword argument:
```python
from typing import NotRequired, TypedDict, Unpack  # noqa: E402

class Person(TypedDict):
    name: str
    age: int
    is_student: bool
    address: NotRequired[str]  # optional parameter

def greet(**kwargs: Unpack[Person]) -> None:
    reveal_type(kwargs)  # Type of "kwargs" is "Person"
    ...

greet(name="John", age=20, is_student=True)
greet(name="John", age=20)  # Error: Argument missing for parameter "is_student"
```
Note:

- We cannot use the `**` operator to unpack a `TypedDict` in an annotation; instead we need to use `Unpack` from the `typing` module.
- `TypedDict` does not support assigning default values. To annotate `kwargs` with default values we can use callback protocols, which we cover in the next section.
- Similarly to `*args`, defining a `TypedDict` will restrict the number of arguments that can be passed to the function. To annotate `kwargs` with a variable number of arguments of different types, we can use Parameter Specification. We will cover this in more detail in Part 2.
What is the benefit of unpacking typed tuples and dictionaries instead of just annotating each of the arguments as normal positional and keyword arguments?
The main benefit might not be obvious at first, but it allows us to define a data model outside the function. This way, the function can depend on the data model instead of needing to know about the specific arguments.
In Part 2, we will see that we can combine tuple unpacking with variadic generics to avoid limiting the number of arguments. For example, the first argument can be pinned to a specific type, while the rest of the arguments can be of any type.
Callables
We can also annotate callables. This is useful, for example, for functions that take other functions as arguments, or functions that return functions. A callable can be annotated using the `Callable` form, which takes a list of argument types and a return type. For instance, `Callable[[int], str]` represents a function that takes an `int` and returns a `str`.
In the example below, we have a `mapper` function that converts an `int` to a `str`. We also have a `map` function that takes a `mapper` function together with a list of `int`s and returns a list of `str`s. The `map` function applies the mapper to each element in the list, converting each `int` to a `str`:
```python
from collections.abc import Callable  # noqa: E402

def mapper(number: int) -> str:
    return str(number)

def map(
    mapper: Callable[[int], str],
    source: list[int],
) -> list[str]:
    return [mapper(x) for x in source]

xs = [1, 2, 3]
ys = map(mapper, xs)
reveal_type(ys)  # Type of "ys" is "list[str]"
```
Lambda Functions
We can also use lambdas to define the `mapper` function. Using lambdas can make the code more compact, since you don't need to define a separate function, and naming such a function can sometimes be challenging.
However, it is not possible to annotate lambda parameters with type hints. The type checker will have to infer the type of the lambda from the context.
```python
zs = map(lambda x: str(x), xs)
reveal_type(lambda x: str(x))  # Type is "(x: Unknown) -> str"  # Error: Argument type is unknown
reveal_type(zs)  # Type of "zs" is "list[str]"
```
You can always try to help the type checker by providing a type annotation for the variable. However, note that some linters will warn against assigning lambda functions to a variable.
```python
# Warning: Do not assign a `lambda` expression, use a `def`
mapper_: Callable[[int], str] = lambda x: str(x)  # noqa: E731
```
Callback Protocols
One limitation of the `Callable` form is that it doesn't allow us to specify variadic arguments like `*args` or `**kwargs`. Therefore, we cannot specify optional parameters with default values.
In the example below, we have a function `callback` that takes two arguments, but we have no way of specifying that the second argument is optional. As a result, we get an error when we try to call the `cb` function with only one argument:
```python
def callback(a: int, b: int | None = None) -> int:
    return a + (b or 20)

def use_callback(cb: Callable[[int, int | None], int]) -> int:
    return cb(10)  # Error: Expected 1 more positional argument

use_callback(callback)
```
To address these issues, we can use callback protocols. A callback protocol is a `Protocol` class that defines a `__call__` method. This method can have any number of positional or keyword parameters, with or without default values. The type checker will check that the `__call__` method matches the signature of the function that uses such a callback protocol.
In the example below, we define a `KeyboardEvent` protocol with a `__call__` method that takes a `keycode` parameter and an optional `completed` keyword parameter. The `on_event` function matches this signature. The `KeyboardEvent` protocol ensures that any function passed to the `register` function adheres to this signature:
```python
from typing import Protocol  # noqa: E402

class KeyboardEvent(Protocol):
    def __call__(self, keycode: int, *, completed: bool = False) -> None: ...

def on_event(keycode: int, *, completed: bool = False) -> None:
    # Handle the event
    ...

def register(event_callback: KeyboardEvent) -> None:
    ...
    event_callback(keycode=10)
    ...
    event_callback(keycode=10, completed=True)

register(on_event)
```
This concludes part 1 of our exploration into type annotations in Python. We've covered the basics of type annotations, type narrowing, structural sub-typing, and callables. By using these techniques, you can improve code readability, catch potential bugs early and ensure type safety in your Python projects.
In Part 2 of this series, we will build on these fundamentals and explore more advanced features such as generics, variadic generics, paramspec, and overloads.