
Lifespan Hooks#

Usage example#

Let's imagine that your application uses pydantic as its settings manager.

Using pydantic for this purpose is highly recommended: FastStream already uses this dependency, so you don't have to install an additional package.

Also, let's imagine that you have several settings files (.env, .env.development, .env.test, .env.production) with your application settings, and you want to switch between them at startup without any code changes.
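For instance, a minimal .env.test file might look like this (the value is purely illustrative; pydantic matches environment variable names to field names case-insensitively):

```ini
host=localhost:9092
```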

FastStream makes this easy: optional command-line arguments are passed straight through to your code.

Lifespan#

Let's write some code for our example. The same setup is shown below for each supported broker:

**AIOKafka**

```python
from pydantic_settings import BaseSettings

from faststream import ContextRepo, FastStream
from faststream.kafka import KafkaBroker

broker = KafkaBroker()
app = FastStream(broker)


class Settings(BaseSettings):
    host: str = "localhost:9092"


@app.on_startup
async def setup(context: ContextRepo, env: str = ".env"):
    settings = Settings(_env_file=env)
    context.set_global("settings", settings)
    await broker.connect(settings.host)
```

**Confluent**

```python
from pydantic_settings import BaseSettings

from faststream import ContextRepo, FastStream
from faststream.confluent import KafkaBroker

broker = KafkaBroker()
app = FastStream(broker)


class Settings(BaseSettings):
    host: str = "localhost:9092"


@app.on_startup
async def setup(context: ContextRepo, env: str = ".env"):
    settings = Settings(_env_file=env)
    context.set_global("settings", settings)
    await broker.connect(settings.host)
```

**RabbitMQ**

```python
from pydantic_settings import BaseSettings

from faststream import ContextRepo, FastStream
from faststream.rabbit import RabbitBroker

broker = RabbitBroker()
app = FastStream(broker)


class Settings(BaseSettings):
    host: str = "amqp://guest:guest@localhost:5672/"


@app.on_startup
async def setup(context: ContextRepo, env: str = ".env"):
    settings = Settings(_env_file=env)
    context.set_global("settings", settings)
    await broker.connect(settings.host)
```

**NATS**

```python
from pydantic_settings import BaseSettings

from faststream import ContextRepo, FastStream
from faststream.nats import NatsBroker

broker = NatsBroker()
app = FastStream(broker)


class Settings(BaseSettings):
    host: str = "nats://localhost:4222"


@app.on_startup
async def setup(context: ContextRepo, env: str = ".env"):
    settings = Settings(_env_file=env)
    context.set_global("settings", settings)
    await broker.connect(settings.host)
```

**Redis**

```python
from pydantic_settings import BaseSettings

from faststream import ContextRepo, FastStream
from faststream.redis import RedisBroker

broker = RedisBroker()
app = FastStream(broker)


class Settings(BaseSettings):
    host: str = "redis://localhost:6379"


@app.on_startup
async def setup(context: ContextRepo, env: str = ".env"):
    settings = Settings(_env_file=env)
    context.set_global("settings", settings)
    await broker.connect(settings.host)
```

Now this application can be run using the following command to manage the environment:

```shell
faststream run serve:app --env .env.test
```

Details#

Now let's look at this in a little more detail.

To begin with, we use the @app.on_startup decorator

```python
@app.on_startup
async def setup(context: ContextRepo, env: str = ".env"):
    settings = Settings(_env_file=env)
    context.set_global("settings", settings)
    await broker.connect(settings.host)
```

to declare a function that runs when our application starts.

The next step is to declare our function parameters that we expect to receive:

```python
@app.on_startup
async def setup(context: ContextRepo, env: str = ".env"):
    settings = Settings(_env_file=env)
    context.set_global("settings", settings)
    await broker.connect(settings.host)
```

The env argument will be passed to the setup function from the user-provided command line arguments.

Tip

All lifecycle hooks are always wrapped with the @apply_types decorator, so all context fields and dependencies are available inside them.
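The idea behind such parameter injection can be sketched in plain Python (a toy illustration, not FastStream's actual implementation): parameters whose names match context fields get filled in automatically.

```python
import inspect

# toy global context, standing in for FastStream's ContextRepo
context = {"settings": {"host": "localhost:9092"}, "logger": "app-logger"}

def apply_types_sketch(func):
    """Fill missing parameters from the context by name (toy version)."""
    def wrapper(**overrides):
        kwargs = dict(overrides)
        for name in inspect.signature(func).parameters:
            if name not in kwargs and name in context:
                kwargs[name] = context[name]
        return func(**kwargs)
    return wrapper

@apply_types_sketch
def hook(settings, logger):
    return settings["host"], logger

print(hook())  # both parameters are resolved from the context
```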

Then, we initialize the settings of our application using the file passed to us from the command line:

```python
@app.on_startup
async def setup(context: ContextRepo, env: str = ".env"):
    settings = Settings(_env_file=env)
    context.set_global("settings", settings)
    await broker.connect(settings.host)
```

And put these settings in a global context:

```python
@app.on_startup
async def setup(context: ContextRepo, env: str = ".env"):
    settings = Settings(_env_file=env)
    context.set_global("settings", settings)
    await broker.connect(settings.host)
```

Note

Now we can access our settings anywhere in the application right from the context:

```python
from faststream import Context, apply_types

@apply_types
async def func(settings = Context()): ...
```

As the last step, we connect our broker: now, when the application starts, it will be ready to receive messages:

```python
@app.on_startup
async def setup(context: ContextRepo, env: str = ".env"):
    settings = Settings(_env_file=env)
    context.set_global("settings", settings)
    await broker.connect(settings.host)
```

Another example#

Now let's imagine that we have a machine learning model that needs to process messages from some broker.

Initialization of such models usually takes a long time. It would be wise to do this at the start of the application, and not when processing each message.

You could initialize your model somewhere at the top of your module/file. However, in that case the code would run even when the module is merely imported, for example, during testing.

Therefore, it is worth initializing the model in the @app.on_startup hook.

Also, we don't want the model to terminate incorrectly when the application is stopped. To avoid this, we also need to define the @app.on_shutdown hook:

**AIOKafka**

```python
from faststream import Context, ContextRepo, FastStream
from faststream.kafka import KafkaBroker

broker = KafkaBroker("localhost:9092")
app = FastStream(broker)

ml_models = {}  # fake ML model


def fake_answer_to_everything_ml_model(x: float) -> float:
    return x * 42


@app.on_startup
async def setup_model(context: ContextRepo):
    # Load the ML model
    ml_models["answer_to_everything"] = fake_answer_to_everything_ml_model
    context.set_global("model", ml_models)


@app.on_shutdown
async def shutdown_model(model: dict = Context()):
    # Clean up the ML models and release the resources
    model.clear()


@broker.subscriber("test")
async def predict(x: float, model: dict = Context()):
    result = model["answer_to_everything"](x)
    return {"result": result}
```

**Confluent**

```python
from faststream import Context, ContextRepo, FastStream
from faststream.confluent import KafkaBroker

broker = KafkaBroker("localhost:9092")
app = FastStream(broker)

ml_models = {}  # fake ML model


def fake_answer_to_everything_ml_model(x: float) -> float:
    return x * 42


@app.on_startup
async def setup_model(context: ContextRepo):
    # Load the ML model
    ml_models["answer_to_everything"] = fake_answer_to_everything_ml_model
    context.set_global("model", ml_models)


@app.on_shutdown
async def shutdown_model(model: dict = Context()):
    # Clean up the ML models and release the resources
    model.clear()


@broker.subscriber("test")
async def predict(x: float, model: dict = Context()):
    result = model["answer_to_everything"](x)
    return {"result": result}
```

**RabbitMQ**

```python
from faststream import Context, ContextRepo, FastStream
from faststream.rabbit import RabbitBroker

broker = RabbitBroker("amqp://guest:guest@localhost:5672/")
app = FastStream(broker)

ml_models = {}  # fake ML model


def fake_answer_to_everything_ml_model(x: float) -> float:
    return x * 42


@app.on_startup
async def setup_model(context: ContextRepo):
    # Load the ML model
    ml_models["answer_to_everything"] = fake_answer_to_everything_ml_model
    context.set_global("model", ml_models)


@app.on_shutdown
async def shutdown_model(model: dict = Context()):
    # Clean up the ML models and release the resources
    model.clear()


@broker.subscriber("test")
async def predict(x: float, model: dict = Context()):
    result = model["answer_to_everything"](x)
    return {"result": result}
```

**NATS**

```python
from faststream import Context, ContextRepo, FastStream
from faststream.nats import NatsBroker

broker = NatsBroker("nats://localhost:4222")
app = FastStream(broker)

ml_models = {}  # fake ML model


def fake_answer_to_everything_ml_model(x: float) -> float:
    return x * 42


@app.on_startup
async def setup_model(context: ContextRepo):
    # Load the ML model
    ml_models["answer_to_everything"] = fake_answer_to_everything_ml_model
    context.set_global("model", ml_models)


@app.on_shutdown
async def shutdown_model(model: dict = Context()):
    # Clean up the ML models and release the resources
    model.clear()


@broker.subscriber("test")
async def predict(x: float, model: dict = Context()):
    result = model["answer_to_everything"](x)
    return {"result": result}
```

**Redis**

```python
from faststream import Context, ContextRepo, FastStream
from faststream.redis import RedisBroker

broker = RedisBroker("redis://localhost:6379")
app = FastStream(broker)

ml_models = {}  # fake ML model


def fake_answer_to_everything_ml_model(x: float) -> float:
    return x * 42


@app.on_startup
async def setup_model(context: ContextRepo):
    # Load the ML model
    ml_models["answer_to_everything"] = fake_answer_to_everything_ml_model
    context.set_global("model", ml_models)


@app.on_shutdown
async def shutdown_model(model: dict = Context()):
    # Clean up the ML models and release the resources
    model.clear()


@broker.subscriber("test")
async def predict(x: float, model: dict = Context()):
    result = model["answer_to_everything"](x)
    return {"result": result}
```

Multiple hooks#

If you declare multiple lifecycle hooks, they will be executed in the order they are registered:

```python
from faststream import Context, ContextRepo, FastStream

app = FastStream()


@app.on_startup
async def setup(context: ContextRepo):
    context.set_global("field", 1)


@app.on_startup
async def setup_later(field: int = Context()):
    assert field == 1
```
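Registration-order execution can be sketched in plain Python (a simplified model, not FastStream internals): hooks are collected into a list at decoration time and then awaited one by one.

```python
import asyncio

startup_hooks = []
order = []

def on_startup(func):
    startup_hooks.append(func)  # remember hooks in registration order
    return func

@on_startup
async def setup():
    order.append("setup")

@on_startup
async def setup_later():
    order.append("setup_later")

async def run_startup():
    for hook in startup_hooks:
        await hook()  # awaited sequentially, in registration order

asyncio.run(run_startup())
print(order)  # ['setup', 'setup_later']
```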

Some more details#

Async or not async#

In the asynchronous version of the application, both asynchronous and synchronous methods can be used as hooks. In the synchronous version, only synchronous methods are available.

Command line arguments#

Command line arguments are available in all @app.on_startup hooks. To use them in other parts of the application, put them in the ContextRepo.

Broker initialization#

The @app.on_startup hooks are called BEFORE the broker is launched by the application. The @app.after_shutdown hooks are triggered AFTER stopping the broker.

If you want to perform actions AFTER the broker has been started (send messages, initialize objects, etc.), use the @app.after_startup hook.
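Putting it all together, the lifecycle ordering described above can be summarized as a simple sequence (a schematic model of the event order, not FastStream's actual code):

```python
import asyncio

events = []

async def lifecycle():
    events.append("on_startup")      # hooks run before the broker starts
    events.append("broker started")  # the application launches the broker
    events.append("after_startup")   # broker is ready: safe to send messages
    # ... the application processes messages here ...
    events.append("on_shutdown")     # hooks run before the broker stops
    events.append("broker stopped")  # the application stops the broker
    events.append("after_shutdown")  # hooks run after the broker has stopped

asyncio.run(lifecycle())
print(events)
```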