Reflex is a recent (but exciting!) entrant to the world of Python web frameworks. Fly.io is a hosting provider that lets you get your applications into production quickly. In this blog post we'll have a look at how you can deploy your Reflex applications on Fly.io.
Background
Before looking at how to deploy Reflex apps to Fly, we need to understand how to deploy a Reflex app on any platform. In our previous blog post we described how to run Reflex apps in production, including the general structure of a Reflex app and the underlying deployment artifacts you need to account for.
If you haven't read that post, please read it now and come back once you're done. In the meantime, we'll prepare ourselves a coffee and wait for you.
Pre-requisites
Now that the background is out of the way, let's focus on putting Reflex apps on Fly.io.
In this blog post we'll assume that you have flyctl installed on your machine. If not, please follow their installation instructions to do so. We'll also, of course, assume that you have a deployment-ready Reflex application.
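At the time of writing, Fly's documented install commands look roughly like this (do check their docs for the current instructions for your platform):
# Linux / macOS: Fly's install script
curl -L https://fly.io/install.sh | sh
# macOS alternative: Homebrew
brew install flyctl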
The first step is to authenticate yourself using the CLI. Run fly auth login on the command line. Running this command will open a browser where you can enter your login credentials, and then redirect you back to the terminal.
~/W/p/g/code main ❯ fly auth login
Opening https://fly.io/app/auth/cli/abcc63c7f3aa261e573cc802a3695xyz ...
Waiting for session... Done
successfully logged in as [email protected]
Deployment Artifact(s)
Fly uses Dockerfiles to run applications. Internally, it converts the resulting Docker images into micro-VMs to run the app in production, but all of that isn't relevant to this blog post (although it is certainly very interesting!).
This means your application needs a Dockerfile to run on Fly. As we saw in the previous blog post, to run a Reflex app in production, we want to be able to run a FastAPI backend and a Next.js frontend. One approach to do this is to run the backend and frontend on different ports and put an Nginx server in front of them both. Nginx will intercept all HTTP requests, and based on what the request is for, relay it to either the backend or frontend. Additionally, to keep the three processes running, we can use an off-the-shelf process manager. Supervisord is one such popular tool, which we also mentioned in the previous blog post, so let's use that.
All this means that the architecture looks as follows:
- FastAPI server runs on port 8000 and serves backend requests.
- Next.js server runs on port 3000 and serves frontend requests.
- Nginx runs on port 80, intercepting all HTTP requests and relaying them to either FastAPI or Next.js.
- Supervisord runs as the process manager and keeps all three processes running.
Amongst these four components, the FastAPI and Next.js processes are handled by Reflex out of the box (using reflex run --backend-only and reflex run --frontend-only, respectively).
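Concretely, these are the two commands that Supervisord will run for us below, shown here as you might run them manually in two terminals to sanity-check the app before containerizing it (the ports match the Nginx configuration that follows):
# Terminal 1: run only the backend on port 8000
poetry run reflex run --env prod --backend-only --backend-port 8000
# Terminal 2: run only the frontend on port 3000
poetry run reflex run --env prod --frontend-only --frontend-port 3000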
Let's look at what the Nginx and Supervisord configuration files look like, and close the loop with a Dockerfile that connects everything.
1. Nginx configuration
In the previous blog post, we showed what an example Nginx configuration file can look like, along with a description of how it works. Here's the code snippet again for easy reference:
server {
    # ...

    # backend
    location ~ ^/(admin|ping|upload) {
        proxy_pass http://0.0.0.0:8000;
        # ...
    }

    # websocket
    location ~ /_event/ {
        proxy_pass http://0.0.0.0:8000;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Upgrade $http_upgrade;
        # ...
    }

    # frontend
    location / {
        proxy_pass http://0.0.0.0:3000;
        # ...
    }
}
The main thing to note is that we have location blocks configured for the backend, the frontend, and the websocket requests used internally by Reflex.
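One detail worth calling out: because Nginx will be run under Supervisord (see the next section), the full nginx.conf (elided above) needs to keep Nginx in the foreground with daemon off;, otherwise Supervisord will think the process has died and keep restarting it. To catch syntax errors early, you can also run Nginx's built-in configuration test (adjust the path if you're testing outside the container):
# Validate the configuration file without actually starting the server
nginx -t -c /app/nginx.conf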
2. Supervisord configuration
As with Nginx, the previous blog post also showed what an example Supervisord configuration file can look like, along with a quick description of how it works. We'll repeat the code snippet here for easy reference.
[supervisord]
nodaemon=true

[program:backend]
directory=/app
command=poetry run reflex run --env prod --backend-only --backend-port 8000
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes=0

[program:frontend]
directory=/app
command=poetry run reflex run --env prod --frontend-only --frontend-port 3000
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes=0

[program:nginx]
directory=/app
command=nginx -c /app/nginx.conf
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes=0
In this snippet we're asking Supervisord to run the backend, frontend, and Nginx processes, and keep them running in case they stop for some reason.
3. Dockerfile
The final piece of the puzzle is a Dockerfile that Fly can work with. Here's an example of what it could look like:
FROM ubuntu:22.04
# Install dependencies
RUN apt-get update
RUN apt-get install --yes build-essential curl nginx supervisor unzip
RUN curl -fsSL https://deb.nodesource.com/setup_19.x | bash -
RUN apt-get install --yes build-essential python3 python3-dev python3-venv nodejs
RUN ln -sf /dev/stdout /var/log/nginx/access.log && ln -sf /dev/stderr /var/log/nginx/error.log
# Set the main working directory
WORKDIR /app
# Set up a virtual environment to install all Python packages
ENV VIRTUAL_ENV=/app/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN pip install poetry
# Copy all the source code to the main working directory
COPY pyproject.toml poetry.lock rxconfig.py /app/
COPY app /app/app
COPY assets /app/assets
COPY supervisord.conf /app/supervisord.conf
COPY nginx.conf /app/nginx.conf
# Install dependencies
RUN poetry install
RUN poetry run reflex init
CMD ["/usr/bin/supervisord", "-c", "/app/supervisor.conf"]
EXPOSE 80
We've added inline comments to describe what's going on in each section of the Dockerfile.
Essentially, we start from Ubuntu 22.04 as the base image and install the core
dependencies we'll need (e.g. Python, Nginx, Supervisor, etc.). We then set the main
working directory and set up a virtual environment where all the Python dependencies
will be installed. Next, we copy our application source code to the main working
directory, which is followed by installing the Python dependencies and running reflex init. Finally, HTTP port 80 is exposed to the outside world so that Nginx can receive and respond to requests.
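If you have Docker installed locally, it can be worth building and running the image once before handing it over to Fly, just to confirm that all three processes come up (the image tag and host port below are arbitrary):
# Build the image from the Dockerfile above
docker build -t reflex-fly-example .
# Run it, mapping host port 8080 to Nginx's port 80 inside the container
docker run --rm -p 8080:80 reflex-fly-example
# The app should now be reachable at http://localhost:8080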
Deployment Process
Now that we have all the deployment artifacts in place, let's move to the deployment itself!
1. Launch
In the terminal, run fly launch. It will ask you a few questions, such as what the name of your application should be, which region it should be launched in, and so on. After those are out of the way, it'll get to work putting your Reflex application into production.
~/W/p/g/code main ❯ fly launch
Scanning source code
Detected a Dockerfile app
Creating app in /tmp/example
We're about to launch your app on Fly.io. Here's what you're getting:
Organization: Personal (fly launch defaults to the personal org)
Name: example-2023-12-09 (derived from your directory name)
Region: Frankfurt, Germany (this is the fastest region for you)
App Machines: shared-cpu-1x, 1GB RAM (most apps need about 1GB of RAM)
Postgres: <none> (not requested)
Redis: <none> (not requested)
? Do you want to tweak these settings before proceeding? No
Waiting for launch data... Done
Created app 'example-2023-12-09' in organization 'personal'
Admin URL: https://fly.io/apps/example-2023-12-09
Hostname: example-2023-12-09.fly.dev
Wrote config file fly.toml
Validating /path/to/example/fly.toml
Platform: machines
✓ Configuration is valid
... output snipped ...
We've snipped some of the output lines in the previous snippet to avoid showing sensitive information. But towards the end you'll see a message from Fly.io that your application is ready!
2. Configure
Once the application is launched, you might need to set a few environment variables for it. You can do that using flyctl secrets set.
$ flyctl secrets set API_KEY=example ...
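To double-check which secrets are set (Fly shows only their names and digests, never the values), you can list them:
$ flyctl secrets list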
3. Deploy
At this point your app should be ready for deployment! Run flyctl deploy to do that.
$ flyctl deploy
You'll see a lot of lines go by that tell you what the deployment process is currently up to, including building and pushing the Docker image and updating any existing machines. At the end, if everything went well, you'll see the following message on the console:
Visit your newly deployed app at https://example.fly.dev/
This means that your application is now deployed to the public internet and is ready to accept user requests. Go ahead, open the link in your browser, and see for yourself!
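If you prefer the command line, a couple of quick sanity checks (using the example hostname from above) could look like this:
# Confirm the app responds over HTTPS
curl -I https://example.fly.dev/
# Tail the application logs (the backend, frontend, and Nginx output all end up here)
fly logs
# Show the status of the app and its machines
fly status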