Communicating through your API
The `api.py` file defines the initial API endpoints exposed by your service.
The two default endpoints are:
- `GET /api`, returns a simple healthcheck
- `GET /`, returns a simple HTML page
Emily APIs are built using FastAPI. Creating a new endpoint is simple - create a function as you normally would, receiving request data in the function parameters, and then decorate the function with the appropriate HTTP method decorator, supplying the endpoint route.
Each of the following decorators will create an API endpoint with the specified HTTP method reachable at http://localhost:4242/my-endpoint:
```python
@app.get('/my-endpoint')
@app.put('/my-endpoint')
@app.post('/my-endpoint')
@app.patch('/my-endpoint')
@app.delete('/my-endpoint')
@app.options('/my-endpoint')
@app.head('/my-endpoint')
```
Path parameters
To create an endpoint taking a parameter in the route path, use the Python string templating syntax to define which part of the path contains the parameter. By explicitly specifying the type on the identically named function parameter, FastAPI will automatically extract and parse the parameter appropriately:
```python
@app.get('/users/{id}')
def get_user_by_id(id: int):
    user = database.users.where(id=id)
    return { "user": user }
```
The above endpoint can be called with http://localhost:4242/users/7.
If the `id` provided cannot be parsed as an `int` (see the `id: int` type hint in the function signature), an HTTP 422 validation error will be returned to the caller.
Query parameters
To receive query parameters (e.g., parameters provided at the end of the URL after `?`), add a function parameter as follows:
```python
@app.get('/users/')
def get_all_users(include_followers: bool = Query(False)):
    users = database.users
    if include_followers:
        for user in users:
            user.followers = database.followers.where(followed_user_id=user.id)
    return users
```
The `include_followers` query parameter declaration states:
- `include_followers: bool`, the parameter must be parsable as a boolean value
- `include_followers = Query(False)`, the parameter is a query parameter, and its default value is `False`
The above endpoint can be called with http://localhost:4242/users/?include_followers=true.
An endpoint can contain arbitrarily many query parameters. When calling the endpoint, separate the query parameters by the `&` symbol, e.g. http://localhost:4242/users/?include_followers=true&limit=100.
Request body
The `POST`, `PUT`, and `PATCH` HTTP methods can all receive a request body - essentially a payload of data the endpoint needs to do its work.
By default, request bodies are accepted as `application/json`, but other request body content types - for example, receiving files - can also be specified.
FastAPI uses the Pydantic library to define the structure of expected JSON request bodies.
Imagine we need a `POST /predict` endpoint for predicting the closing price of a stock market instrument given the open, high, and low price points for the day.
First, define the request body:
```python
from pydantic import BaseModel

class PredictClosingPriceRequest(BaseModel):
    ticker: str
    open: float
    high: float
    low: float
```
The above model specifies that the endpoint must receive a JSON request body with three fields `open`, `high`, and `low` that can all be parsed as floating point values, and a ticker symbol (e.g., `AAPL`) that is just parsed as a string.
To receive this request body in an endpoint, simply add a function parameter and specify its type as the request model:
```python
@app.post('/predict')
def predict_closing_price(request: PredictClosingPriceRequest):
    logger.info(f'Predicting closing price for {request.ticker}...')
    prediction = neural_network.forward([request.open, request.high, request.low])
    return { "close": prediction }
```
The above endpoint can be called by providing a properly formatted JSON object as the request data:
```shell
curl -X POST 'http://localhost:4242/predict' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "ticker": "AAPL",
    "open": 19.1,
    "high": 25.312,
    "low": 18.2
  }'
```
Returning Pydantic models
In addition to receiving Pydantic models as request objects, FastAPI also supports returning Pydantic models directly from API endpoints. This can be useful for making sure the output of an endpoint always follows the contract established between the client and the API.
Let's specify a response model for the above endpoint:
```python
from pydantic import BaseModel

class PredictClosingPriceResponse(BaseModel):
    close: float
```
Now, we can specify the `response_model` argument in the endpoint decorator:
```python
@app.post('/predict', response_model=PredictClosingPriceResponse)
def predict_closing_price(request: PredictClosingPriceRequest):
    # -- snip --
    return { "close": prediction }
```
This allows FastAPI to validate the outgoing response before sending it to the caller.
If our returned object contains fields not included in the `response_model`, those fields will automatically be excluded from the response.
Additionally, if the `response_model` contains fields that are not included in our returned object, Pydantic will throw a validation error outright.
In general, it's typically easier and safer to construct the response model explicitly before returning. We'll be using the following endpoint specification in `api.py` in the next sections:
```python
from random import random

from pydantic import BaseModel
from loguru import logger

...

class PredictClosingPriceRequest(BaseModel):
    ticker: str
    open: float
    high: float
    low: float

class PredictClosingPriceResponse(BaseModel):
    close: float
    confidence: float

@app.post('/predict', response_model=PredictClosingPriceResponse)
def predict_closing_price(request: PredictClosingPriceRequest):
    logger.info(f'Predicting closing price for {request.ticker}...')
    # Pretend we're using a complicated AI model here...
    prediction = random()
    confidence = random()
    return PredictClosingPriceResponse(
        close=prediction,
        confidence=confidence
    )
```
Now that we have our `/predict` endpoint, let's deploy it to an actual server in the next section: Deploy your API.