API Endpoints Reference¶
API documentation for the FastAPI inference server.
API Module¶
mlops_project.api¶
PredictionResponse¶
Bases: BaseModel
Response model for predictions.
load_config()¶
Load the Hydra config without changing the working directory.
pull_models_dvc()¶
Pull models from the DVC remote if the models directory is empty or missing.
find_latest_model(model_name='EfficientNet')¶
Find the latest ONNX model file for a given model name.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model_name | str | Name of the model (e.g., "EfficientNet", "ResNet") | 'EfficientNet' |
Returns:

| Type | Description |
|---|---|
| Path \| None | Path to the latest model file, or None if not found |
async lifespan(app)¶
Load the model on startup and clean up on shutdown.
read_root()¶
Root endpoint.
health_check()¶
Health check endpoint.
async perform_inference(file=File(...))¶
Perform inference on an uploaded image.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| file | UploadFile | Image file (JPG, PNG, etc.) | File(...) |
Returns:

| Type | Description |
|---|---|
| | Dictionary with predicted class, diagnosis name, and probabilities |
Raises:

| Type | Description |
|---|---|
| HTTPException | If model is not loaded or inference fails |
Response Models¶
PredictionResponse¶
mlops_project.api.PredictionResponse¶
Bases: BaseModel
Response model for predictions.
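Given the /predict return description (predicted class, diagnosis name, and probabilities), the response model plausibly has a shape like the one below. The actual class subclasses pydantic's BaseModel; this dataclass stand-in only illustrates the assumed field layout, and every field name here is hypothetical.

```python
from dataclasses import dataclass


@dataclass
class PredictionResponse:
    predicted_class: int        # assumed: index of the winning class
    diagnosis: str              # assumed: human-readable diagnosis name
    probabilities: list[float]  # assumed: per-class probability scores
```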
Endpoints¶
GET /¶
Root endpoint returning a welcome message.
GET /health¶
Health check endpoint.
Response:
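A typical payload for a health endpoint might look like the following; the exact keys are an assumption, not taken from the server code.

```json
{"status": "ok", "model_loaded": true}
```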
POST /predict¶
Classify a skin lesion image.
Request: multipart/form-data with an image file
Example:
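A request can be issued with curl; the host, port, and filename below are placeholders, not values from the project.

```bash
curl -X POST -F "file=@lesion.jpg" http://localhost:8000/predict
```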
Response:
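An illustrative response body, assuming the fields described in the Returns table above; the field names, class labels, and values are hypothetical.

```json
{
  "predicted_class": 1,
  "diagnosis": "melanocytic nevus",
  "probabilities": [0.02, 0.91, 0.07]
}
```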