Model Deployment

Once a model has been trained, the easiest way to deploy it is to use a Docker container for your target platform. Prebuilt Docker images are available from https://hub.docker.com/u/jolibrain and include support for various GPUs, CPUs, and embedded devices such as the Raspberry Pi.

Documentation on how to pull and start the Docker container of your choice is available from https://github.com/jolibrain/deepdetect/tree/master/docker.
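As a sketch, pulling and starting the CPU image could look like the following; the port mapping and mounted directory are assumptions, so check the Docker documentation linked above for the exact options for your platform:

```shell
# Pull a prebuilt image from the jolibrain Docker Hub organization
# (GPU and embedded variants are also published there).
docker pull jolibrain/deepdetect_cpu

# Start the server, exposing the DeepDetect API on port 8080 and
# mounting a local models directory into the container.
docker run -d -p 8080:8080 -v $HOME/models:/opt/models jolibrain/deepdetect_cpu
```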

Once the Docker container is running, follow the steps below:

  • Download your trained model's files from the predict page

Model download from predict page

Model download from model snippet

  • Place the files in a new model directory on your target platform
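As a sketch, staging the downloaded files could look like this; the directory path and file names are examples, not fixed by the platform:

```shell
# Create a repository directory for the model on the target machine;
# the path is an example and can be anywhere the server can read.
MODEL_DIR="${MODEL_DIR:-$HOME/models/yourmodel}"
mkdir -p "$MODEL_DIR"

# Move the files downloaded from the predict page into place, e.g.:
# mv ~/Downloads/deploy.prototxt ~/Downloads/model.caffemodel "$MODEL_DIR"/
```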

  • Open the config.json file in the model directory, update its paths to match the new directory, then load the model either from Python with the client or from the command line (see examples on the right)
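A minimal sketch of the path update, using a stand-in config.json — the JSON content and both paths below are invented for illustration; your real config.json comes with the downloaded model:

```shell
workdir=$(mktemp -d)

# Stand-in for the config.json shipped with the model; the real file
# records the repository path that was used on the platform.
printf '{"model":{"repository":"/opt/platform/models/private/yourmodel"}}\n' \
  > "$workdir/config.json"

# Rewrite the old platform path to the model directory on this machine.
sed -i 's|/opt/platform/models/private/yourmodel|/opt/models/yourmodel|g' \
  "$workdir/config.json"
cat "$workdir/config.json"
```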

  • Service creation example
curl -X PUT http://localhost:8080/services/myservice -d '{
    "description": "object detection service",
    "mllib": "caffe",
    "model": {
        "repository": "/opt/models/yourmodel/"
    },
    "parameters": {
        "input": {
            "connector": "image",
            "height": 300,
            "width": 300
        },
        "mllib": {
            "gpu": true,
            "nclasses": 10
        }
    },
    "type": "supervised"
}
'
  • Deploying from a tarball (try it!)
curl -X PUT http://localhost:8080/services/myservice -d '{
    "description": "object detection service",
    "mllib": "caffe",
    "model": {
        "repository": "/opt/models/detection_600/",
	"init":"https://deepdetect.com/models/init/desktop//img/platform/docs/detection/detection_600.tar.gz",
	"create_directory":true
    },
    "parameters": {
        "input": {
            "connector": "image",
            "height": 300,
            "width": 300
        },
        "mllib": {
            "gpu": true,
        }
    },
    "type": "supervised"
}
'
  • Use the model from Python, JavaScript, or other languages, including the command line; the code snippets can be copied from the prediction user interface. This lets you verify that your results exactly match those obtained on the platform.
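From the command line, a prediction call against the service created above might look like the following sketch; the image path is a placeholder, and the exact snippet for your model can be copied from the platform UI:

```shell
curl -X POST http://localhost:8080/predict -d '{
    "service": "myservice",
    "parameters": {
        "output": {
            "bbox": true,
            "confidence_threshold": 0.3
        }
    },
    "data": ["/opt/models/yourmodel/test.jpg"]
}'
```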

Model prediction using code snippets from the platform UI
