AWS AMI

Overview

The AMI runs the latest DeepDetect server with Caffe, XGBoost and the similarity search / recommendation engine built in.

Features

  • The AMI comes ready to use and is automatically updated to the latest version of the DeepDetect server
  • Comes with a REST API, JSON input / output and a range of clients (Python, Go, Javascript, …)
  • A range of neural network models for image and text processing ready to be used for inference with a single call
  • Ready for both training and inference of new models for images, audio, text, time-series and tabular data

Quickstart

  • Launch the GPU AMI

  • The DeepDetect server listens on http://<yourpublicip>:8080/info, and on http://localhost:8080/info from the command line inside the AMI (i.e. once you have logged into your AMI with ssh)

Check that DeepDetect is running correctly

  • Try an info call:

From outside your AMI:


curl -X GET 'http://yourpublicip:8080/info'

Output should look like:


{
  "status": {
    "code": 200,
    "msg": "OK"
  },
  "head": {
    "method": "/info",
    "version": "0.1",
    "branch": "master",
    "commit":"c8556f0b3e7d970bcd9861b910f9eae87cfd4b0c",
    "services": []
  }
}

Note: the commit hash may differ

  • Check the server logs; see the Server Logs section below for details
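The info call can also be checked programmatically. Here is a minimal Python sketch (not part of the official clients, just an illustration) that tests the status code of a parsed /info response; it is shown running on the sample response above:

```python
import json

def server_is_up(info):
    """Return True when a parsed /info response reports status code 200."""
    return info.get("status", {}).get("code") == 200

# Sample /info response, as shown above (the commit hash will differ).
sample = json.loads("""
{
  "status": {"code": 200, "msg": "OK"},
  "head": {"method": "/info", "version": "0.1", "branch": "master",
           "commit": "c8556f0b3e7d970bcd9861b910f9eae87cfd4b0c",
           "services": []}
}
""")

print(server_is_up(sample))  # True when the server answers normally
```

In a live setup the same check would be run on the JSON returned by the curl call above.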

Usage

Here is how to create a simple image classification service and run a prediction test:

Service creation


curl -X PUT 'http://localhost:8080/services/imageserv' -d '{
    "description": "image classification service",
    "mllib": "caffe",
    "model": {
        "init": "https://deepdetect.com/models/init/desktop/images/classification/ilsvrc_googlenet.tar.gz",
        "repository": "/opt/model/ilsvrc_googlenet"
    },
    "parameters": {
        "input": {
            "connector": "image"
        }
    },
    "type": "supervised"
}
'

should yield:


{
  "status":{
    "code":201,
    "msg":"Created"
  }
}
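The same service-creation payload can be built from any language. A minimal Python sketch (assuming the `requests` package for the actual HTTP call, which is commented out here); note that the service name in the URL must match the "service" field sent at predict time, which the predict example below sets to imageserv:

```python
import json

service_name = "imageserv"  # must match the "service" field used at predict time
payload = {
    "description": "image classification service",
    "mllib": "caffe",
    "model": {
        "init": "https://deepdetect.com/models/init/desktop/images/classification/ilsvrc_googlenet.tar.gz",
        "repository": "/opt/model/ilsvrc_googlenet",
    },
    "parameters": {"input": {"connector": "image"}},
    "type": "supervised",
}

body = json.dumps(payload)
# With the `requests` package installed, the call would be:
#   requests.put(f"http://localhost:8080/services/{service_name}", data=body)
print(body[:40])
```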

Image classification


curl -X POST "http://localhost:8080/predict" -d '{
  "service":"imageserv",
  "parameters":{
    "input":{},
    "output":{
      "best":3
    },
    "mllib":{
      "gpu":true
    }
  },
  "data":[
    "http://i.ytimg.com/vi/0vxOhd4qlnA/maxresdefault.jpg"
  ]
}'

should yield:


{
  "status":{
    "code":200,
    "msg":"OK"
  },
  "head":{
    "method":"/predict",
    "time":852.0,
    "service":"imageserv"
  },
  "body":{
    "predictions":{
      "uri":"http://i.ytimg.com/vi/0vxOhd4qlnA/maxresdefault.jpg",
      "classes":[
        {
          "prob":0.2255125343799591,
          "cat":"n03868863 oxygen mask"
        },
        {
          "prob":0.20917612314224244,
          "cat":"n03127747 crash helmet"
        },
        {
          "last":true,
          "prob":0.07399296760559082,
          "cat":"n03379051 football helmet"
        }
      ]
    }
  }
}
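To consume such a response, a client typically extracts the highest-probability class. A minimal Python sketch, run here on the "body" of the sample response above (trimmed to the fields it uses):

```python
import json

# The "body" of the sample prediction response above.
response = json.loads("""
{
  "body": {
    "predictions": {
      "uri": "http://i.ytimg.com/vi/0vxOhd4qlnA/maxresdefault.jpg",
      "classes": [
        {"prob": 0.2255125343799591, "cat": "n03868863 oxygen mask"},
        {"prob": 0.20917612314224244, "cat": "n03127747 crash helmet"},
        {"last": true, "prob": 0.07399296760559082, "cat": "n03379051 football helmet"}
      ]
    }
  }
}
""")

classes = response["body"]["predictions"]["classes"]
best = max(classes, key=lambda c: c["prob"])  # highest-probability class
print(best["cat"])  # n03868863 oxygen mask
```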

API clients

The recommended API clients are the Python, Go and Javascript clients.

Version

Since version 1.4 (the latest) of the DeepDetect AMI, the DeepDetect server is updated automatically at startup.

Specifications

  • Ubuntu 18.04
  • CUDA 10 with cuDNN 7.1
  • OpenBLAS
  • Caffe with custom improvements
  • XGBoost, latest version
  • DeepDetect, latest version

Server Logs

Server logs are accessible at /var/log/deepdetect.log.

Typical log at AMI startup should look like:


DeepDetect [ commit f7d27d73005db2832ef445153e42b5641104ff4f ]
Running DeepDetect HTTP server on :8080

In case of difficulties, please report the server logs along with your request.
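A quick health check is to look for the server's startup banner in the log text. A minimal Python sketch (an illustration, not an official tool), shown running on the sample log lines above:

```python
STARTUP_MARKER = "Running DeepDetect HTTP server"

def server_started(log_text):
    """True when the log text contains the HTTP-server startup line."""
    return STARTUP_MARKER in log_text

# Sample startup log, as shown above (the commit hash will differ).
sample_log = (
    "DeepDetect [ commit f7d27d73005db2832ef445153e42b5641104ff4f ]\n"
    "Running DeepDetect HTTP server on :8080\n"
)

print(server_started(sample_log))  # True
```

On the AMI itself, the equivalent check would read the text of /var/log/deepdetect.log.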

SSH Access to AMI

To get started, launch an AWS instance using this AMI from the EC2 Console. If you are not familiar with this process, please review the AWS documentation provided here:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html

Accessing the instance via SSH:


ssh -i { path to your key pair .pem file } ubuntu@{ EC2 Instance Public IP }

From there you can reach the server on localhost:8080, with an info call for instance:


curl -X GET 'http://localhost:8080/info'

Issues

It is recommended to also look at the list of currently known issues. If nothing there is relevant, you can also search the closed issues at https://github.com/jolibrain/deepdetect.

In any case, for any issue you can contact support.

Known issues

  • After a reboot, the DeepDetect server does not come back up? The auto-update may take some time, along with the Ubuntu security updates. Wait at least five to ten minutes. If the DeepDetect server is still not back up, ssh into the AMI and run sudo docker ps. If nothing shows, run top and check whether docker processes are among the top ones, which means the update is still under way. If it is, wait until it has finished.

  • After a reboot, the server is still not coming back up? This is most likely due to Ubuntu auto-updates installing a new kernel without the NVidia driver required by the EC2 GPU instance. One known solution is to log onto your instance with ssh and run:


nvidia-smi

This should tell you that the current kernel does not have the required driver. Remove the kernel with:


sudo aptitude remove linux-image-4.4.0-97-generic

(change the kernel version according to nvidia-smi output).

  • The g2.2xlarge EC2 GPU instances do not appear to have enough GPU memory for resnet_50 and above. Try a p2.xlarge instance instead.

Server crash? The DeepDetect server is robust to errors and, being open source, has been tested under heavy load by us and customers alike.

Some situations remain from which the server cannot recover, typically:

  • when the machine runs out of memory (e.g. the neural net is too large for RAM or GPU VRAM)
  • when the underlying deep learning library (e.g. Caffe or Tensorflow) cannot itself recover from a memory or compute error

Note: the server automatically restarts after any unrecoverable failure.
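Because the server restarts automatically after an unrecoverable failure, client code can simply retry a failed call after a short wait. A minimal, hypothetical Python sketch (the `call` argument stands in for any API request function; in real use, `wait_seconds` would be several seconds):

```python
import time

def call_with_retry(call, attempts=3, wait_seconds=0.0):
    """Retry call() a few times, e.g. while the server is restarting."""
    last_error = None
    for _ in range(attempts):
        try:
            return call()
        except Exception as error:  # e.g. a connection error during restart
            last_error = error
            time.sleep(wait_seconds)
    raise last_error

# Toy demonstration: a call that fails twice, then succeeds.
state = {"failures_left": 2}
def flaky():
    if state["failures_left"] > 0:
        state["failures_left"] -= 1
        raise ConnectionError("server restarting")
    return "OK"

print(call_with_retry(flaky))  # OK
```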

In all cases, if you experience what you believe is a server crash, always contact support.

Free Trial

The AMI does not offer a free trial, since our Docker builds are available for free for both CPU and GPU.

Another way to test the product is to build it from sources, see https://github.com/jolibrain/deepdetect.

Support

Email your requests to ami@deepdetect.com

Please allow 24 hours, or use the Gitter live chat for a faster response.