Find models that you need, for educational purposes, transfer learning, or other uses. I've tested with OpenCV 3. MobileNet full architecture. After retraining on several model architectures, let's see how they compare. Creating an Object Detection Application Using TensorFlow: this tutorial describes how to install and run an object detection application. ModelZoo curates and provides a platform for deep learning researchers to easily find code and pre-trained models for a variety of platforms and uses. application_mobilenet() and mobilenet_load_model_hdf5() return a Keras model instance. Finally, the width multiplier and input resolution can be tuned to trade off between latency and accuracy. The MobileNet v2 architecture is based on an inverted residual structure, where the input and output of the residual block are thin bottleneck layers, in contrast to traditional residual models, which use expanded representations in the input. predictions_mobilenet = mobilenet_model.predict(processed_image_mobilenet); label_mobilenet = decode_predictions(predictions_mobilenet); print('label_mobilenet = ', label_mobilenet). Summary: we can use pre-trained models as a starting point for our training process, instead of training our own model from scratch. It demonstrates how to use mostly Python code to optimize a Caffe model and run inference with TensorRT. ml5.js is a library built on top of TensorFlow.js. I don't believe there's a direct mapping between the image and the closest 3D model in the database. Both are competitive model architectures for image recognition on mobile devices, though MobileNet generally identifies more objects from the ImageNet database accurately. mobilenet_v2_decode_predictions() returns a list of data frames with variables class_name, class_description, and score (one data frame per sample in batch input).
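The inverted residual structure described above can be made concrete with a small sketch. This is illustrative only: the helper name is mine, and the default expansion factor of 6 follows the MobileNetV2 paper rather than any particular library.

```python
def inverted_residual_shapes(c_in, c_out, stride, expansion=6):
    """Channel flow through a MobileNetV2-style inverted-residual block:
    1x1 expand -> 3x3 depthwise -> 1x1 linear projection."""
    hidden = c_in * expansion  # expand the thin bottleneck to a wide representation
    layers = [
        ("1x1 expand", c_in, hidden),
        ("3x3 depthwise, stride {}".format(stride), hidden, hidden),
        ("1x1 project (linear)", hidden, c_out),
    ]
    # The skip connection links the thin bottlenecks, not the wide hidden layer,
    # and only exists when the shapes line up.
    use_residual = (stride == 1 and c_in == c_out)
    return layers, use_residual

layers, use_residual = inverted_residual_shapes(24, 24, stride=1)
print(layers, use_residual)
```

Note how this is the opposite of a classic residual block: the wide representation lives inside the block, and the residual connects the narrow ends.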
It can be found in the TensorFlow object detection zoo, where you can download the model and the configuration files. MobileNet uses a depthwise convolution and combines it with the output of a 1×1 pointwise convolution. But when I try to use the model again with load_model_hdf5, … Width Multiplier α for Thinner Models. mobilenet_decode_predictions() returns a list of data frames with variables class_name, class_description, and score (one data frame per sample in batch input). It can also provide top-5 category predictions out of 1000 classes on ImageNet. We will use 224 0.5 for this codelab. There are also many flavours of pre-trained models, with the size of the network in memory and on disk being proportional to the number of parameters used. The advantages and shortcomings of the SSD and MobileNet-SSD frameworks were analyzed using fifty-nine individual traffic cameras. Retraining the model. modelLoaded: a Boolean value that specifies whether the model has loaded. Table 7 shows that ShuffleNet 2× is superior to MobileNet by a significant margin at both resolutions; ShuffleNet 1× also has results comparable to MobileNet at 600× resolution, with roughly 4× less complexity.
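The depthwise-plus-pointwise factorization just described is what makes MobileNet cheap. A back-of-the-envelope Mult-Adds comparison against a standard convolution (a sketch; the function names are mine):

```python
def conv_mult_adds(k, c_in, c_out, out_hw):
    """Mult-Adds of a standard k x k convolution producing an out_hw x out_hw map."""
    return k * k * c_in * c_out * out_hw * out_hw

def dws_conv_mult_adds(k, c_in, c_out, out_hw):
    """Mult-Adds of a depthwise (one k x k filter per channel) plus
    1x1 pointwise convolution over the same output map."""
    depthwise = k * k * c_in * out_hw * out_hw
    pointwise = c_in * c_out * out_hw * out_hw
    return depthwise + pointwise

std = conv_mult_adds(3, 32, 64, 112)
dws = dws_conv_mult_adds(3, 32, 64, 112)
print(std / dws)  # roughly an 8x reduction for 3x3 kernels
```

The reduction factor is 1/c_out + 1/k², so with 3×3 kernels the depthwise-separable form costs roughly 8 to 9 times fewer Mult-Adds.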
Line 25 instantiates the MobileNet model. To develop this model, the car dataset from Stanford was used, which contains 16,185 images of 196 classes of cars. The main thing that makes it stand out is the use of depth-wise separable (DW-S) convolution. However, with single shot detection, you gain speed but lose accuracy. You can run this demo using either the SqueezeNet model or Google's MobileNet model. As for the model, I've tried out SSD_Mobilenet v1, SSD_Mobilenet v2, and SSDLite Mobilenet, all available on TensorFlow's Object Detection Model Zoo GitHub page. Update: given MobileNet's limited feature-extraction capacity, I recently tried raising the resolution to 416×416 (which costs very little speed) and started from a COCO pre-trained model containing only the 4 target classes (extracted with a script), with an initial learning rate of 0.001. The image segmentation model we currently host is deeplabv3_257_mv_gpu. Extract and store features from the last fully connected layers (or intermediate layers) of a pre-trained deep neural net (CNN) using extract_features. Labels for the MobileNet v2 SSD model trained with the COCO (2018/03/29) dataset. TensorFlow's nn library has a depthwise convolution operator, tf.nn.depthwise_conv2d, so MobileNet is easy to implement in TensorFlow. Most importantly, MobileNet is pre-trained on the ImageNet dataset. # mobilenet: predictions_mobilenet = mobilenet_model.predict(processed_image_mobilenet); label_mobilenet = decode_predictions(predictions_mobilenet); print('label_mobilenet = ', label_mobilenet). Summary: we can use pre-trained models as a starting point for our training process, instead of training our own model from scratch. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build lightweight deep neural networks. mobilenet_preprocess_input() returns image input suitable for feeding into a MobileNet model. We will use 224 0.5 for this codelab. A trained model has two parts: the model architecture and the model weights. We recommend starting with this pre-trained quantized COCO SSD MobileNet v1 model. It means that the number of final model parameters should be larger than 3.3 million, because of the fc layer. Models available are 'MobileNet', 'Darknet' and 'Darknet-tiny', or any image classification model trained in Teachable Machine; callback - optional. I am using ssd_mobilenet_v1_coco for demonstration purposes.
This convolutional model has a trade-off between latency and accuracy. MobileNet (research paper): MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build lightweight deep neural networks, suitable for mobile applications. Hi, I am using the mobilenet model application_mobilenet to create a personal model that I have retrained. For complete evaluation results, please refer to here. Caffe version: converted from a Caffe version of the original MobileNet. Hi @AngelZheng, thanks for the reply. We refer to such a model as a pre-trained model. Model configuration is stored in GtsrbModelConfig. The default input size for this model is 224x224. The application uses TensorFlow and other public API libraries to detect multiple objects in an uploaded image. To do this, we need the images, matching TFRecords for the training and testing data, and then we need to set up the … The PriorBox layer is folded by the converter (for model/performance optimization reasons). model = load_model('mobilenet.… But thanks to transfer learning, a model trained on one task can be applied to other tasks. The base model will have the same weights from ImageNet. To be able to do that we need 2 things: … Download model. We picked a model with an input image size of 224×224 px. We will use 224 0.5 for this codelab. e.g., MobileNet, SqueezeNet, etc.
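On input preparation: Keras's mobilenet_preprocess_input() scales 8-bit pixel values into the [-1, 1] range the network expects. A toy pure-Python equivalent (illustrative only; the library version operates on whole arrays):

```python
def mobilenet_preprocess(pixels):
    """Scale 8-bit pixel values (0..255) to the [-1, 1] range MobileNet expects."""
    return [p / 127.5 - 1.0 for p in pixels]

print(mobilenet_preprocess([0, 127.5, 255]))  # [-1.0, 0.0, 1.0]
```

This is why you cannot feed raw 0..255 images into the pre-trained weights: the network was trained on inputs centered at zero.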
In the meantime you can use the model from hollance/MobileNet-CoreML on GitHub (not posting the link because then this comment takes a week to be moderated), which was converted from Caffe. 2019-05-20 update: I just added the Running TensorRT Optimized GoogLeNet on Jetson Nano post. The operation 'do_reshape_conf' takes ~90% of the total inference time. Hence, the width multiplier works by reducing the activation space until the whole space is spanned by the manifold of interest. The model was trained using pretrained VGG16, VGG19 and InceptionV3 models. Consequently, the PriorBox layer will not be written into the DLC file, and hence will not be listed in the DLC info for the model. About the MobileNet model size: according to the paper, MobileNet has 3.3 million parameters, which does not vary based on the input resolution. Check out the updated GitHub repo for the source code. MobileNet was mainly developed for embedded vision applications. #71 best model for Image Classification on ImageNet (Top 1 Accuracy metric): MobileNet-224, Top 1 Accuracy 70.6%. Note that this model only supports the data format 'channels_last' (height, width, channels). Download model. The model extends the concept of the width multiplier introduced in MobileNet v1, which deals with the possibility of representing the manifold of interest of a pixel, covered through 'd' dimensions (d = number of channels), in a lower-dimensional space. The network is 54 layers deep and can classify images into 1000 object categories, such as keyboard, mouse, pencil, and many animals. There's a trade-off between detection speed and accuracy: the higher the speed, the lower the accuracy, and vice versa. These classes include make, model, and year, e.g. …
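The width multiplier α above thins every layer by scaling its channel counts, so the cost of a depthwise-separable layer shrinks roughly quadratically with α. A quick sketch (the function name is mine):

```python
def dws_mult_adds(k, c_in, c_out, out_hw, alpha=1.0):
    """Mult-Adds of one depthwise-separable layer under width multiplier alpha."""
    c_in, c_out = int(alpha * c_in), int(alpha * c_out)  # thin both channel counts
    return (k * k * c_in + c_in * c_out) * out_hw * out_hw

full = dws_mult_adds(3, 64, 128, 56)
half = dws_mult_adds(3, 64, 128, 56, alpha=0.5)
print(half / full)  # roughly 0.25: quadratic savings
```

This is the knob behind variant names like "224 0.5": the second number is α, traded directly against accuracy.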
I've trained a model with a custom dataset (Garfield images) with the TensorFlow Object Detection API (ssd_mobilenet_v1 model) and referenced it in the Android sample application available in the TensorFlow repository. ml5.js is a library built on top of TensorFlow.js. Depthwise Separable Convolution. In this section, we present some of our results for applying various model compression methods to ResNet and MobileNet models on the ImageNet classification task, including channel pruning, weight sparsification, and uniform quantization. In my case, I will download ssd_mobilenet_v1_coco. Training took 18 minutes. In our example, I have chosen the MobileNet V2 model because it's faster to train and small in size. Object detection (trained on COCO): mobilenet_ssd_v2/ - MobileNet V2 Single Shot Detector (SSD). This DepthwiseConv2D layer is a very recent addition to Keras. Benchmarking results in milliseconds for the MobileNet v1 SSD 0.75 depth model and the MobileNet v2 SSD model, both trained using the Common Objects in Context (COCO) dataset with an input size of 300×300, for the Raspberry Pi 3, Model B+ (left), and the new Raspberry Pi 4, Model B (right). It is also very low-maintenance, and thus performs quite well at high speed. You should see a live stream from the camera, and if you open the Serial Terminal you will see the top image recognition result with the confidence score! The network structure is another factor to boost performance. Here is my training config: model { ssd { num_classes: 6 image_resizer { fixed_shape_resizer { height: 300 width: 300 … Smart reply.
It also provides high-quality features. Lines 36-38 convert the Keras MobileNet model into tf.… A similar speed benchmark was carried out, and the Jetson Nano achieved 11.54 FPS with the SSD MobileNet V1 model and a 300×300 input image. We will use the 0.5 version of MobileNet. In other words, a model trained on one task can be adjusted or fine-tuned to work for another task without explicitly training a new model from scratch. In this section, the ShuffleNet model is examined on the task of MS COCO object detection. Recently, two well-known object detection models have been YOLO and SSD; however, both cost too much computation for devices such as the Raspberry Pi. For Keras < 2.1.5, the MobileNet model is only available for TensorFlow, due to its reliance on DepthwiseConvolution layers. We will also initialize the base model with an input size matching the pre-processed image data we have, which is 160×160. hasAnyTrainedClass. The following are the properties and methods when MobileNet is selected as the model from which to extract the features: ml5.featureExtractor("MobileNet"). From the work we did together in the last video, we… The pose estimation model we currently host is multi_person_mobilenet_v1_075_float.
To convert from the .pb file to the OpenVINO-friendly files, I used: … Furthermore, this convolutional module is particularly suitable for mobile designs. To begin, we're going to modify the notebook first by converting it to a … We use the `.pb` file downloaded from Colab after training. In the above example, we used a pre-trained image classification model called MobileNet; we used ml5.js, a library built on top of TensorFlow.js, to load the MobileNet model into our browser and perform inference on the video feed; we also leveraged the P5.js library. TensorFlow.js: using a pretrained MobileNet. Hi, I'm Swastik Somani, a machine learning enthusiast. This network is one of the fastest models that also provides high accuracy. Real-time reporting and sharing of quality information go beyond the confines of a mere database to a true knowledge base. In this paper we describe a new mobile architecture, MobileNetV2, that improves the state-of-the-art performance of mobile models on multiple tasks and benchmarks, as well as across a spectrum of different model sizes. It is a general technique that reduces the numerical precision of the weights and activations of models to reduce memory and improve latency. Thanks, Anand C U. Open mobilenet.… The Dish Classifier model is designed to identify food in an image. mobilenet_decode_predictions() returns a list of data frames with variables class_name, class_description, and score (one data frame per sample in batch input). modelLoaded.
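As a sketch of the precision-reduction idea just described, here is a toy affine quantizer. This is illustrative only: TensorFlow Lite's integer quantization additionally handles zero points, per-axis scales, and calibration, and the helper names are mine.

```python
def quantize(values, num_bits=8):
    """Affine-quantize floats to integers in [0, 2**num_bits - 1]."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (2 ** num_bits - 1)
    if scale == 0:  # all values identical; avoid division by zero
        scale = 1.0
    q = [int(round((v - lo) / scale)) for v in values]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Map quantized integers back to approximate float values."""
    return [x * scale + lo for x in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, lo = quantize(weights)
print(q, dequantize(q, scale, lo))
```

Each weight now fits in one byte instead of four, at the cost of a rounding error of at most half a quantization step.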
With the examples in the SNPE SDK, I have modified and tested SNPE with MobileNet and Inception v1 successfully. mobilenet_v2_preprocess_input() returns image input suitable for feeding into a MobileNet v2 model. This demo uses the pretrained MobileNet_25_224 model from Keras, which you can find here. You can experiment further by switching between variants of MobileNet. Conclusion and further reading. We'll be using MobileNet-SSD v2 for our object detection model, as it's more popular; let's download its weights and config. These networks are trained for classifying images into one of 1000 categories or classes. def mobilenet1_0(**kwargs): r"""MobileNet model from the "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" paper.""" … Select IoT Hub and then AZURE IOT HUB DEVICES > … > Start Monitoring D2C Message to monitor the messages sent from the AI Vision. Mobilenet_v2. Today I will share how to create a face recognition model using a TensorFlow pre-trained model, with OpenCV used to detect the face. MobileNet-v2 is a convolutional neural network that is trained on more than a million images from the ImageNet database. Integer quantization is a new addition to the TensorFlow Model Optimization Toolkit.
Download: TensorFlow models repo, Raccoon detector dataset repo, TensorFlow object detection pre-trained model (here we use ssd_mobilenet_v1_coco), protoc-3.… Welcome to part 2 of the TensorFlow Object Detection API tutorial. 5 simple steps for Deep Learning. Adapting to the operating model of the various service providers, MobileNet continues to deliver RF engineering expertise to major wireless operators, equipment OEMs, and infrastructure companies including Verizon Wireless, AT&T Wireless, Sprint, Ericsson, Nokia, Crown Castle, American Tower, ExteNet and Mobilitie. With the recommended settings, it typically takes only a couple of minutes to retrain on a laptop. I think he's using MobileNet to label the object in the video and then uses that label to search up the 3D model (which I assume has labels of its own). MobileNet is thus easy to implement in TensorFlow using tf.nn.depthwise_conv2d. The base model is the model that is pre-trained. After that, I saved the model with save_model_hdf5. It is not trained to recognize human faces.
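To show what tf.nn.depthwise_conv2d computes, here is a naive pure-Python depthwise convolution: one k×k kernel per input channel, 'valid' padding, stride 1. This is illustrative only; the real TensorFlow operator works on 4-D tensors and is heavily optimized.

```python
def depthwise_conv2d(image, kernels):
    """image: [channels][H][W] nested lists; kernels: one [k][k] filter per channel.
    Each channel is convolved with its own filter; no cross-channel mixing."""
    out = []
    for channel, kernel in zip(image, kernels):
        k = len(kernel)
        h, w = len(channel), len(channel[0])
        plane = [[sum(channel[i + a][j + b] * kernel[a][b]
                      for a in range(k) for b in range(k))
                  for j in range(w - k + 1)]
                 for i in range(h - k + 1)]
        out.append(plane)
    return out

img = [[[1, 2, 3], [4, 5, 6], [7, 8, 9]]]          # one 3x3 channel
identity = [[[0, 0, 0], [0, 1, 0], [0, 0, 0]]]      # picks the center pixel
print(depthwise_conv2d(img, identity))  # [[[5]]]
```

In MobileNet this per-channel step is followed by a 1×1 pointwise convolution, which is where channels get mixed.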
Note: the MobileNet paper actually claims accuracy of 70.6%. Let's train our fine-tuned MobileNet model on images from our own data set, and then evaluate the model by using it to predict on unseen images. Gender Model: this model uses the IMDB-WIKI dataset, which contains 500k+ celebrity faces. Download the pre-trained model from the above link. To train the model in Caffe, follow the instructions at Caffe MobilenetSSD. Copy labels.txt and the .kmodel to the root of an SD card and insert the SD card into the Sipeed Maix Bit. model = load_model('mobilenet.… The object detection model we provide can identify and locate up to 10 objects in an image. Now you could train the entire SSD MobileNet model on your own data from scratch. To load a MobileNet model via load_model, import the custom object relu6 and pass it in the custom_objects parameter; for example: The latency and power usage of the network scale with the number of Multiply-Accumulates (MACs), which measures the number of fused multiplication and addition operations. ml5.featureExtractor("MobileNet"); Properties.
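Label files like labels.txt above are plain text with one class name per line. A tiny loader sketch (the helper name is mine; real deployments often also strip a leading index per line):

```python
def load_labels(path):
    """Read one class label per line from a labels.txt-style file, skipping blanks."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]
```

The list index then maps a class id returned by the detector back to a human-readable name, e.g. labels[class_id].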
Therefore we can take SSD-MobileNet into consideration. Line 28 makes predictions on the test image using the MobileNet model. I'm not sure if these results are on the ImageNet test set or the validation set, or exactly which part of the images they tested the model on. Open OpenMV IDE and press the connect button. One of the most-used models for computer vision in lightweight environments is MobileNet. batch_size - batch sizes for training (train) and validation (val) stages. Image classification using MobileNet. MobileNet Model: the backbone of our system is MobileNet, a novel deep NN model proposed by Google, designed specifically for mobile vision applications. Using the mobilenet_1.0_128 variant as the base model increases the model size to 17 MB but also increases accuracy to over 80%. After deciding which model to use, download the config file for that model.
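Top-5 prediction, as produced by decode_predictions(), boils down to softmaxing the class scores and keeping the five most probable (label, probability) pairs. A pure-Python sketch (illustrative; the real decode_predictions also maps indices to ImageNet class names and works on batches):

```python
import math

def top_k_predictions(logits, labels, k=5):
    """Softmax the logits and return the k most probable (label, prob) pairs."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # shift by max for numerical stability
    total = sum(exps)
    ranked = sorted(zip(labels, (e / total for e in exps)),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

print(top_k_predictions([2.0, 1.0, 0.1], ["cat", "dog", "fish"], k=2))
```

With 1000 ImageNet classes, k=5 gives the familiar "top-5" list reported throughout this document.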
# set a scale factor for the image, because the objects in the network have different sizes. The model files are the .tflite network and labels_mobilenet.txt. Image segmentation. Adjusting the subsequent learning rate according to the loss and accuracy, the accuracy has so far improved to about 62% after 50,000 iterations. For best performance, upload images of objects like piano, coffee mugs, bottles, etc.
Future works: a comparison table of speed (fps), accuracy (mAP), and model size, in which full-Yolo ran out of memory (OOM). The Gluon Model Zoo API, defined in the gluon.model_zoo package, … Depending on your computer, you may have to lower the batch size in the config file if you run out of memory. MobileNet-YOLOv3 has arrived (with open-source code for three frameworks). Training a model with SSD-MobileNet. ResNet-18 (research paper); the -152 version was the 2015 winner in multiple categories. First we should finish our model: in the last step we got the MobileNet model without the "top", so let's add the top: Do you want to use image recognition in your mobile app? To deploy machine learning models to your phone and get fast predictions, the model size is key. mobilenet_v2/ - MobileNet V2 classifier.
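Detection accuracy (mAP), as used in the comparison above, is built on the intersection-over-union between predicted and ground-truth boxes. A minimal IoU helper, with boxes given as (x1, y1, x2, y2) corners (the function name is mine):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap rectangle, clamped to zero when the boxes are disjoint
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7, about 0.143
```

A detection typically counts as a true positive only when its IoU with a ground-truth box exceeds a threshold such as 0.5, which is how the speed/accuracy trade-off discussed here is scored.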
All the 3 models have the same issue. For more information about smart reply, see Smart reply. It's based on the MobileNet model architecture and trained to recognize over 2,000 types of food. Line 31 gives the top-5 predictions of the test image. In this part of the tutorial, we will train our object detection model to detect our custom object. Reference: https://github.… We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. This model is only available with the TensorFlow backend, so only the 'channels_last' dimension ordering can be used. When you need to load MobileNet with load_model()… MobileNet suffered only a 1% loss in accuracy, while the Mult-Adds and parameters were reduced tremendously. The MobileNet model labeled this as … with a confidence of …. Replace the original mobilenet.… This module can be efficiently implemented using standard operations in any modern framework and allows our models to beat state of the art along multiple performance points using standard benchmarks. We noted, however, that many types of cat and dog breeds were included in the …
MobileNet is an architecture that is well suited to mobile and embedded vision applications where compute power is limited. Download the starter model and labels. This architecture was proposed by Google. Prepare the training dataset with flower images and their corresponding labels. They can be built upon for classification, detection, embeddings, and segmentation, similar to how other popular large-scale models, such as Inception, are used. I had more luck running the ssd_mobilenet_v2_coco model from the TensorFlow model detection zoo on the NCS 2 than I did with YOLOv3.