
Add an extra layer of pet immunity to a conventional motion-sensing system.

Cover Story: With motion detectors installed at home, you expect an instant response to intruders. If an intruder steps over the threshold or breaks in through a window, an alarm is triggered, and you can rest assured that any high-quality alarm device can handle this task. But responding to motion is only half the battle: an important quality of a motion detector is the ability to determine what exactly is moving. It must quickly and accurately detect a person while ignoring natural interference and, of course, our beloved pets, so that when the security system activates the alarm you can be completely sure the threat is real and needs an immediate response.

Traditional Installation Conditions:
1. Make sure that the motion detector is installed at the optimum height.
2. Set the detector to appropriate sensitivity.
3. Make sure that the pet cannot approach the lens of the room detector.
4. Set the sensor to a particular pulse count based on user experience and anomaly response.

When marketing a pet-friendly motion sensor, many companies state that the sensor “provides pet immunity for pets up to X pounds” (and what about pets beyond X?). This means that pets smaller than the stated value are less likely to cause enough motion to activate the sensor. However, a pet may still activate the sensor if it moves directly in front of it or moves around very quickly. Motion sensors mounted at too low a height are also more prone to being activated by pets.

Proposed Solution:
The MAX-Motion project provides a unique solution to the above-mentioned challenges: an ordinary motion-sensing system can be enhanced to compare favorably with the expensive solutions in the home-security segment. MAX-Motion uses the on-chip ML inferencing and classification capabilities of the MAX78000 CNN processor, alongside a motion sensor, to provide intelligent pet immunity in a motion-sensing system.
Hardware System:
MAX78000FTHR development board: an Arm Cortex-M4F processor with an integrated convolutional neural network accelerator, targeted at power-optimized AI applications running at the edge (such as in a home, an industrial setting, or a vehicle).
mikroE Relay Click:
The current version of MAX-Motion drives two individual relays (Omron G6D-5V), one in a latch-on and the other in a latch-off state, which can be used to trigger lights, an alarm, or an IP (Intrusion-Protection) system.
An HC-SR04 sensor serves as the motion sensor for our PoC; it has been modified to adhere to the 3.3 V logic level of the MAX78000FTHR board.

Software System:

Maxim Integrated provides solid Eclipse-based support for the MAX78000 for programming and debugging (including MinGW), similar to its previous processor lines (MAX32660 and 670), and has published detailed installation instructions and support at its GitHub repo.

Fiddling with MAX78000FTHR:
While the board ships with a pre-trained KWS (keyword spotting) demo, it was well worth exploring each and every example. This also led to an accidental device lockout, but by following the provided instructions in "How to Unlock a MAX78000 That Can No Longer Be Programmed", the device was restored to its normal state. Some tests were done with the CameraIF example to capture a live stream and display it through the Python img_converter utility. To understand the I/O architecture of the board, the GPIO example was modified to suit the project's needs.

System Working:
MAX-Motion continuously polls digital pin P1_6 as an input, which is connected to the data pin of the PIR sensor. When the sensor registers motion activity, P1_6 triggers the on-board camera to capture a frame within its field of view. The frame is converted to the required input format of 64x64 RGB pixels (3x64x64 in CHW format) and fed to the on-board pre-trained pet-classifier CNN (Cats and Dogs). Based on the difference between the entries of the inference result array, a tri-class state is determined: the first class is Cat (with >90% confidence), the second is Dog (with >90% confidence), and the third is Human-Motion. The inference output drives GPIO pins P3_1 and P0_16 respectively for the relay outputs.
For a 3.3 V output on a specific I/O pin, set the supply voltage select field in the pin configuration (mxc_gpio_cfg_t): gpio_output.vssel = MXC_GPIO_VSSEL_VDDIOH; // Added
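For context, that one-line change sits inside a full pin configuration. The fragment below is a sketch against the Maxim MSDK GPIO driver; the field and constant names follow the MSDK gpio.h header, and the port/pin values are taken from the relay pin used in this project.

```c
/* Sketch: configure P3.1 as a 3.3 V (VDDIOH) output for the relay
 * line, using the Maxim MSDK GPIO driver. Without the vssel line,
 * the pin defaults to the lower VDDIO supply. */
mxc_gpio_cfg_t gpio_output;

gpio_output.port  = MXC_GPIO3;              /* port 3 */
gpio_output.mask  = MXC_GPIO_PIN_1;         /* pin 1 */
gpio_output.func  = MXC_GPIO_FUNC_OUT;      /* plain output */
gpio_output.pad   = MXC_GPIO_PAD_NONE;      /* no pull-up/down */
gpio_output.vssel = MXC_GPIO_VSSEL_VDDIOH;  /* 3.3 V supply rail */

MXC_GPIO_Config(&gpio_output);
```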

The Asirra data-set
Web services are often protected with a challenge that's supposed to be easy for people to solve, but difficult for computers. Such a challenge is often called a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) or HIP (Human Interactive Proof). HIPs are used for many purposes, such as to reduce email and blog spam and prevent brute-force attacks on web site passwords.
Asirra (Animal Species Image Recognition for Restricting Access) is a HIP that works by asking users to identify photographs of cats and dogs. This task is difficult for computers, but studies have shown that people can accomplish it quickly and accurately. Many even think it's fun!
Asirra is unique because of its partnership with Petfinder.com, the world's largest site devoted to finding homes for homeless pets. They've provided Microsoft Research with over three million images of cats and dogs, manually classified by people at thousands of animal shelters across the United States. Kaggle is fortunate to offer a subset of this data for fun and research.

Image recognition attacks
While random guessing is the easiest form of attack, various forms of image recognition can allow an attacker to make guesses that are better than random. There is enormous diversity in the photo database (a wide variety of backgrounds, angles, poses, lighting, etc.), making accurate automatic classification difficult. In an informal poll conducted many years ago, computer vision experts posited that a classifier with better than 60% accuracy would be difficult without a major advance in the state of the art. For reference, a 60% classifier improves the guessing probability of a 12-image HIP from 1/4096 to 1/459.

Cats vs Dogs
The Dogs vs. Cats dataset is a standard computer vision dataset that involves classifying photos as either containing a dog or cat.

Although the problem sounds simple, it was only effectively addressed in the last few years using deep learning convolutional neural networks. While the dataset is effectively solved, it can be used as the basis for learning and practicing how to develop, evaluate, and use convolutional deep learning neural networks for image classification from scratch.
This includes how to develop a robust test harness for estimating the performance of the model, how to explore improvements to the model, and how to save the model and later load it to make predictions on new data.

The training archive contains 25,000 images of dogs and cats. By training an algorithm on these files, the system predicts the labels (1 = dog, 0 = cat).
Kaggle Dataset link

Luckily for our scope of work, Maxim provides a Cats and Dogs demo project with live on-board classification for the MAX78000, alongside FaceID, MNIST stream, keyword spotting, and CIFAR-100 examples, plus a training and synthesis procedure for user-specific data that enables rapid prototyping of AI concepts for true edge deployment. Elektor has put great effort into simplifying CNN deployment, along with board-specific tricks, in the follow-up documentation on their website.

Project Github link.

References:
Maxim Integrated Documentation: MAX78000FTHR Evaluation Kit and Data Sheet
Maxim Integrated AI GitHub project
Elektor Article: C. Valens, "AI at the Edge: Getting Started with the MAX78000FTHR", ElektorMagazine.com, 2021.
Elektor Article: L. Lemmens, "AI with the MAX78000: Hardware Essentials", ElektorMagazine.com, 2021.
Elektor Article: M. Claussen, "Making Coffee with the MAX78000 and Some AI", ElektorMagazine.com, 2021.
Elektor Article: M. Claussen, "Making Coffee with the MAX78000 and Some AI (part 2)", ElektorMagazine.com, 2021.
Webinar: "MAX78000FTHR — A Platform for Innovation"