The CIIGAR Logo: a dog holding a cigar, leaning over the word CIIGAR, looking awesome

Canine Instruction with Instrumented Gadgets Administering Rewards

For thousands of years, humankind and canines have lived together across the world. Using a very limited shared vocabulary, humans and dogs have been able to accomplish incredible things. Aside from being cherished pets and family members, dogs play mission-critical roles in society. Explosive detection dogs regularly protect the lives of our military service members, search and rescue dogs find missing persons and survivors of disasters, and guide dogs help people with visual impairments navigate. From diabetes detection to PTSD support to guiding the blind, canines do incredible things on a daily basis.

Canines and humans communicate in very different ways. Whereas humans are primarily vocal communicators, dogs are postural and behavioral communicators, which makes these collaborative accomplishments even more impressive. In the last couple of decades, the ubiquity and evolution of computers have altered nearly every field that they have touched. Our drive is to explore the limitless possibilities that combining dogs and computers introduces.

Along these lines, we attack the problem from three different directions: 1) we examine how computers can be a tool to help interpret canine communication; 2) we explore how computers can be a tool to help scaffold communication to canines; and 3) we leverage knowledge of successful human-canine communication to build more effective machine learning algorithms. All of these efforts require a combined knowledge of applied behavior analysis (the field of study examining how learning occurs), veterinary behavior and medicine, electrical engineering, and computer science. The synergy of these fields outlines a wide space of projects that define a new frontier for animal-computer interaction.

This is a brief, very high-level, overview of what we do with canines in the CIIGAR Lab. We do a lot more too. Please feel free to contact Dr. Roberts if you are interested in this work, have comments or suggestions, or would like to know more.

Frequently Asked Questions

For the CIIGAR Lab, the iBionics Lab, the Veterinary Behavioral Medicine Service, and for our collaborators the health and well-being, both physical and emotional, of dogs is our highest priority. Many of us live with dogs as cherished members of the family. In the work we do, we never knowingly or intentionally cause harm or discomfort to the dogs we work with. Here are answers to some commonly asked questions.

  1. Where do you get the dogs you work with? We work primarily with privately-owned dogs. These dogs are pets; they live with their people and come to work with us on a volunteer basis. Their owners are always present when we work with them, and we never keep dogs overnight.

  2. Isn't the harness too heavy for the dogs? Our smart harness can be configured in a variety of ways depending on what we are trying to accomplish. In some variants, it only weighs a few ounces more than a standard nylon harness. When outfitted with every capability we are working on, it weighs several pounds. Carried by medium- or large-sized dogs, the weight of the harness doesn't exceed 10% of their body weight, which is safe for adult dogs.

  3. Isn't the harness too bulky to be useful? In short, yes! We're well aware of the fact that our prototype harness is too bulky and has too many snagging hazards to be useful outside of the controlled laboratory environment we do most of our work in. Miniaturization is an intense area of interest for us moving forward.

  4. What do the dogs feel when wearing the harness? The behavioral and physiological monitoring we do is "passive," meaning the dogs aren't shaved or stimulated in any way. All they feel when we're monitoring their posture, behavior, or physiology is the harness they are wearing, which feels like any other harness. In the case of providing haptic cues to the dogs, they feel gentle vibrations. These are similar to the feeling of your cell phone going off on vibrating alert. Unlike a cell phone, we have very fine-grained control over the intensity of the vibration, and can tune it to the level appropriate for each individual dog.

  5. How do you train the dogs you work with? We are outspoken proponents of positive-reward-based training. Everything we do with dogs involves lots of treats, toys, and attention from humans. Under no circumstances do we use aversive techniques, physical force, or electric stimulation of any sort to teach a dog.

  6. Is your work reviewed by veterinarians? Yes! One of our team members is a veterinary behaviorist. Additionally, all of our work is reviewed and approved by the Institutional Animal Care and Use Committee (IACUC) at NC State. The IACUC's job is to assure that all uses of vertebrate animals in research are in accordance with the Animal Welfare Act of 1966 and subsequent amendments. You can find more information about the IACUC at NCSU by visiting their website (http://research.ncsu.edu/sparcs/compliance/iacuc/). They are also available to answer any questions you may have regarding this work.

Again, the welfare and well-being of the animals we work with is of paramount importance. Our vision for the technology we're developing is to provide a new and exciting way to improve human-canine communication which will hopefully solidify and increase the already incredible bonds we share. If you have any further questions regarding this work, please don't hesitate to contact the CIIGAR Lab director Dr. David L. Roberts.

Projects

Smart Handle

The smart handle project focuses on the non-visual communication of canine heart and respiratory rates. The main goal is to communicate canine physiological data to a blind handler in real time, to give him or her a better idea of the animal's current physical and emotional state.

Funding Source

NSF

People Involved

Knowledge Engineered Posture Detection

Traditionally, posture recognition algorithms require a set of labelled training data. This project explores recognition algorithms based instead on skeletal models of the animals, reducing the need for large labelled datasets.
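As a rough illustration of the knowledge-engineered idea (this is a hedged sketch, not the lab's actual algorithm): hand-written rules over a simplified skeletal model can separate basic postures without any labelled training data. The joint names, heights, and thresholds below are illustrative assumptions.

```python
# Hypothetical sketch: classify posture from a simplified skeletal model
# (shoulder and hip heights) using hand-written rules instead of labelled
# training data. Joint names and thresholds are illustrative assumptions.

def classify_posture(joints):
    """joints: dict of joint name -> (x, y) position in metres, y = height."""
    shoulder_h = joints["shoulder"][1]
    hip_h = joints["hip"][1]
    torso_height = (shoulder_h + hip_h) / 2.0
    # A standing or sitting dog holds its torso well above the ground plane.
    if torso_height > 0.25:
        # Lowered hindquarters distinguish sitting from standing.
        if shoulder_h - hip_h > 0.1:
            return "sitting"
        return "standing"
    return "lying"

# Example skeletal measurements (metres) for a medium-sized dog.
standing = {"shoulder": (0.0, 0.45), "hip": (0.5, 0.43)}
sitting = {"shoulder": (0.0, 0.45), "hip": (0.4, 0.20)}
lying = {"shoulder": (0.0, 0.12), "hip": (0.5, 0.10)}
```

Because the rules come from knowledge of canine anatomy rather than data, they transfer to a new dog by rescaling the skeleton instead of collecting new labels.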

Funding Source

NSF

People Involved

Automated Training

We are developing a computer-based training system that collects Inertial Measurement Unit (IMU) data from dogs and analyzes it in a novel way to categorize postures. The IMU data is similar to the information a cell phone uses to switch its display between landscape and portrait orientation. Compared to existing classification techniques, our posture categorization algorithm is fast enough to provide real-time feedback to the dog, so the dog associates the desired behavior with the classified posture. Together, the system and posture classification algorithms form a semi-autonomous computer training system for dogs.
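To give a flavor of why IMU-based categorization can be fast enough for real-time feedback (this is a hedged sketch under assumed axis conventions and thresholds, not the lab's published algorithm): the tilt of the gravity vector in a torso-mounted accelerometer reading can be thresholded in constant time per sample.

```python
import math

# Illustrative sketch: categorize posture from a single torso-mounted IMU by
# thresholding the tilt of the measured gravity vector. The axis convention
# (z = dorsal axis, pointing up when the dog stands) and the cutoff angles
# are assumptions for illustration only.

def categorize_posture(ax, ay, az):
    """Accelerometer reading in g units; returns a coarse posture label."""
    mag = math.sqrt(ax * ax + ay * ay + az * az)
    # Angle between the sensor's z axis and gravity.
    tilt_deg = math.degrees(math.acos(az / mag))
    if tilt_deg < 25:
        return "standing"       # torso level, z axis near vertical
    if tilt_deg < 65:
        return "sitting"        # torso inclined
    return "lying_on_side"      # z axis near horizontal
```

Because each reading is classified with a handful of arithmetic operations, the loop from sensor sample to treat-dispensing feedback can run well within the sub-second window needed for a dog to associate the reward with the posture.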

Funding Source

NSF

People Involved

Scent Discrimination

We are developing a novel cyber-physical system that collects physiological and behavioral data from a dog during scent-detection tasks, and that utilizes threshold classifiers to detect canine postures and discriminate ventilation events (sniffing, panting or normal breathing). This will enable the identification of patterns in physiological and behavioral signals that correlate to the presence of a target odor, which can be used in the development of an automated training system that can capture and reinforce a desirable behavioral response upon detection of scents.
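A minimal sketch of the threshold-classifier idea for ventilation events (the frequency cutoffs and the zero-crossing frequency estimator below are illustrative assumptions, not the lab's actual parameters):

```python
import math

# Hedged sketch: discriminate ventilation events by thresholding the
# dominant frequency of a respiration signal. Cutoff values are
# illustrative assumptions, not published canine physiology constants.

def dominant_frequency(samples, fs):
    """Estimate frequency (Hz) by counting zero crossings of a centred signal."""
    mean = sum(samples) / len(samples)
    centred = [s - mean for s in samples]
    crossings = sum(
        1 for a, b in zip(centred, centred[1:]) if a <= 0 < b or b <= 0 < a
    )
    duration = len(samples) / fs
    return crossings / (2.0 * duration)  # two crossings per cycle

def classify_ventilation(freq_hz):
    if freq_hz < 1.0:
        return "normal"      # slow resting breathing
    if freq_hz < 4.0:
        return "panting"
    return "sniffing"        # rapid sniff bursts

# Synthetic 0.5 Hz respiration trace sampled at 50 Hz for 10 seconds.
samples = [math.sin(2 * math.pi * 0.5 * i / 50) for i in range(500)]
est = dominant_frequency(samples, 50)
```

Simple frequency thresholds like these are attractive in a cyber-physical training system because they run on embedded hardware and produce a decision quickly enough to trigger reinforcement while the behavior is still happening.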

Funding Source

NSF

People Involved

Smart America Challenge

A collaborative group, consisting of members of the CIIGAR lab and the Department of Electrical and Computer Engineering's Integrated Bionic Microsystems lab, participated in the Smart America Challenge, a NIST-sponsored event which connected research institutions and organizations to conceptualize and demonstrate next-generation cyber-physical systems. As part of the Smart Emergency Response Systems (SERS) team, the group worked with organizations including Boeing, MathWorks, National Instruments, MIT Media Lab, University of Washington, Worcester Polytechnic Institute, and University of North Texas. SERS was one of the 24 technical teams that attended the challenge and was selected as one of the four highlighted teams to present at the White House on the day before the Smart America Expo. The team developed a smart harness capable of monitoring the physiological condition of a search and rescue dog and their working environment, and of enabling bidirectional communication with the handler even when out of direct sight or earshot.

Funding Source

NSF

People Involved

Wearable Heart Rate Sensor Systems for Wireless Canine Health Monitoring

There is increasing interest from dog handlers and veterinarians in the ability to continuously monitor dogs’ vital signs (heart rate, heart rate variability, and respiratory rate) outside laboratory environments, with the aim of identifying physiological correlates of stress, distress, excitement, and other emotional states. We are developing a non-invasive wearable sensor system combining electrocardiogram (ECG), photoplethysmogram (PPG), and inertial measurement unit (IMU) sensors to remotely and continuously monitor the vital signs of dogs. To overcome the limitations imposed by the efficiently insulated skin and dense hair layers of dogs, we investigated the use of various styles of ECG electrodes and their enhancement with conductive polymer coatings. We also studied the incorporation of light guides and optical fibers for efficient optical coupling of PPG sensors to the skin. Combined with our parallel efforts to use IMUs to identify dog behaviors, these physiological sensors will contribute to a canine body-area network (cBAN) that wirelessly and continuously collects data during canine activities. The long-term goal is to effectively capture and interpret dogs’ behavioral responses to environmental stimuli, yielding measurable benefits to handlers’ interactions with their dogs.
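As a hedged illustration of one step in this pipeline, heart rate can be derived from an ECG trace by thresholded R-peak picking. A real canine ECG needs filtering and adaptive thresholds; the fixed threshold, refractory period, and synthetic trace below are illustrative assumptions.

```python
# Hedged sketch: derive heart rate from an ECG trace by simple thresholded
# peak picking. Threshold and refractory values are illustrative only.

def detect_r_peaks(ecg, fs, threshold=0.5, refractory_s=0.25):
    """Return sample indices of local maxima above `threshold`."""
    peaks = []
    refractory = int(refractory_s * fs)  # ignore re-triggers within one beat
    last = -refractory
    for i in range(1, len(ecg) - 1):
        if ecg[i] > threshold and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]:
            if i - last >= refractory:
                peaks.append(i)
                last = i
    return peaks

def heart_rate_bpm(peaks, fs):
    """Mean heart rate from consecutive R-R intervals."""
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(rr) / len(rr))

# Synthetic trace: one R spike every 0.5 s at 250 Hz sampling -> 120 bpm.
fs = 250
ecg = [0.0] * 1250
for k in range(1, 10):
    ecg[125 * k] = 1.0

peaks = detect_r_peaks(ecg, fs)
bpm = heart_rate_bpm(peaks, fs)
```

The R-R interval list computed along the way is also the raw material for heart rate variability measures, which is why beat-accurate peak detection matters more than the averaged rate alone.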

Funding Source

NSF

People Involved

Canine Inspired Machine Learning

This project looks at how users can train computers and virtual agents to perform tasks using discrete, non-numerical forms of communication, and takes inspiration from techniques used in animal training. We have shown that people use many different strategies when providing feedback to a learner, and may use a lack of feedback to communicate in the same way that they use, for example, explicit rewards. We have developed the SABL algorithm, which allows learning agents to adapt to a user's particular training strategy, enabling them to learn more quickly than other approaches. We have also considered how natural language commands can be learned through positive and negative feedback.

Our work has also considered the different factors that influence how users give feedback when teaching virtual agents. We have looked at how a user's feedback depends on the structure of a task. For example, users may give more feedback when the learner reaches a doorway between different rooms than while it is crossing a room. We have also looked at how the speed at which an agent moves affects the amount of feedback given. We have shown that we can effectively adjust that speed to reflect the agent's confidence in its action, in order to elicit more feedback when needed.
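The core idea of strategy-aware feedback interpretation can be sketched as follows. This is a simplified illustration inspired by SABL, not the published algorithm; the update rule, strategy labels, and learning-rate discount for silence are all assumptions.

```python
# Hedged sketch inspired by strategy-aware learning from feedback: explicit
# feedback and *silence* are interpreted according to an assumed trainer
# strategy. With a reward-focused trainer, no feedback after an action is
# treated as weak evidence the action was wrong. Not the actual SABL update.

def update_value(value, feedback, strategy="reward_focused", lr=0.2):
    """Shift an action-value estimate in [0, 1] toward the feedback signal."""
    if feedback == "reward":
        target = 1.0
    elif feedback == "punish":
        target = 0.0
    else:  # silence: its meaning depends on the trainer's strategy
        target = 0.0 if strategy == "reward_focused" else 1.0
        lr *= 0.5  # silence is weaker evidence than explicit feedback
    return value + lr * (target - value)
```

The point of the sketch is the asymmetry: the same absence of feedback nudges the estimate down for a reward-focused trainer and up for a punishment-focused one, which is why an agent that infers the trainer's strategy can learn faster than one that treats silence as uninformative.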

Funding Source

NSF

People Involved

Continuous Reinforcement Learning Over Long Time Horizons

Significant progress has been made in applying reinforcement learning algorithms to continuous, high-dimensional domains. Algorithms such as fitted Q-iteration allow agents to learn about complex environments and plan their actions accordingly. These algorithms can struggle, however, when faced with tasks that require a large amount of time and a large number of actions to complete. When using approximate representations of the learning problem, reinforcement learning algorithms can fail to accurately account for the long-term effects of their actions. This project seeks to develop reinforcement learning algorithms that are specifically suited to long-term planning. In particular, we are interested in modifications to fitted Q-iteration and fitted policy iteration with neural network representations.
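To make the fitted Q-iteration loop concrete, here is a toy sketch on a made-up four-state chain task, using an exact lookup table as the "function approximator". In the continuous, high-dimensional settings this project targets, the fit step would use a regressor such as a neural network instead of a table.

```python
# Minimal fitted Q-iteration sketch on a toy 4-state chain MDP. The batch of
# transitions is fixed in advance, and each iteration "fits" a new Q function
# to Bellman-backup targets computed from the previous one.

GAMMA = 0.9
STATES, ACTIONS = range(4), (-1, +1)   # move left / right along the chain

def step(s, a):
    s2 = min(max(s + a, 0), 3)
    r = 1.0 if s2 == 3 else 0.0        # reward only at the right end
    return s2, r

# A batch of transitions covering every (state, action) pair.
batch = [(s, a, *step(s, a)) for s in STATES for a in ACTIONS]

def fitted_q_iteration(batch, n_iters=50):
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(n_iters):
        # Regression targets from the current Q estimate (Bellman backup).
        targets = {
            (s, a): r + GAMMA * max(q[(s2, b)] for b in ACTIONS)
            for s, a, s2, r in batch
        }
        q = targets                    # "fit" step: exact table lookup here
    return q

q = fitted_q_iteration(batch)
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

Even in this tiny example, the value of the rewarding state has to propagate backward one state per iteration, hinting at the long-horizon difficulty: with an approximate regressor instead of an exact table, small fitting errors compound across the many backups needed to credit actions far from the reward.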

Funding Source

NSF