Fine Motor Skills Learning and Refinement Through Machine Learned Exemplars, 10-R6016

Principal Investigator
Inclusive Dates 
01/01/20 to 04/01/21

Background

In recent years, machine learning and artificial intelligence techniques have been used in both academia and industry to create Intelligent Learning Environments that provide support to move beyond problem solving and into teaching and improving skills. These technologies can accelerate the learning process, offering adaptive, autonomous, and individualized feedback for trainees. Accurately assessing motions against an expert exemplar can support training in athletic movements, human-robot interaction, and fine motor skills. For this research, American Sign Language (ASL) was chosen as a fine motor skill task with clearly defined goals and easily obtainable ground truth.

Approach

The initial steps of the system, in both development and deployment, were to capture data of a subject performing ASL signs and convert it into a kinematic representation. First, the sign was captured using cameras, and the captured motion was processed with a pose estimation model, a machine-learning model that identifies keypoints of the observed body, in this case the joints of both hands. The keypoints were then converted into a 3D representation using multi-view geometry or another machine-learning model. From the 3D keypoints, an inverse kinematic optimization algorithm estimated the kinematic parameters (e.g., joint angles) of the hand. The set of these kinematic parameters over time constitutes the kinematic representation of a motion.
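
The sketch below illustrates the inverse-kinematics step in simplified form: joint angles are fit so that a forward model reproduces observed 3D keypoints. The two-segment planar finger model and its segment lengths are placeholders for illustration only, not the project's hand model.

    # Minimal sketch of the inverse-kinematics step: fit joint angles so that a
    # toy forward model reproduces observed 3D keypoints.
    import numpy as np
    from scipy.optimize import minimize

    SEGMENT_LENGTHS = [0.04, 0.03]  # metres, illustrative values only

    def forward_kinematics(joint_angles):
        """Toy planar finger: returns base/joint/tip positions for two flexion angles."""
        points, position, heading = [np.zeros(3)], np.zeros(3), 0.0
        for length, angle in zip(SEGMENT_LENGTHS, joint_angles):
            heading += angle
            position = position + length * np.array([np.cos(heading), np.sin(heading), 0.0])
            points.append(position)
        return np.array(points)  # shape (3, 3)

    def fit_joint_angles(observed_keypoints, initial_angles=np.zeros(2)):
        """Estimate joint angles minimizing squared keypoint error (one frame)."""
        def residual(angles):
            return np.sum((forward_kinematics(angles) - observed_keypoints) ** 2)
        return minimize(residual, initial_angles, method="L-BFGS-B").x

    # Example: recover angles from keypoints generated with known ground truth.
    target = forward_kinematics(np.array([0.3, 0.5]))
    print(fit_joint_angles(target))  # approximately [0.3, 0.5]

Repeating this fit frame by frame yields the joint-angle trajectory used as the kinematic representation.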

Using this process, a system was trained on video data of four human experts performing 25 selected signs that overlapped with two publicly available online datasets. This combined dataset served as exemplar motions that were then encoded via a one-shot learning embedding. During training, the embedding was optimized so that similar motions were grouped close together in the embedding space while dissimilar motions were pushed farther apart. The result was the ability to reduce the kinematic representation of a motion to a much lower-dimensional space.
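
One common way to realize such an embedding is a small network trained with a triplet loss, sketched below. The layer sizes, embedding dimension, input dimension, and the choice of triplet loss are illustrative assumptions rather than the project's exact architecture.

    # Hedged sketch: learn an embedding where motions of the same sign map close
    # together and motions of different signs map apart (triplet loss).
    import torch
    import torch.nn as nn

    class MotionEmbedder(nn.Module):
        def __init__(self, input_dim, embed_dim=2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(input_dim, 128), nn.ReLU(),
                nn.Linear(128, embed_dim),  # low-dimensional embedding
            )

        def forward(self, kinematic_vector):
            return self.net(kinematic_vector)

    embedder = MotionEmbedder(input_dim=60)  # e.g., flattened joint angles (assumed size)
    loss_fn = nn.TripletMarginLoss(margin=1.0)
    optimizer = torch.optim.Adam(embedder.parameters(), lr=1e-3)

    # anchor/positive: two motions of the same sign; negative: a different sign.
    anchor, positive, negative = (torch.randn(8, 60) for _ in range(3))
    loss = loss_fn(embedder(anchor), embedder(positive), embedder(negative))
    optimizer.zero_grad(); loss.backward(); optimizer.step()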

When a trainee employed the system, their motions were mapped into the exemplar embedding space, and the nearest exemplar was found. The system then solved an optimization for the necessary modifications to the novice motion that would move it closer to the exemplar motion. The output was a corrected kinematic representation that was communicated to the user through a 3D avatar. Figure 1 shows the overall workflow of this process.
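
A simplified sketch of this feedback step follows: the trainee's motion is matched to its nearest exemplar in embedding space, and the trainee's joint-angle trajectory is nudged toward that exemplar. The plain interpolation used here is a stand-in for the project's optimization over motion modifications, and all data shapes are assumed.

    # Hedged sketch of the feedback step: nearest-exemplar lookup plus a simple
    # correction toward the matched exemplar motion.
    import numpy as np

    def nearest_exemplar(trainee_embedding, exemplar_embeddings):
        """Index of the exemplar closest to the trainee in embedding space."""
        distances = np.linalg.norm(exemplar_embeddings - trainee_embedding, axis=1)
        return int(np.argmin(distances))

    def corrected_motion(trainee_kinematics, exemplar_kinematics, step=0.25):
        """Move the trainee's joint-angle trajectory a fraction of the way toward
        the matched exemplar; the result is rendered on the 3D avatar."""
        return trainee_kinematics + step * (exemplar_kinematics - trainee_kinematics)

    # Example with stand-in data: 10 exemplars, 2D embeddings, 50-frame motions.
    exemplar_embeddings = np.random.rand(10, 2)
    exemplar_motions = np.random.rand(10, 50, 20)  # (exemplar, frame, joint)
    trainee_embedding = np.random.rand(2)
    trainee_motion = np.random.rand(50, 20)

    idx = nearest_exemplar(trainee_embedding, exemplar_embeddings)
    feedback_motion = corrected_motion(trainee_motion, exemplar_motions[idx])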


Figure 1: Generating Exemplar-Based Feedback Concept

Accomplishments

Figure 2: Scatter plot showing exemplars in learned 2D embedding.

Our learned feature embeddings, displayed in Figure 2, satisfy the desired properties of our encoding space. In most cases, classes are clearly separated as expected, and the groupings that lie closer to each other share many similarities. In general, we observed that the confused signs were often those that share similar hand shapes, which was expected given how our system represents the data. While we did account for the rotation and translation of the hand in the capture space, we did not consider other contextual information, such as the position of the signer's face or body. To evaluate the accuracy of our approach, we applied a k-Nearest Neighbor (kNN) classifier to the embeddings. After producing confusion matrices, we found that the kNN classifier achieved 95.3% accuracy, exceeding our stated goal of 90%.
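
This evaluation can be sketched with scikit-learn as follows; the synthetic embeddings, labels, and the value of k are placeholders, and the 95.3% figure comes from the project's own data rather than this example.

    # Minimal sketch of the reported evaluation: a kNN classifier over learned
    # embeddings, with a confusion matrix and accuracy score.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import confusion_matrix, accuracy_score

    embeddings = np.random.rand(500, 2)          # stand-in for learned 2D embeddings
    labels = np.random.randint(0, 25, size=500)  # 25 sign classes

    X_train, X_test, y_train, y_test = train_test_split(
        embeddings, labels, test_size=0.2, stratify=labels, random_state=0)

    knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
    predictions = knn.predict(X_test)

    print(confusion_matrix(y_test, predictions))
    print("accuracy:", accuracy_score(y_test, predictions))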
