{"id":4665,"date":"2019-10-16T03:25:00","date_gmt":"2019-10-15T18:25:00","guid":{"rendered":"http:\/\/couger.co.jp\/news-en\/?p=4665"},"modified":"2022-12-01T02:30:18","modified_gmt":"2022-11-30T17:30:18","slug":"facebook_open_eds","status":"publish","type":"post","link":"https:\/\/couger.co.jp\/news-en\/2019\/10\/16\/facebook_open_eds\/","title":{"rendered":"Couger wins third place in Facebook Research's VR\/AR Eye Tracking Accuracy Competition"},"content":{"rendered":"<p>Couger Inc. is pleased to announce that the AI model developed by Devanathan Sabarinathan and Dr. Priya Kansal won third place in the \"<strong>OpenEDS Challenge<\/strong>\" hosted by Facebook, and that their paper on the model was accepted for publication at <strong>ICCV<\/strong>, the world's top computer vision conference. The competition evaluated the accuracy of AI models that track a person's gaze and eye movements, a technology expected to improve the performance of smart glasses and other VR\/AR devices.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/couger.co.jp\/news\/wp-content\/uploads\/2019\/10\/\u30b9\u30af\u30ea\u30fc\u30f3\u30b7\u30e7\u30c3\u30c8-2019-10-15-14.21.32-300x191.png\" alt=\"\" class=\"alignnone  wp-image-1362\" srcset=\"https:\/\/couger.co.jp\/news\/wp-content\/uploads\/2019\/10\/\u30b9\u30af\u30ea\u30fc\u30f3\u30b7\u30e7\u30c3\u30c8-2019-10-15-14.21.32-300x191.png 300w, https:\/\/couger.co.jp\/news\/wp-content\/uploads\/2019\/10\/\u30b9\u30af\u30ea\u30fc\u30f3\u30b7\u30e7\u30c3\u30c8-2019-10-15-14.21.32-1024x652.png 1024w, https:\/\/couger.co.jp\/news\/wp-content\/uploads\/2019\/10\/\u30b9\u30af\u30ea\u30fc\u30f3\u30b7\u30e7\u30c3\u30c8-2019-10-15-14.21.32-768x489.png 768w, https:\/\/couger.co.jp\/news\/wp-content\/uploads\/2019\/10\/\u30b9\u30af\u30ea\u30fc\u30f3\u30b7\u30e7\u30c3\u30c8-2019-10-15-14.21.32-1536x978.png 1536w, 
https:\/\/couger.co.jp\/news\/wp-content\/uploads\/2019\/10\/\u30b9\u30af\u30ea\u30fc\u30f3\u30b7\u30e7\u30c3\u30c8-2019-10-15-14.21.32-2048x1304.png 2048w\" sizes=\"auto, (max-width: 694px) 100vw, 694px\" width=\"694\" height=\"442\"><\/p>\n<h3>Background<\/h3>\n<p>The spread of AR\/VR has increased demand for eye tracking, which follows the wearer's gaze (where they are looking) and eye movements while smart glasses are worn. While the hardware specifications of smartphones and other devices have evolved to the point where high-load processing such as gaming and video viewing can be taken for granted, CPU performance is still limited. VR\/AR hardware therefore likewise depends on distributed processing across cloud and edge computing in order to operate reliably for any user in any environment.<\/p>\n<p>Deep learning has already produced success stories in eye tracking. However, <strong>due to hardware resource limitations, machine learning solutions still face challenges in real-time performance<\/strong>.<\/p>\n<p>Furthermore, building a stable and efficient machine learning solution requires large amounts of accurate training data collected from thousands of users in different environments. 
However, collecting such data at that scale is expensive and impractical.<\/p>\n<p>Against this backdrop, Facebook, operator of the Oculus Store, a core VR business already estimated to generate over 10 billion yen in sales, sponsored a competition on the accuracy of such AI models.<\/p>\n<h3>Competition Overview<\/h3>\n<p>The OpenEDS Challenge, sponsored by Facebook, addresses the two issues above through two tracks.<\/p>\n<ol>\n<li>Semantic Segmentation Challenge: eye position estimation in 2D images<\/li>\n<li>Synthetic Eye Generation Challenge: efficient data generation<\/li>\n<\/ol>\n<p>The Couger team participated in the first track, \"Semantic Segmentation,\" and placed third.<br \/>\nEye tracking requires accurate recognition of 2D images: the important eye regions (sclera, iris, and pupil) must be demarcated pixel by pixel from the rest of the eye.<br \/>\nThe ideal solution is accurate, stable, and resource-efficient, so entries were judged on <strong>both model accuracy and model size<\/strong>.<\/p>\n<p>The following approaches were recommended for this challenge.<\/p>\n<ol>\n<li>Semantic segmentation that is accurate and generalizable<\/li>\n<li>Training focused on natural recognition of the human eye region using the OpenEDS dataset*1<\/li>\n<li>Balance between accuracy and model complexity<\/li>\n<li>Use of data synthesis techniques such as UnityEyes and NVGaze<\/li>\n<\/ol>\n<p>*1 OpenEDS dataset: A dataset of eye images, provided by Facebook, collected with a VR device fitted with two cameras facing the eyes.<\/p>\n<h3>About \"EyeNet,\" a proprietary model developed by Couger<\/h3>\n<p>EyeNet is based on SkeletonNet, a skeletal-recognition model that Couger developed and presented at CVPR, another top computer vision conference, held in the United States in July 2019. 
The difficulty was to <strong>keep the model lightweight while maintaining the high accuracy<\/strong> required by the OpenEDS Challenge (the competition required a model size under 2 MB and fewer than 400,000 parameters).<br \/>\nWhile the other top rankers in this competition focused mainly on devising data pre-processing methods to improve recognition accuracy, the Couger team achieved higher accuracy by <strong>using multiple attention mechanisms*2, a technique for determining which parts of the input data to focus on<\/strong>, and by <strong>designing its own model architecture that incorporates methods from the \"Residual Network\"*3, a neural network with world-class accuracy<\/strong> in image recognition. Despite this high accuracy, the model is also ultra-lightweight.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/couger.co.jp\/news\/wp-content\/uploads\/2019\/10\/figure4-300x230.jpg\" alt=\"\" class=\"alignnone size-medium wp-image-1364\" width=\"300\" height=\"230\" srcset=\"https:\/\/couger.co.jp\/news-en\/wp-content\/uploads\/2019\/10\/figure4-300x230.jpg 300w, https:\/\/couger.co.jp\/news-en\/wp-content\/uploads\/2019\/10\/figure4-1024x785.jpg 1024w, https:\/\/couger.co.jp\/news-en\/wp-content\/uploads\/2019\/10\/figure4-768x589.jpg 768w, https:\/\/couger.co.jp\/news-en\/wp-content\/uploads\/2019\/10\/figure4.jpg 1530w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/p>\n<p>Photo: Segmentation output from \"EyeNet\"<\/p>\n<p><strong>Scores for the Couger-developed model \"EyeNet\"<\/strong>:<br \/>\nmIoU: 0.95112 (6.3% improvement)<br \/>\nModel Complexity: 258,021 (38% improvement)<br \/>\nTotal Score: 0.97556 (28% improvement)<\/p>\n<p><strong>Baseline model scores<\/strong>:<br \/>\nmIoU: 0.89478<br \/>\nModel Complexity: 416,088<br \/>\nTotal Score: 0.76240<\/p>\n<p>*2 Attention mechanism: A method for determining which parts of input data to focus on in natural language processing and 
image processing.<\/p>\n<p>*3 Residual Network: A neural network architecture devised by Microsoft Research in 2015.<\/p>\n<p><strong>Reference Information<\/strong><br \/>\nOpenEDS Challenge Official Site<br \/>\n<a href=\"https:\/\/research.fb.com\/programs\/openeds-challenge\/#Announcement_of_the_Challenge_Winners\" rel=\"noreferrer noopener\" target=\"_blank\">https:\/\/research.fb.com\/programs\/openeds-challenge\/#Announcement_of_the_Challenge_Winners<\/a><\/p>\n<p>EvalAI<br \/>\n<a href=\"https:\/\/evalai.cloudcv.org\/web\/challenges\/challenge-page\/353\/leaderboard\/1002\" rel=\"noreferrer noopener\" target=\"_blank\">https:\/\/evalai.cloudcv.org\/web\/challenges\/challenge-page\/353\/leaderboard\/1002<\/a><\/p>\n<p>ICCV2019<br \/>\n<a href=\"http:\/\/iccv2019.thecvf.com\/\">http:\/\/iccv2019.thecvf.com\/<\/a><\/p>\n<p>\u3010Contact\u3011<br \/>\n<strong><a rel=\"noreferrer noopener\" href=\"https:\/\/couger.co.jp\/en\/contact.html\" target=\"_blank\">Inquiry Form<\/a><\/strong><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Couger Inc. is pleased to announce that our AI model by Devanathan Sabarinathan and Dr. Priya Kansal won third place in the \"OpenEDS Challenge\" hosted by Facebook, and their paper on the AI model was accepted for publication at the ICCV, the world's top computer vision conference. This competition is for the accuracy of AI models that track people's gaze and eye movements, a technology that is expected to improve the performance of smart glasses such as VR\/AR. Background The spread of AR\/VR has increased the demand for eye tracking, which tracks eye gaze (where the wearer is looking) and eye movements when wearing smart glasses. 
While hardware specifications for smartphones and other devices have evolved to the point where\u2026<\/p>\n","protected":false},"author":1,"featured_media":1364,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[9],"tags":[],"class_list":["post-4665","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-press"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/couger.co.jp\/news-en\/wp-json\/wp\/v2\/posts\/4665","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/couger.co.jp\/news-en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/couger.co.jp\/news-en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/couger.co.jp\/news-en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/couger.co.jp\/news-en\/wp-json\/wp\/v2\/comments?post=4665"}],"version-history":[{"count":6,"href":"https:\/\/couger.co.jp\/news-en\/wp-json\/wp\/v2\/posts\/4665\/revisions"}],"predecessor-version":[{"id":4739,"href":"https:\/\/couger.co.jp\/news-en\/wp-json\/wp\/v2\/posts\/4665\/revisions\/4739"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/couger.co.jp\/news-en\/wp-json\/wp\/v2\/media\/1364"}],"wp:attachment":[{"href":"https:\/\/couger.co.jp\/news-en\/wp-json\/wp\/v2\/media?parent=4665"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/couger.co.jp\/news-en\/wp-json\/wp\/v2\/categories?post=4665"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/couger.co.jp\/news-en\/wp-json\/wp\/v2\/tags?post=4665"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}