Daily News Digest: Featured News

Combining collaborative robots with AR, Ray project tackles real-time ML, computer vision works on home decor


Top Stories




News Summary

Augmented Reality

UMD researchers’ augmented reality technology could help doctors in the operating room (DBK News)

A team of five physicians and researchers, including Amitabh Varshney, a computer science professor and Augmentarium director, publicly demonstrated augmented reality technology designed to assist in intubation — putting a tube down a patient’s airway — and ultrasounds to a small crowd at the Newseum on March 27. The software used in the demonstration, which ran on Oculus and Hololens headsets, was developed at the Augmentarium, Varshney said.

The demonstrated augmented reality technology projects real-time information from the ultrasound onto the user’s field of view, said Barbara Brawn-Cinani, associate director at the university’s Center for Health-related Informatics and Bioimaging. This allows medical staff to see ultrasound images, for example, at the same time they’re looking at the patient, rather than having to repeatedly look away at small screens displaying the images, Brawn-Cinani said.

Meitu’s BeautyPlus Introduces Augmented Reality to Selfies (PR Newswire)

Meitu, Inc. (SEHK: 1357), a leading mobile internet company with its apps installed on more than 1.1 billion unique devices worldwide, today announced the addition of Augmented Reality video and photo filters to its flagship selfie-editing app, BeautyPlus.

Some of the most popular BeautyPlus features include:

  • Advanced Facial Recognition and Skin Tone Detection
  • Skin Retouching and Smoothing
  • Eye and Smile Perfecting
  • AnimeCam
  • Magic Brush

Combining Collaborative Robots and Augmented Reality (AutomationWorld)

At the Universal Robots booth at Automate in Chicago, I had the opportunity to see how collaborative robots could work alongside humans, with the augmented reality (AR) experience delivering instructions to guide the operator through the tasks that need to be completed. In some cases, the AR is delivered via textual instructions projected into the operator’s workspace. It is also delivered via flashing lights and shapes that direct the operator’s attention and actions to specific work areas.

How Salem’s Augmented Reality Exhibit is Actually a Social Mission (BostInno)

From May 27 to Nov. 30, the Salem Maritime National Historic Site will feature 10 augmented reality sculptures in a free exhibition that’s open to the public. The exhibits will be positioned throughout the park via GPS and will be viewable using the augmented reality app Layar. For visitors who don’t have a smart device with them, there will be phones and tablets available for checkout.

The work will be curated by five internationally acclaimed artists: John Craig Freeman, Kristin Lucas, Will Pappenheimer, Mark Skwarek, and Tamiko Thiel.

Natick Researchers Say Augmented Reality May Help Soldiers (Natick Patch)

The team is investigating how augmented reality, or AR, may help soldiers improve their mission-planning skills.

Researchers say that as opposed to the 2-D approach, AR-based mission planning with 3-D maps and models gives soldiers a way to tailor their plans to fit their own preferences and those of their team members. According to the announcement, doing so could improve cognitive performance at both the individual and group level, leading to improved mission-planning outcomes and, ultimately, enhanced mission effectiveness.

EON Reality’s AVR Platform Makes Virtual Reality And Augmented Reality Application Development Easy And Strengthens The Man And Machine Connection (EON Reality)

EON Reality Inc., the world leader in Virtual Reality based knowledge transfer for industry, education, and edutainment, announces the release of new features for the EON AVR Platform that enhance key functionality and improve Augmented and Virtual Reality application creation for non-technical users: IoT integration, enhanced features for Enterprise customers, enhanced multi-user support, and features to support spatial tracking for Microsoft’s HoloLens and Google Tango.

Enterprise offerings include Enterprise Virtual Training (EVT) functionality, which allows for monitored virtual training by an instructor integrated with backend enterprise systems, and an enterprise version of the AVR platform designed for the needs of corporations. The AVR Platform also includes support for next generation tracking technologies which power markerless Augmented Reality. This enables the AVR Platform to position Augmented Reality objects in the middle of the room or add annotations to real world objects.

The EON AVR Platform also has add-on options for Artificial Intelligence and Geo-positioning to create unique knowledge transfer applications that illuminate the world with contextual information, or Virtual Reality applications that let students and trainees learn faster, remember longer, and decide better.

Core Platform Components

  • EON Creator AVR 2.0 Enterprise and EON Creator AVR 2.0 Education are easy content-creation builders that empower non-technical users to create compelling AR and VR applications.
  • EON Coliseum AVR Multi-user allows users to be immersed and collaborate inside the same experience (AR or VR) and to interact remotely over the web, with both the content and each other, via gaze, VoIP, and VR controllers.
  • The AR, VR, Mobile, and 360 Enterprise and Education frameworks support interactive 3D objects/environments and CGI or interactive 360 videos with AVR functionality.
  • EON Experience AVR Library consists of VR and AR apps and models for various industries including Aerospace, Education (such as vocational training, anatomy, biology, geography, history, physics and astronomy), Edutainment, Energy, Government, Manufacturing, Medical, Real Estate, Retail, Security and Sports


Gravity Jack Announces PoindextAR Technology, Unlocking AR For Any Real World Object (PR Web)

PoindextAR, a neural network-based computer vision technology revolutionizing how augmented reality is created and experienced, gives any device with a camera the ability to instantly understand, in real time, any real world object and the pose of that object — regardless of size, texture, movement, lighting conditions or other common augmented reality challenges.

PoindextAR is hardware agnostic and deployable to current mobile phones or augmented reality head mounted displays (HMD). This means that the technology is capable of not only understanding the presence of an object within a field of view, but also its precise position and placement within the scene.

Artificial Intelligence/Machine Learning

Machine learning is now used in Wall Street dealmaking, and bankers should probably be worried (Business Insider)

Included in the letter alongside talk of robotics, new design standards and a private cloud platform was reference to the Emerging Opportunities Engine. The letter said:

“We also use machine learning to drive predictive recommendations for Investment Banking. Last year, we introduced the Emerging Opportunities Engine, which helps identify clients best positioned for follow-on equity offerings through automated analysis of current financial positions, market conditions and historical data. Given the initial success of the Emerging Opportunities Engine in Equity Capital Markets, we are expanding it to other areas, like Debt Capital Markets, similarly basing predictions on client financial data, issuance history and market activity.”

Protagonist Platform launches to uncover customer narratives using machine learning (VentureBeat)

Today, Protagonist — a narrative analytics company that was formerly known as Monitor 360 — has announced the Protagonist Platform. It claims to be the first software platform specifically designed to analyze complex, cross-platform datasets to reveal the underlying beliefs and motivations of consumers.

To uncover those user-generated stories, the Protagonist Platform leverages the latest techniques, including machine learning and natural language processing (NLP).

“First, we use unsupervised machine learning to find similarities between different documents and chunks of text based on narrative patterns,” Aaron Harms, EVP of product, technology, and finance at Protagonist, told me. “This enables us to develop narrative hypotheses for a new discourse/conversation, and assess what documents are likely to be promoting the same underlying narrative. These narrative clusters are bound together by a range of features that capture the essence of a belief’s enduring logic and structure.”


“In short, a series of unsupervised, semi-supervised, and supervised algorithms work in unison with our narrative experts to detect, cluster, and classify the clusters of narratives that exist in all kinds of natural language media content,” Harms said. “The marketer now has a data-driven understanding of the narratives that govern the beliefs of customers, constituents, and citizens, and an ability to track and model the ways in which these narratives unfold over time. These insights can be used to energize and protect a brand, understand and activate a target audience, and win a narrative battle that is playing out in the public discourse.”
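Protagonist’s pipeline is proprietary, but the unsupervised first step Harms describes, grouping documents whose text points at the same underlying narrative, can be illustrated in miniature. The sketch below uses plain bag-of-words cosine similarity and a greedy single-pass grouping; the documents and the threshold are hypothetical stand-ins, not Protagonist’s narrative features:

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def cluster(docs, threshold=0.3):
    """Greedy single-pass clustering: each document joins the first
    cluster whose centroid it resembles, else starts a new cluster."""
    vectors = [Counter(d.lower().split()) for d in docs]
    clusters = []  # list of (centroid Counter, member indices)
    for i, v in enumerate(vectors):
        for centroid, members in clusters:
            if cosine(v, centroid) >= threshold:
                centroid.update(v)  # fold the doc into the centroid
                members.append(i)
                break
        else:
            clusters.append((Counter(v), [i]))
    return [members for _, members in clusters]

docs = [
    "big banks are abandoning main street customers",
    "banks abandon main street as branches close",
    "new fitness tracker monitors heart rate all day",
]
print(cluster(docs))  # the first two docs land in one cluster
```

A production system would swap the word counts for learned embeddings and add the semi-supervised and supervised passes Harms mentions, but the shape of the computation is the same: measure textual similarity, then group.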

Zorroa Launches Machine Learning Made Simple for the Enterprise (Yahoo! Finance)

Zorroa Corp. today announced the release of its Enterprise Visual Intelligence (EVI) platform, a unique machine learning technology that provides visual search and business analytics for images, video, PDFs, and other media.

Zorroa EVI includes a powerful visual-processing pipeline that rapidly ingests assets from anywhere in the enterprise. It then incorporates machine learning, artificial intelligence, and other advanced vision algorithms to analyze them. The processors within the Zorroa environment are easily tuned to the specific needs of each customer, supporting a variety of tailored use cases, such as pinpoint similarity searches, facial recognition, handwriting analysis, and other scenarios.

Machine learning predicts the look of stem cells (Nature)

The project began about a year ago with adult skin cells that had been reprogrammed into an embryonic-like, undifferentiated state. Horwitz and his team then used CRISPR–Cas9 to insert tags in genes to make structures within the cells glow. The genes included those that code for proteins that highlight actin filaments, which help cells to move and maintain their shape. It quickly became clear that the cells, which were all genetic clones from the same parent cell, varied in the placement, shape and number of their components, such as mitochondria and actin fibres.

Computer scientists analysed thousands of the images using deep learning programs and found relationships between the locations of cellular structures. They then used that information to predict where the structures might be when the program was given just a couple of clues, such as the position of the nucleus. The program ‘learned’ by comparing its predictions to actual cells.

The deep learning algorithms are similar to those that companies use to predict people’s preferences, Horwitz says. “If you buy a chainsaw at Amazon, it might then show you chain oil and plaid shirts.”

The 3D interactive tool based on this deep learning capability should go live later this year. At the moment, the site shows a preview of how it will work using side-by-side comparisons of predicted and actual images.



Medable Launches Cloud-Based Machine Learning Solution for Healthcare (Yahoo! Finance)

MEDABLE Inc., the leading application and analytics platform for healthcare, today announced Cerebrum, the first cloud-based machine learning solution created specifically for healthcare apps. Cerebrum pairs data-gathering smartphones with a first-of-its-kind machine learning engine, making health events easier to predict, such as warning an elderly relative when he is at greatest risk of a fall, or preventing an asthmatic child from triggering a life-threatening episode.

Cerebrum provides machine learning across the ecosystem of clinical study data, including standard clinical instruments and patient-reported outcomes data, meta-data from mobile devices, connected devices, and genomic and epigenomic data. The adoption of cloud computing in the healthcare industry has been slow due to security, regulatory, and compliance concerns. Healthcare providers have also spent enormous effort on data gathering and data preparation, but they have struggled to make use of the data itself. Medable’s solutions address all of these concerns.

This Recycling Robot Uses Artificial Intelligence To Sort Your Recyclables (Forbes)

“The differentiator in Clarke is in its computer vision system,” said Horowitz. “The AMP Cortex inside the robot utilizes advances in deep learning that first emerged in 2012 and which allowed robots to understand the environment around them.”

Ray project tackles real-time machine learning (InfoWorld)

RISELab, the successor to the U.C. Berkeley group that created Apache Spark, is hatching a project that could replace Spark—or at least displace it for key applications.

Ray is a distributed framework designed for low-latency real-time processing, such as machine learning. Created by two doctoral students at RISELab, Philipp Moritz and Robert Nishihara, it works with Python to run jobs either on a single machine or distributed across a cluster, using C++ for components that need speed.

Python scripts submit and execute jobs, and Ray uses Python’s syntax features to express how objects and jobs operate. If you decorate a function with @ray.remote, that indicates the function can be executed asynchronously across the cluster. When the function is run, it immediately returns an object ID, which can be looked up later to obtain any finished result generated by the function. Ray’s documentation shows how Python’s list comprehensions can be combined to run a series of functions and return the results automatically.

How machine-learning startup Jemsoft turned a tragic situation into a viable business (ZDNet)

One Monday afternoon in April 2013, 19-year-old Jordan Green was working in a liquor store in Adelaide, Australia, when two men in balaclavas holding a shotgun entered the store, jumped the counter, held the gun to his head, and demanded his co-worker open the store’s safe.

Studying computer science and already working on a number of projects with Jemsoft co-founder Emily Rich, Green went to Rich with the idea of developing a platform that could prevent situations like the one he had found himself in.

In building the product, Green and Rich had to annotate hundreds of thousands of images and videos of armed hold-ups, which sometimes involved playing dress-ups and reenacting robberies.

Solido Launches Machine Learning (ML) Characterization Suite (Broadway World)

Solido Design Automation, a leading provider of variation-aware design and characterization software, today announced the immediate release of its Machine Learning (ML) Characterization Suite. This new product uses machine learning to significantly reduce standard cell, memory, and I/O characterization time, helping semiconductor designers meet aggressive production schedules.

SAS Launches Machine Learning Platform SAS Viya (Destination CRM)

SAS launched a machine learning platform, SAS Viya, and, with partner Cisco, the Cisco SAS Edge-to-Enterprise IoT Analytics Platform. Aimed at all skill levels, SAS Viya is a cloud-ready, scalable, open platform for modern machine learning that supplements SAS 9; it extends SAS 9’s capabilities with features including Visual Analytics for business users, Visual Statistics for business analysts, Visual Data Mining and Machine Learning for data scientists, and Visual Investigator for intelligence analysts.

Sage BotCamp to teach artificial intelligence skills to millennials (CBR Online)

Sage is targeting 16-25 year olds, including school leavers and millennials, with a new initiative looking to teach skills in artificial intelligence (AI) and bots.

The Sage Foundation plans to set up what it is calling a ‘BotCamp’, a programme that will facilitate the training of over one hundred young people within the specified age bracket, aiming to encourage the pursuit of skills that are crucial to the future.

4 machine learning breakthroughs from Google’s TPU processor (InfoWorld)

  1. Google’s TPUs, like GPUs, address division of machine learning labor – With Google’s TPU, phase three is handled by an ASIC, which is a custom piece of silicon designed to run a specific program.
  2. Google’s TPU hardware is secret — for now – Right now, Google is using this custom TPU hardware to accelerate its internal systems. The feature isn’t yet available through any of its cloud services. And don’t expect to be able to buy the ASICs and deploy them in your boxes.
  3. Google’s custom-silicon approach isn’t the only one – FPGAs can perform math at high speed and with high levels of parallelism, both of which machine learning needs at almost any stage of its execution. FPGAs are also cheaper and faster to work with than ASICs out of the box, since ASICs have to be custom-manufactured to a spec.
  4. We’ve barely scratched the surface with custom machine learning silicon – Google claims in its paper the speedups possible with its ASIC could be further bolstered by using GPU-grade memory and memory systems, with results anywhere from 30 to 200 times faster than a conventional CPU/GPU mix.

Computer Vision/Machine Vision

Matrox to debut dot-matrix text reading technology at UKIVA (Packaging World)

Matrox Iris GTR is a rugged, IP67-rated camera measuring just 75mm x 75mm x 54mm, which allows it to fit into tight and harsh spaces. It uses On Semiconductor PYTHON CMOS image sensors with high readout rates and an Intel Celeron dual-core embedded processor.

Matrox 4Sight GPm is a vision controller featuring a unique combination of embedded PC technology, compact size, and ruggedness, which makes it an ideal solution for machine vision applications.

Grokstyle is putting computer vision to work on home decor with M in funding (TechCrunch)

The basic idea is this: you open an app or web interface, and upload or take a picture of, say, a chair or lamp you like. Any angle, any style. The Grokstyle service immediately returns the closest matches, either including the object itself or ones very like it.
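Under the hood, services like this typically work by mapping each image to an embedding vector and returning the catalog items whose vectors are nearest the query’s. The toy sketch below uses hand-made three-dimensional vectors in place of a real network’s output; the catalog items, vectors, and function names are all hypothetical:

```python
from math import sqrt

# Hypothetical catalog: item name -> embedding vector. In a real
# system these vectors come from a trained convolutional network.
catalog = {
    "mid-century armchair": [0.9, 0.1, 0.3],
    "modern floor lamp":    [0.1, 0.8, 0.2],
    "leather armchair":     [0.8, 0.2, 0.4],
}

def euclidean(a, b):
    """Straight-line distance between two embedding vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def closest_matches(query, k=2):
    """Return the k catalog items nearest to the query embedding."""
    return sorted(catalog, key=lambda name: euclidean(catalog[name], query))[:k]

# Embedding of the user's uploaded photo (hypothetical values).
photo = [0.88, 0.12, 0.32]
print(closest_matches(photo))  # the two armchairs, nearest first
```

At catalog scale, the linear scan would be replaced by an approximate nearest-neighbor index, but the retrieval logic is the same: embed the photo, rank by distance, return the top matches.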

DSP Group and Emza Visual Sense Partner to Create Commercially Available Ultra-Low Power, Always-On Vision Sensor for Residential Security and Smart Buildings (Globe Newswire)

DSP Group®, Inc. (NASDAQ:DSPG), a leading global provider of wireless chipset solutions for converged communications, and Emza Visual Sense, announced today the industry’s first battery powered intelligent always-on visual sensor specifically designed to overcome the power and cost constraints of computer vision processing for residential security and smart buildings applications.

The WiseEye IoT sensor solution is purpose-built from the ground up with always-on, low-power visual sensing in mind. It combines Himax’s unique low-power CMOS image sensor, Emza’s unique machine vision algorithms, and DSP Group’s ultra-low-power always-on processor. The result is a commercially available battery-operated module, capable of detecting, tracking, classifying, and understanding the context of its environment in an extremely efficient manner, using only a few milliwatts of power and allowing years of operation on a standard battery.

“Computer Vision based sensors enhance the intelligence and functionality of any device, and our WiseEye IoT solution aims at bringing these capabilities to the vast and largely untapped IoT space,” said Yoram Zylberberg, CEO of Emza Visual Sense. “In DSP Group we found a leading expert in low power and high volume always on processors, and the ideal partner to realize the industry’s first commercially available always-on intelligent visual sensor.”

MotionDSP Launches Ikena Cloud API for AWS-Based Video Processing (Globe Newswire)

MotionDSP, a leading provider of image processing and computer vision software for video, today announced the launch of Ikena® Cloud, the company’s new API for cloud-based processing of video files and live video streams.

Ikena Cloud uses MotionDSP’s suite of image processing and computer vision libraries from its flagship Ikena ISR real-time video processing application, and delivers them in a scalable, Amazon Web Services (AWS)-based service via a REST API.

Gebo Cermex to Show “Smart Machine” Vision (Packaging Europe)

The ‘smart packing machine’ on the Gebo Cermex booth at interpack 2017 will feature four main innovative modules.

Utilizing Rockwell Automation’s iTRAK® technology and capable of achieving speeds of up to 400 products per minute depending on package size, shape and weight, the CareSelect system easily surpasses traditional ‘endless screw’ collation systems in terms of bottle integrity and protection.

Also featured on booth C47 (Hall 13) will be FlexiLoad™, the reliable robotic solution for magazine loading, suitable for any case packing system regardless of type and speed. This eliminates the need for time-consuming manual corrugated-board magazine feeding and, importantly, the potential for operators’ musculoskeletal disorders (MSDs).

This latest version of the WB46 will feature the company’s brand-new, user-friendly human machine interface (HMI), which is based on an intuitive, tablet-approach navigation and offers rich media tools for preventive maintenance procedures.

Also available as part of the smart packing solution is the company’s Equipment Smart Monitoring (ESM) system, which connects to the machine to read, transmit, and organize performance data into a coherent dashboard. This system helps customers maximize the efficiency of component machines within their packaging lines (OEE): by operating at individual machine level and collecting a continuous stream of data via a connected measuring device, ESM gathers and analyses data to generate a number of key indicators.

Teledyne DALSA Introduces Infrared Camera Series for Industrial Vision Applications (EIN News)

Teledyne DALSA today launched its Calibir GX series of long wave infrared (LWIR) cameras for industrial vision applications. Built to achieve frame rates of up to 90 fps, the shutterless, small-form-factor GX series is ideal for non-destructive testing in applications that include food inspection, parts and packaging, and electronics inspection.


Allen Taylor

About the author


An award-winning journalist and former newspaper editor, I currently work as a freelance writer/editor through Taylored Content. In addition to editing VisionAR, I edit the daily news digest at Lending-Times. I also serve the FinTech and AR/AI industries with authoritative content in the form of white papers, case studies, blog posts, and other content designed to position innovators as experts in their niches. I am also a published poet and fiction writer.
