Daily News Digest

Featured News

AR visor for surgical interventions, Autonomous spacecraft docking with machine learning, Custom image recognition with Salesforce


Top Stories: AR | AI | CV

News Summary

Augmented Reality

How Augmented-Reality Glasses Work for Business (WSJ)

MR. OSTERHOUT: Inside you have two screens that essentially are photorealistic in terms of image quality. They adjust automatically to ambient conditions. You also have the lenses in front that either get dark or light, depending on the conditions around you, whether you are inside or out.

They [Pepsi] had bought glasses, so they told the guys in Germany to turn them on. That allowed Pepsi’s people in the U.S. to see what the engineer in Germany was seeing when he went out on the line to investigate. The guys in the U.S. could say: “What you need to do is reset the breakers. Do this.” They could give instructions because they could see what [the engineer] was seeing halfway around the world in real time. That’s killer.

With this technology—and lighting and stereo cameras above—they can go in and do the procedure guided by a computer. They can literally see CT scans and MRIs superimposed. They can cut the procedure time—you won’t believe it, but it’s going to be published—to 30 to 45 seconds from 30 to 45 minutes.

Savannah College of Art and Design Introduces Groundbreaking Augmented Reality College Experience (Yahoo! Finance)

Savannah College of Art and Design has pioneered the use of augmented reality technology to create the first AR-driven college catalog produced and designed entirely by the university. With more than 85 trackable pages including the cover, over 150 digital assets, and more than 200 micro-interactions, users of SCAD’s catalog can view videos of students’ creative sessions, play student-designed video games, tour residence halls, learn about degree programs, profile real-world careers, chat live with admission staff members and watch videos of SCAD’s signature annual events.

Emerson Brings Industrial IoT to Life with Integrated Operations and Augmented Reality at CERAWeek 2017 by IHS Markit

New automation and digitization technologies promise to deliver measurable business improvements to oil and gas companies. Emerson (NYSE:EMR) will demonstrate the “ROI from Big Data” investments at CERAWeek’s first-ever Technology Agora, which will explore the critical role of technology and innovation in the global energy future.

At the iOps Command Center, Emerson, which has decades of digital automation experience, will demonstrate how a real-world shale gas operation harnesses the power of Industrial IoT to expand digital intelligence to the entire manufacturing enterprise. It will provide a clear business case for Industrial IoT investment, with an emphasis on targeting specific business challenges and achieving measurable success with limited investment.

BMW’s i brand lets you preview your car in augmented reality (Autofocus)

Using a Tango-enabled device, of which there aren’t yet many, shoppers using the BMW I Visualiser app can explore the entirety of the i3 or i8, including opening the doors and trunk, and even “getting in” the car for a better look at the interior. And like many current car configurators, BMW’s app will let you see the car wearing different interior and exterior colours and wheels.

10 Unique Things You Can Do With Augmented Reality (YouTube)

Shazam Launches First Scaled Augmented Reality Solution for Brands Worldwide (Shazam)

Shazam’s new platform can bring any marketing materials to life—products, packaging, POS, advertising, events and more—just by utilizing the app to scan unique “Shazam Codes.” The codes are capable of delivering AR experiences including 3D animations, product visualizations, mini-games and 360-degree videos.

Volcanic launches world’s first Augmented Reality job board app (Onrec.com)

The app lets jobseekers pre-load a CV to enable one-click applications on the spot. Point Jobs also allows employers and recruiters to add their roles to the job board for free and drop a pin on a map where the vacancy is being advertised.

European scientists developing new Augmented Reality visor to improve accuracy of surgical interventions (News-Medical)

The VOSTARS (‘Video Optical See-Through Augmented Reality surgical System’) medical visor is a head-mounted display (HMD) system that is capable of superimposing the patient’s x-ray images in perfect 3D unison with their anatomy.

The visor also presents a patient’s anaesthetic data, heart rate, body temperature, blood pressure, and breathing rate directly in the surgeon’s field of vision, aiming to increase accuracy by keeping focus on the operation and to save time by removing the need to look away.

The project forecasts a significant improvement in intervention accuracy, coupled with a reduction of at least 11% in time spent in the operation and under anaesthetic.

[Image: augmented reality surgical system]

Cat LIVESHARE Provides Real-time Remote Support Using Augmented Reality (For Construction Pros)

CAT LIVESHARE delivers the industry’s first augmented reality-based live video calling platform for members of the Cat dealer network to conduct real-time remote support, training and equipment maintenance. Hundreds of users across six Cat dealers are already using the software worldwide in countries including the United States, South Africa, Australia, Denmark and Canada.

The platform delivers the unprecedented ability to save time and money by combining augmented reality with live video streaming, voice, 3D animation, annotation, screen sharing, and white-boarding to simulate the effectiveness of having the expert looking over your shoulder guiding you on what to do step-by-step.

Robb Report launches new augmented reality app (Mumbrella)

Lifestyle and luxury magazine Robb Report has launched a new augmented reality app named ‘The Experience’.

The app offers readers 360-degree walk-throughs, videos, 3D augmented reality content and extra material.

One of the interactive pages will allow readers to try on jewellery, as well as design their own Ferrari and see inside it by choosing the colour and trim and dissecting the engine.

Computer History Museum Enhances Visitor Experience with New Augmented Reality Mobile Tour (Yahoo! Finance)

The Computer History Museum (CHM) today announced a new, enhanced augmented reality public mobile tour of its landmark “Revolution: The First 2000 Years of Computing” exhibition. Leveraging Google’s Tango technology with GuidiGo’s new augmented reality (AR) platform, the museum experience is now enhanced with additional AR content powered by Tango-enabled phones. This 3-D mapping and AR technology provides new ways for visitors to engage with “Revolution,” allowing them to view real-size 3-D animations of artifacts, information, insider stories and gallery directions.

This one-hour tour features 31 of the most important artifacts and stories from the exhibition’s 1,000+ unique artifacts. Visitors can use a Lenovo Phab 2 Pro powered by Tango to experience a virtual demonstration of the popular Babbage Engine, one of the first automatic computing engines.

Artificial Intelligence/Machine Learning

Artificial intelligence being turned against spyware (Horizon)

Artificial intelligence teaches computers to spot malicious tinkering with their own code.

Next year, VisionTechLab plans to release its first AI security services for banks and governments. In the longer term, Dr Maia sees applications for individuals.

Machine-Learning Algorithm Predicts Laboratory Earthquakes (MIT Technology Review)

Enter Bertrand Rouet-Leduc at Los Alamos National Laboratory in New Mexico and a few pals who have made a remarkable discovery. They’ve trained a machine-learning algorithm to spot the tell-tale signs that a laboratory fault is about to give way, using only the sounds it emits under strain.

Rouet-Leduc and co’s work may change that. They created artificial earthquakes in their lab by pulling on one block sandwiched between two others. At the interface between the blocks, they packed a mixture of rocky material, called gouge material, to simulate the properties of real faults.

They recorded the acoustic emissions from the experiment and fed them into a machine-learning algorithm. The idea was to see if the machine could decipher some pattern that geologists had so far missed. And indeed it did.

To their astonishment, the machine gave accurate predictions even when an earthquake was not imminent. “We show that by listening to the acoustic signal emitted by a laboratory fault, machine learning can predict the time remaining before it fails with great accuracy,” they say.
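For readers curious what such a prediction might look like in code, here is a minimal, illustrative sketch (not the Los Alamos team’s code) that regresses “time remaining before failure” from simple statistical features of an acoustic signal; the signal and labels below are synthetic placeholders.

    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    signal = rng.normal(size=100_000)                   # stand-in for the recorded acoustic emissions
    time_to_failure = np.linspace(10.0, 0.0, 100_000)   # stand-in label: seconds until the lab quake

    window = 1_000
    X, y = [], []
    for start in range(0, len(signal) - window, window):
        chunk = signal[start:start + window]
        X.append([chunk.std(), kurtosis(chunk), np.abs(chunk).max()])  # simple statistical features
        y.append(time_to_failure[start + window - 1])

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X, y)
    print(model.predict(X[:5]))                         # predicted seconds remaining for the first windows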

[Image: machine learning earthquakes]

Machine-Learning Software That Aims to Predict Successful Startups (MarketWatch)

PreSeries is a tool for investors that uses machine-learning algorithms to predict the success of startups in their very early stages.

How NASA’s Raven Module Is Developing Machine Learning Algorithms for Autonomous Spacecraft Docking (All About Circuits)

Raven is a technology module designed to test and advance the technologies necessary to service an orbiting spacecraft. It gathers sensor data, processes it with machine learning algorithms, and provides the resulting information to engineers back on Earth.

Raven will help scientists develop the estimation algorithms which are required to robotically approach and dock with an orbiting satellite and achieve in-flight service.

Raven will observe all the visiting vehicles from almost 1km away and gather the vehicle trajectory data with a separation distance of approximately 20 meters. These observations will be used to advance the required real-time algorithms. Since the data is obtained from several different vehicles, it will be possible to understand the sensitivity of the developed algorithms to variations in the size and shape of the vehicles. Since at least five rendezvous dockings will be observed for each vehicle, scientists will be able to develop statistics which can predict the performance of the algorithms for a given vehicle size and shape.

Raven will test multiple sensors such as a visible camera with a variable field of view lens, a long-wave infrared camera, and a short-wave flash LiDAR. The data from these sensors will be collected at distances within 100 meters and with rates up to 1Hz. Since the sensors will be installed on the station rather than on each visiting vehicle, it will be possible to reuse the hardware for a large number of missions and reduce the costs considerably.

[Image: NASA Raven module]

The flash LiDAR illuminates the objects and analyzes the reflected light to find the distance of the reflective surface. Raven is able to do this at a rate of up to 30Hz.

Moreover, the LiDAR uses an optical switch to toggle between a 20-degree and a 12-degree transmit FOV. Using the narrower field of view can considerably increase the signal-to-noise ratio of the LiDAR readings.
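As a back-of-the-envelope illustration of the time-of-flight principle behind a flash LiDAR (not Raven flight code), the range to a reflecting surface follows directly from the round-trip travel time of the light pulse:

    C = 299_792_458.0  # speed of light in m/s

    def range_from_round_trip(t_seconds):
        """Distance to the reflecting surface, given the round-trip time of the pulse."""
        return C * t_seconds / 2.0

    # A return detected about 133 nanoseconds after the pulse corresponds to roughly 20 m,
    # on the order of the separation distances Raven observes at close range.
    print(range_from_round_trip(133e-9))   # ~19.9 m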

A SpaceCube 2.0 Engineering Model serves as Raven’s processing unit. It employs three Xilinx Virtex-5 FPGAs: one acts as the main board controller, while the other two host the application-specific code that implements the Raven flight software.

Machine Learning Algorithms Enhance Predictive Modeling of 2D Materials (nersc.gov)

Using a modeling framework built around a molecular dynamics code (LAMMPS), the research team ran a series of simulations to study the structure and temperature-dependent thermal conductivity of stanene, a 2D material made up of a one-atom-thick sheet of tin. This work, which involved a set of parameters known as the “many-body interatomic potential” or “force field,” yielded the first atomic-level computer model that accurately predicts stanene’s structural, elastic and thermal properties.

In addition, by applying machine learning algorithms, the research team was able to achieve these characterizations much faster than ever before.

The researchers used supervised machine learning algorithms such as Genetic Algorithms combined with Simplex for optimization—algorithms that have been around for 15-20 years and are commonly used in industries such as banking and social media.
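A minimal sketch of the “genetic algorithm plus simplex” idea follows, under assumed toy conditions: a crude evolutionary search proposes force-field parameters, and a Nelder-Mead simplex refines the best candidate. The Lennard-Jones form and reference data here are stand-ins; the actual work fitted a many-body potential for stanene against first-principles data.

    import numpy as np
    from scipy.optimize import minimize

    r = np.linspace(2.5, 6.0, 50)                        # interatomic distances (toy values)
    true_eps, true_sigma = 0.2, 3.0
    reference = 4 * true_eps * ((true_sigma / r)**12 - (true_sigma / r)**6)  # stand-in reference energies

    def loss(params):
        eps, sigma = params
        model = 4 * eps * ((sigma / r)**12 - (sigma / r)**6)
        return np.mean((model - reference)**2)           # misfit between candidate potential and reference

    # Crude genetic step: random population, keep the fittest, mutate around it.
    rng = np.random.default_rng(1)
    population = rng.uniform([0.01, 2.0], [1.0, 4.0], size=(200, 2))
    for _ in range(20):
        best = population[np.argmin([loss(p) for p in population])]
        population = best + rng.normal(scale=0.05, size=(200, 2))

    # Simplex (Nelder-Mead) refinement of the best evolutionary candidate.
    result = minimize(loss, best, method="Nelder-Mead")
    print(result.x)   # should approach (0.2, 3.0)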

Using machine learning to manage app privacy settings (iapp)

Using machine learning, the technology — which so far only works on “rooted” Android phones — helps predict a user’s privacy preferences and manages those across apps stored on the phone.

Using Artificial Intelligence Both In Apps And In The Aisles (Business2Community)

Most recently, Staples announced plans to implement Watson technology to bring to life its Easy Button. Infused with the technology, the button can now take Staples orders by voice, text, email, messaging app or mobile app.

In addition to Staples, the following merchants have partnered with IBM:

  • Macy’s: Macy’s has the On Call app, which combines Watson’s cognitive computing with location-based software to answer shoppers’ in-store questions, such as where a specific clothing brand is located. The program was tested in 10 stores through fall 2016.
  • Under Armour: The maker of high-tech activity apparel recently partnered with Watson to create an app that helps customers track their health and fitness activities, including sleep and nutrition. It in turn provides the users with coaching based on their data, as well as the results of other people who have similar health/fitness profiles.
  • 1-800-Flowers.Com: The digital florist and gift company tapped into Watson to create GWYN, a virtual gift concierge. GWYN can interpret questions such as “I am looking for a gift for my wife” and then ask related questions about the occasion and sentiment to make reliable suggestions.
  • The North Face: The outdoor-gear chain launched a Watson-powered digital shopping tool that presents online coat-shoppers with a series of questions, such as “Where and when will you be using this jacket?” The answers are used to generate relevant coat suggestions.
  • Sears: The 124-year-old department store chain is using Watson to boost one of its tried-and-true categories — tires. The AI-enabled app, called Digital Tire Journey, prompts the shopper with questions and matches the most appropriate tires with driver preferences.

Wearable artificial intelligence system that can understand the tone of a conversation (The Weekly Observer)

While someone is speaking, the system can analyze audio, text transcripts and vital signs to determine the overall tone of the conversation with 83% accuracy. Using deep learning techniques, the system can also provide a “sentiment score” for specific five-second intervals within a conversation.

Google uses AI to help diagnose breast cancer (KSL.com)

Google used a flavor of artificial intelligence called deep learning to analyze thousands of slides of cancer cells provided by a Dutch university. Deep learning is where computers are taught to recognize patterns in huge data sets. It’s very useful for visual tasks, such as looking at a breast cancer biopsy.
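As a generic illustration of the kind of model involved (a toy sketch with random data, assuming TensorFlow/Keras is installed, and in no way Google’s pathology system), a small convolutional network can be trained to classify image patches as containing tumour tissue or not:

    import numpy as np
    from tensorflow import keras

    patches = np.random.random((32, 64, 64, 3)).astype("float32")  # stand-in slide patches
    labels = np.random.randint(0, 2, size=32)                      # 1 = tumour present (synthetic)

    model = keras.Sequential([
        keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(32, 3, activation="relu"),
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(patches, labels, epochs=1, verbose=0)
    print(model.predict(patches[:2]))   # probabilities that each patch contains tumour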

[Image: Google deep learning for breast cancer]

How to Upgrade Judges with Machine Learning (MIT Technology Review)

In a new study from the National Bureau of Economic Research, economists and computer scientists trained an algorithm to predict whether defendants were a flight risk from their rap sheet and court records using data from hundreds of thousands of cases in New York City. When tested on over a hundred thousand more cases that it hadn’t seen before, the algorithm proved better at predicting what defendants will do after release than judges.

The algorithm assigns defendants a risk score based on data pulled from records for their current case and their rap sheet, for example the offense they are suspected of, when and where they were arrested, and numbers and type of prior convictions. (The only demographic data it uses is age—not race.)
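A hypothetical sketch of how a risk score of this kind could be produced is shown below; the features, data, and model choice are assumptions for illustration only, not the study’s actual pipeline.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    n = 5_000
    cases = pd.DataFrame({
        "age": rng.integers(16, 70, n),
        "prior_convictions": rng.poisson(1.5, n),
        "offense_severity": rng.integers(1, 5, n),   # coded current offense (hypothetical feature)
        "arrest_borough": rng.integers(0, 5, n),     # coded arrest location (hypothetical feature)
    })
    # Synthetic label: 1 = failed to appear after release (made up for the sketch).
    failed_to_appear = (rng.random(n) < 0.1 + 0.05 * cases["prior_convictions"].clip(0, 6) / 6).astype(int)

    model = GradientBoostingClassifier().fit(cases, failed_to_appear)
    risk_score = model.predict_proba(cases)[:, 1]    # predicted probability used as the risk score
    print(risk_score[:5])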

Data Preprocessing vs. Data Wrangling in Machine Learning Projects (InfoQ)

A key task when you want to build an appropriate analytic model using machine learning or deep learning techniques is the integration and preparation of data sets from various sources such as files, databases, big data storage, sensors or social networks. This step can take up to 80 percent of the whole analytics project.

Data Preparation is the heart of data science. It includes data cleansing and feature engineering. Domain knowledge is also very important to achieve good results.

Data Cleansing puts data into the right shape and quality for analysis. It includes many different functions, for example the following (a brief code sketch of a few of these steps appears after the list):

  • Basics (select, filter, removal of duplicates, …)
  • Sampling (balanced, stratified, …)
  • Data Partitioning (create training + validation + test data set, …)
  • Transformations (normalisation, standardisation, scaling, pivoting, …)
  • Binning (count-based, handling of missing values as its own group, …)
  • Data Replacement (cutting, splitting, merging, …)
  • Weighting and Selection (attribute weighting, automatic optimization, …)
  • Attribute Generation (ID generation, …)
  • Imputation (replacement of missing observations by using statistical algorithms)
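The following minimal Python sketch, using pandas and scikit-learn on a toy dataset, illustrates a few of the steps listed above: removal of duplicates, imputation, standardisation, and partitioning into training and test sets.

    import pandas as pd
    from sklearn.impute import SimpleImputer
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split

    raw = pd.DataFrame({
        "sensor_a": [1.0, 1.0, 2.5, None, 4.1, 3.3],
        "sensor_b": [10, 10, 14, 9, None, 12],
        "label":    [0, 0, 1, 0, 1, 1],
    })

    deduped = raw.drop_duplicates()                                   # Basics: removal of duplicates
    features = deduped.drop(columns="label")
    imputed = pd.DataFrame(SimpleImputer(strategy="mean").fit_transform(features),
                           columns=features.columns)                  # Imputation of missing values
    scaled = StandardScaler().fit_transform(imputed)                  # Transformation: standardisation

    X_train, X_test, y_train, y_test = train_test_split(              # Data partitioning
        scaled, deduped["label"], test_size=0.3, random_state=0)
    print(X_train.shape, X_test.shape)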

Data preparation occurs in different phases of an analytics project:

  • Data Preprocessing: Preparation of data directly after accessing it from a data source. Typically realized by a developer or data scientist for initial transformations, aggregations and data cleansing. This step is done before the interactive analysis of data begins. It is executed once.
  • Data Wrangling: Preparation of data during the interactive data analysis and model building. Typically done by a data scientist or business analyst to change views on a dataset and for features engineering. This step iteratively changes the shape of a dataset until it works well for finding insights or building a good analytic model.

Data preprocessing focuses on preparing data before you build an analytic model, while data wrangling is used to adjust data sets interactively while analyzing data and building the model. Note that both phases can include data cleansing as well as feature engineering.

[Image: data preprocessing vs. data wrangling]

The following are the user roles who participate in an analytics project:

  • Business Analyst: Expert in the business / industry with specific domain knowledge
  • Data Scientist: Expert in Mathematics, Statistics and Programming (data science / scripting); can write low-level code or leverage higher level tooling
  • Citizen Data Scientist: Similar to the data scientist, but more high level; needs to use higher level tooling instead of coding; depending on the ease of the tooling this can even be done by the business analyst
  • Developer: Expert in software development process (enterprise applications)

Some programming languages are built explicitly for data science or have strong support for it, notably R and Python. They include various implementations of machine learning algorithms, preprocessing functions such as filter or extract, and data science functions such as scale, normalize or shuffle. The data scientist needs to write relatively low-level code for exploratory data analysis and preparation.

These languages are built for the data scientist to prepare data and build analytic models, but not for deploying the resulting analytic model to new data at enterprise scale and with high reliability.

Data wrangling (also sometimes called data munging) is a simple, intuitive way of preparing data with a graphical tool. The focus of these tools is on ease of use and agile data preparation.

Data wrangling tools and visual analytics tools with inline data wrangling can be used by every user role (business analysts, citizen data scientists and developers) to speed up data preparation and data analysis.

[Image: streaming analytics flow]

Coursera Partners Google, Offers Courses On Big Data And Machine Learning (TeleAnalysis)

With this collaboration Coursera is launching its first course in the Data Engineering on Google Cloud Specialization, “Big Data and Machine Learning.” This course will be the first in a 5-course Specialization. More foundational, intermediate, and advanced courses in infrastructure, machine learning, analytics, and application development are planned for launch soon.

TaleSpin Is Bringing Machine Learning and Chatbots to Physical Retail (The Marshall Town)

TaleSpin is sold as a software-as-a-service product to retailers, so they can add the technology to their stores at a very low cost, and it uses image recognition and machine learning to build a catalogue of the various products that are available in their store.

TaleSpin’s image recognition algorithm can identify things like sleeves and collars, patterns, and even use the visual to identify different occasions a piece of clothing would be used for. And all of these tags are hooks that the chatbot can make use of too.

Facebook’s new machine learning framework emphasizes efficiency over accuracy (InfoWorld)

Facebook’s AI research division (FAIR) recently unveiled, with little fanfare, a proposed solution called Faiss. It’s an open source library, written in C++ and with bindings for Python, that allows massive data sets like still images or videos to be searched efficiently for similarities.

The problem wasn’t only how to run similarity searches, or “k-selection” algorithms, on GPUs, but how to run them effectively in parallel across multiple GPUs, and how to deal with data sets that don’t fit into RAM (such as terabytes of video).

Faiss’ trick is not to search the data itself, but a compressed representation that trades a slight amount of accuracy for an order of magnitude or more of storage efficiency.
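A small usage sketch of this idea with the Faiss Python bindings follows (assuming the faiss package is installed; parameters are chosen purely for illustration): build a product-quantized inverted-file index, train it on the data, and run an approximate nearest-neighbour search.

    import numpy as np
    import faiss

    d = 128                                                  # vector dimensionality (e.g. an image embedding)
    xb = np.random.random((100_000, d)).astype("float32")    # database vectors
    xq = np.random.random((5, d)).astype("float32")          # query vectors

    quantizer = faiss.IndexFlatL2(d)                         # coarse quantizer for the inverted file
    index = faiss.IndexIVFPQ(quantizer, d, 1024, 16, 8)      # 1024 lists, 16 sub-quantizers, 8 bits each
    index.train(xb)                                          # learn the compressed representation from the data
    index.add(xb)
    D, I = index.search(xq, 5)                               # approximate 5 nearest neighbours per query
    print(I)

The IndexIVFPQ parameters (number of lists, sub-quantizers and bits) control the accuracy-versus-memory trade-off the article describes.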

Machine-Learning App Aims To Boost Digital Personalization (MediaPost)

The new technology uses machine-learning algorithms that are continuously applied to determine, in real-time, how to personalize advertising creative and messaging to people while they are still in the market for a product or service.

Physicists extend quantum machine learning to infinite dimensions (PHYS.org)

Physicists have developed a quantum machine learning algorithm that can handle infinite dimensions—that is, it works with continuous variables (which have an infinite number of possible values on a closed interval) instead of the typically used discrete variables (which have only a finite number of values).

One of the biggest advantages of having a quantum machine learning algorithm for continuous variables is that it can theoretically operate much faster than classical algorithms.

Computer Vision/Machine Vision

MERLIC 3 machine vision software announced by MVTec (Vision-Systems)

The MERLIC 3 software library comes with an OCR classifier that is based on deep learning and can be applied to a wide range of fonts. The new function, according to MVTec, offers unprecedented detection rates for number and character combinations, e.g., on workpieces, for reliable identification. Furthermore, the latest software release enables more robust reading of dot print fonts.

Improved reading of bar and data codes is also included as part of the new release, which makes the recognition of blurry, overexposed, distorted, or low-contrast QR codes more robust. QR codes with uneven column widths can now also be read, and the software can also read partially occluded or partially defective barcodes.

Xilinx Demonstrates Responsive and Reconfigurable Vision Guided Intelligent Systems (PR Newswire)

Xilinx, Inc. (NASDAQ: XLNX) is demonstrating responsive and reconfigurable vision guided intelligent systems at Embedded World 2017. Conference presentations and demonstrations will show how Xilinx’s tools, libraries and methodologies infuse machine learning, computer vision, sensor fusion, and connectivity into vision guided intelligent systems.

Xilinx Conference Presentations

  • OpenAMP Within An Industrial IoT Framework
  • Heterogeneous Multicore for Safety: Why You Need Multiple Different Cores on Your SoC
  • Virtualization Methodology for Real Time Physical System Self Optimization
  • Session 18: FPGA-SoCs – Managing Power for Performance with a Heterogeneous Processing System on a Chip
  • Designing Industry 4.0 Silicon Carbide High Power Inverters with Zynq-7000 and Zynq UltraScale+ MPSoC with Machine Learning Diagnostics Capability
  • Session 35: Embedded Vision – Combining OpenCV and High Level Synthesis to Reduce Embedded Vision Development Time

Xilinx Demonstrations – Stand 1/1-205

Driver Monitoring System – Presented by Xylon
The Xylon Driver Monitoring System, enabled on the Xilinx® All Programmable Zynq® SoC, is a mature, automotive qualified platform that provides the flexibility required to optimize system partitioning for vision processing performance at an affordable cost with a relatively low power profile.

Flying Camera System Using ADAS Framework – Presented by Xylon
This system utilizes the Zynq® UltraScale™+ MPSoC, with its quad-core ARM® Cortex™-A53 and dual ARM Cortex-R5 cores, in addition to the Mali™-400 GPU for rendering the car model.

Hardware Acceleration of 4K60 Dense Optical Flow – Presented by Xilinx
This demonstration shows the power of Zynq Ultrascale+ MPSoC by running a state-of-the-art Dense Optical Flow algorithm at 4K resolution with 60 frames per second in the programmable logic.

Deep Learning, Computer Vision, and Sensor Fusion – Presented by Xilinx
This demonstration combines three major, complex algorithms commonly used in vision-guided systems today: Convolutional Neural Networks (CNN) for object detection or scene segmentation, Dense Optical Flow for motion tracking, and Stereo Vision for depth perception, all running on a single Zynq UltraScale+ MPSoC device.

UHD HEVC Real-Time Compression and Streaming Playback – Presented by Xilinx
Experience low latency, real-time UHD video compression/decompression and transmission with the video codec hard block integrated into the Zynq UltraScale+ MPSoC EV devices. Supporting up to 8K video, the video codec enables low bandwidth data transfer with low latency for real-time video applications including drone operation and video conferencing.

Embedded Vision Robotics System with Accelerated Image Recognition – Presented by MLE
This industrial pick and place demonstration utilizes binary neural networks for machine vision recognition via programmable logic. Programmable logic implementation accelerates image recognition by over 1000x compared to software implementations while increasing recognition accuracy and reliability necessary for high performance, mission critical vision-enabled applications.

3D Object Recognition Industrial Camera for Machine Vision – Presented by iVeia
This demonstration highlights the heterogeneous architecture of the Zynq UltraScale+ MPSoC as it enables traditional PC-based algorithms on an embedded camera platform to accelerate time to market in machine vision applications.

Dobby AI Drone – Presented by ZeroTech and DeePhi Deep Learning
The ZeroTech Dobby AI is a pocket-sized drone, powered by Xilinx Zynq SoC devices, that uses deep learning to detect human gestures. This demonstration will also showcase the deep learning inference technologies from DeePhi.

Functional Safety Reference System – Presented by Ikerlan
This demonstration showcases a comprehensive design template for custom solutions with Hardware Fault Tolerance (HFT=1) on a single device, compliant with the IEC 61508:2010 standard, using a Xilinx Zynq SoC device. Advanced support for a comprehensive PC-controlled error injection function will be presented using Xilinx’s new Zynq SoC Safety Reference System.

Intelligent Gateway for Cybersecurity – Presented by SoCe
This demonstration combines sensor fusion, cybersecurity, and real-time communication and control with machine learning, providing high availability from the edge to the cloud based on the Xilinx Zynq SoC platform. The production-ready intelligent gateway is time-aware and provides secure networking interoperability and control, demonstrating the convergence of Operational Technology with Information Technology.

All Programmable Industrial Control Systems – Presented by BE Services, Matrikon, and RTI
This demonstration will showcase an Industrial Control System (ICS): a customer-ready platform demonstrating network interoperability with secure edge analytics extendable to the cloud, powered by the All Programmable capabilities of the Zynq SoC and Zynq UltraScale+ MPSoC devices.

USB 3.0 Camera for machine vision applications (eeherald)

e-con Systems has launched a USB 3.0 embedded camera module for applications such as machine vision, barcode detection on moving objects, and object tracking. The See3CAM_130, a 4K autofocus USB 3.0 camera, is based on the 1/3.2-inch AR1335, a CMOS image sensor from ON Semiconductor, and can produce Ultra HD (3840 x 2160) at 30fps and 4K Cinema (4096 x 2160) at 30fps over USB 3.0 in compressed MJPEG format.

Salesforce launches custom image recognition as Einstein goes GA (PCWorld)

Einstein Vision, as it’s known, allows users to upload sets of images and classify them in a series of categories. After that, the system will create a recognizer based on machine learning technology that will identify future images fed into it.

The Einstein Vision APIs will allow customers to do things like create applications that know how to recognize different brands in photographs, or know how to spot pitched roofs versus flat ones. Developers can then take that custom recognizer and build it into applications.
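To make the flow concrete, here is a hypothetical sketch of querying such a custom recognizer over REST after it has been trained; the endpoint URL, field names, and model ID below are placeholders and assumptions for illustration, and should be checked against Salesforce’s Einstein Vision documentation.

    import requests

    ACCESS_TOKEN = "YOUR_TOKEN"          # placeholder OAuth token
    MODEL_ID = "YOUR_CUSTOM_MODEL_ID"    # placeholder ID of the trained recognizer

    response = requests.post(
        "https://api.einstein.ai/v2/vision/predict",               # assumed prediction endpoint
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        data={"modelId": MODEL_ID,
              "sampleLocation": "https://example.com/roof.jpg"},    # image to classify
    )
    print(response.json())   # expected: class labels with probabilities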


About the author

Allen Taylor

An award-winning journalist and former newspaper editor, I currently work as a freelance writer/editor through Taylored Content. In addition to editing VisionAR, I edit the daily news digest at Lending-Times. I also serve the FinTech and AR/AI industries with authoritative content in the form of white papers, case studies, blog posts, and other content designed to position innovators as experts in their niches. I am also a published poet and fiction writer.
