Daily News Digest Featured News

Lampix wins SXSW accelerator award for interactive surface AR, DeepMind adds memory to AI, machine vision camera requires no programming


Top Stories: AR, AI, CV

News Summary

Augmented Reality

Interactive Surface Developer Lampix AR/VR Winner of SXSW Accelerator Event (Next Reality)

Judges at the South by Southwest (SXSW) Accelerator Pitch Event named Lampix the winner in the Augmented and Virtual Reality category, conference organizers announced today.

Lampix, which masquerades as an ordinary lamp, projects an augmented reality interface onto any physical surface. Without glasses or other peripherals, Lampix users can digitally interact with analog documents and other objects.

Lampix, of course, includes what every lamp should—lights—via LEDs on the top and bottom. The rest is made up of a Raspberry Pi 3, projector, video camera, and HD camera. The projector displays the “screen” on the surface, while the cameras capture your movements so the software on the Pi 3 can translate them. Lampix will offer an open API so developers can build their own apps for the platform.

Apple iPhone 8: Patent hints at 3-D selfies, augmented reality and driverless cars (CNBC)

Apple has been awarded a patent for advanced facial recognition technology, boosting rumors about features in the upcoming iPhone 8 and hinting at future areas such as augmented reality (AR) and driverless cars.

Apple’s patent filing is for an “enhanced face detection using depth information”.


A number of reports have suggested bolstered recognition technology. Top Apple analyst Ming-Chi Kuo said in a note last month that the iPhone 8 would feature a camera and infrared module that can detect faces and even lead to 3D selfies which could then be used to replace a character’s face in a game, for example.

JPMorgan analyst Rod Hall published a report claiming that the iPhone 8 might forgo a fingerprint sensor and home button in favor of a 3-D scanner which could carry out biometric facial scanning. This could be used to unlock the phone as well as authenticate a user for the App Store or Apple Pay.

The Cupertino, CA-based firm has not mentioned the iPhone 8 in the document, but much of the technology outlined appears to mirror what the analysts have said.

The technology would work by scanning a scene via the camera, with the software able to detect human faces.

Chairish and Decaso Launch Augmented Reality (Architectural Digest)

The process is simple: snap a photo of your space, open the Chairish or Decaso app, select the item you’re eyeing, and click “view in my space” to superimpose the item onto your own photo. From there, you can adjust for scale, move the item around, and save the photo or share it with friends or to a Pinterest board.

Augmented Reality Empowers Indonesian Women to Operate Online Stores (Next Reality)

A store owner, who can be located anywhere—from their living room to a public coffee shop—shows a customer their partner card. Using a dedicated app on their own smartphone, that customer scans the card to see a 3D retail store appear on their screen. From there, the customer enters that partner’s virtual store and can browse and purchase merchandise to have shipped to them. The store owner then gets a cut of the sale.

Slingshot, the Indonesian technology and media company that operates MindStores, announced this week that the store network has opened more than 7,000 stores in Indonesia since its launch last June.

Educating with augmented reality at South Marshall Elementary (ArkLaTex homepage)

Students are using a first-of-its-kind program called Osmo, which incorporates augmented reality into problem solving. Standard ways of learning vocabulary and math transform into interactive games.

The tech-involved curriculum prepares them early for a future where a foundation in STEM gives them the advantage.

Artificial Intelligence/Machine Learning

DeepMind’s new algorithm adds ‘memory’ to AI (Wired UK)

When DeepMind burst into prominence in 2014, it taught its machine learning systems how to play Atari games. The system could learn to defeat the games, and score higher than humans, but not remember how it had done so.

For each of the Atari games, a separate neural network had to be created. The same system could not be used to play Space Invaders and Breakout without the information for both being given to the artificial intelligence at the same time. Now, a team of DeepMind and Imperial College London researchers have created an algorithm that allows its neural networks to learn, retain the information, and use it again.

The group says it has been able to show “continual learning” that’s based on ‘synaptic consolidation’. In the human brain, the process is described as “the basis of learning and memory”.

To give the AI systems a memory, the DeepMind researchers developed an algorithm called ‘elastic weight consolidation’ (EWC).

To test the algorithm, DeepMind used the deep neural network, called Deep Q-Network (DQN), it had previously used to conquer the Atari games. This time, however, the DQN was “enhanced” with the EWC algorithm. The researchers tested the algorithm and the neural network on a random selection of ten Atari games, which the AI had already proven it could play as well as a human player. Each game was played 20 million times before the system automatically moved to the next one.
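The core idea behind elastic weight consolidation can be sketched in a few lines: after learning one task, each parameter is anchored by a quadratic penalty weighted by its estimated importance, so training on a new task resists overwriting weights critical to the old one. The function and values below are a minimal illustrative sketch, not DeepMind's implementation.

```python
# Minimal sketch of the elastic-weight-consolidation (EWC) penalty:
# total loss = new-task loss + (lam/2) * sum_i F_i * (theta_i - theta_A_i)^2,
# where F_i estimates how important parameter i was for the old task.

def ewc_loss(task_b_loss, params, anchor_params, importance, lam=0.5):
    """New-task loss plus a penalty for drifting from old-task weights."""
    penalty = sum(
        f * (p - p_a) ** 2
        for p, p_a, f in zip(params, anchor_params, importance)
    )
    return task_b_loss + (lam / 2.0) * penalty

# Toy usage: a weight that mattered for task A (importance 0.9) is
# penalized far more for drifting than one that did not (importance 0.1).
anchor = [1.0, -2.0]        # weights learned on task A
importance = [0.9, 0.1]     # per-weight importance estimates
drifted = [1.5, -2.5]       # weights after some training on task B
print(ewc_loss(0.4, drifted, anchor, importance))
```

When the new weights equal the anchors, the penalty vanishes and only the new-task loss remains, which is what lets a single network accumulate tasks instead of needing one network per game.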


Artificial intelligence experts unveil Baxter the robot – who you control with your MIND (Express)

A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University has created a system that allows people to correct robot mistakes instantly with nothing more than their brains.

Using data from an electroencephalography (EEG) monitor that records brain activity, the system can detect if a person notices an error as a robot performs an object-sorting task.

The team’s novel machine-learning algorithms enable the system to classify brain waves in the space of 10 to 30 milliseconds.

A Beginner’s Guide to Machine Learning Algorithms (Dataconomy)

Now that we’ve covered what machine learning can do in terms of predictions, we can discuss the machine learning algorithms, which come in three groups: linear models, tree-based models, and neural networks.
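To make the three families concrete, here is a toy illustration of how each maps the same input to a score. All weights and thresholds are invented for illustration, not trained values.

```python
import math

def linear_model(x):
    # Linear model: a weighted sum of the features plus a bias.
    w, b = [0.8, -0.3], 0.1
    return w[0] * x[0] + w[1] * x[1] + b

def tree_model(x):
    # Tree-based model: nested threshold rules on individual features.
    if x[0] > 0.5:
        return 1.0 if x[1] > 0.2 else 0.6
    return 0.2

def neural_model(x):
    # Tiny neural network: one hidden layer with a nonlinearity.
    h = [math.tanh(0.5 * x[0] + 0.2 * x[1]),
         math.tanh(-0.4 * x[0] + 0.9 * x[1])]
    return 1.0 * h[0] + 0.5 * h[1]

x = [0.7, 0.4]
for name, model in [("linear", linear_model),
                    ("tree", tree_model),
                    ("neural", neural_model)]:
    print(name, round(model(x), 3))
```

The structural difference is the point: linear models draw a single flat boundary, trees carve the space with axis-aligned rules, and neural networks compose nonlinear transformations.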


Discounter uses machine learning to stay ‘on Target’ with shopper demand (Chain Store Age)

By leveraging machine learning to tap into customer demand, the retailer is defining which touchpoints are not only valuable, but influencing its shoppers’ paths-to-purchase.

Incredible flying car with Artificial Intelligence could revolutionise transportation by letting you fly above traffic jams (Mirror)

Like something out of a sci-fi film, the high-tech concept uses ground and air ‘modules’ so that the vehicle can travel on roads and through the skies.

Passengers would plan their journey and book their trip via an easy-to-use app.

The capsule transforms itself into a city car by simply coupling to the ground module, which features a carbon-fibre chassis and is battery powered.

Artificial intelligence virtual consultant helps deliver better patient care (EurekAlert!)

The researchers used cutting-edge artificial intelligence to create a “chatbot” interventional radiologist that can automatically communicate with referring clinicians and quickly provide evidence-based answers to frequently asked questions. This allows the referring physician to provide real-time information to the patient about the next phase of treatment, or basic information about an interventional radiology treatment.

In this research, deep learning was used to understand a wide range of clinical questions and respond appropriately in a conversational manner similar to text messaging.

Google’s new machine learning API recognizes objects in videos (TechCrunch)

At its Cloud Next conference in San Francisco, Google today announced the launch of a new machine learning API for automatically recognizing objects in videos and making them searchable.

Besides extracting metadata, the API allows you to tag scene changes in a video.
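Scene-change tagging of the kind described can be sketched generically: flag a boundary wherever consecutive frame summaries differ by more than a threshold. This is not Google's API; the histogram representation and threshold below are assumptions for illustration.

```python
# Generic scene-change detection sketch: each "frame" is a stand-in
# color histogram (list of floats); a large L1 distance between
# consecutive histograms marks the start of a new scene.

def scene_changes(frame_histograms, threshold=0.5):
    """Return frame indices where a new scene appears to start."""
    changes = []
    for i in range(1, len(frame_histograms)):
        prev, cur = frame_histograms[i - 1], frame_histograms[i]
        diff = sum(abs(a - b) for a, b in zip(prev, cur))
        if diff > threshold:
            changes.append(i)
    return changes

# Two near-identical frames, an abrupt cut, then two more similar frames.
frames = [[0.9, 0.1], [0.88, 0.12], [0.1, 0.9], [0.12, 0.88]]
print(scene_changes(frames))
```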

Machine learning reveals lack of female screen time in top films (New Scientist)

Technology that automatically detects how often men and women appear on screen reveals that in recent popular films, men have had almost twice as much screen time as women.

The software uses algorithms for face and voice recognition that have been trained on annotated video to identify whether a character is male or female, and can measure how long they are on screen to a fraction of a second.

Using algorithms for voice and face recognition that run in parallel, Narayanan’s program can analyse a feature film in less than 15 minutes. The software can also distinguish when one person is on screen but someone else is talking, says Narayanan.
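The screen-time accounting itself is simple once a detector has labeled each frame. The sketch below assumes a hypothetical per-frame detector output and an invented frame rate; it shows only the tallying step, not Narayanan's actual pipeline.

```python
# Sketch: sum per-label screen time from per-frame face-detector labels.

def screen_time_seconds(frame_labels, fps=24.0):
    """Map per-frame detections to seconds of screen time per label."""
    totals = {}
    for labels in frame_labels:        # labels = faces visible in one frame
        for label in set(labels):      # count each label once per frame
            totals[label] = totals.get(label, 0.0) + 1.0 / fps
    return totals

# 4 frames at 24 fps: men on screen in all 4 frames, women in 2.
frames = [["m"], ["m", "f"], ["m", "f"], ["m"]]
times = screen_time_seconds(frames)
print({k: round(v, 4) for k, v in times.items()})
```

Because the tally advances one frame at a time, the resolution is 1/fps of a second, which matches the article's claim of measuring screen time to a fraction of a second.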

ZyloTech, Formerly DataXylo, Announces First Artificial Intelligence Platform for Omni-Channel Customer Analytics (Yahoo! Finance)

ZyloTech today announced the first artificial intelligence-powered platform for advanced customer analytics. The platform enables companies to solve data quality issues and analyze all customer data continuously and in near real time for superior insights in support of omni-channel marketing operations.

The company’s award-winning A.I.-powered customer analytics platform solves several critical marketing operations challenges, including:

  • Acquiring a new customer is five to 25 times more expensive than retaining an existing one; yet only 16 percent of companies put their primary marketing focus on growing existing customers.
  • Without automated tools for data curation and analytics, marketers must rely on IT and can’t know or interact with their customers on a meaningful, personal basis.
  • Omni-channel customer data moves too quickly, perishes easily and is too varied for today’s resource-intensive big data approaches relying on armies of data scientists, whose manual work can curate or consume at best 10 to 15 percent of customer data, in weeks or months of work.
  • Marketers are forced to throw big money at new customer acquisition: $68 billion in digital ad spends and $9 billion in direct mail ad spends per year, including to their existing customers.

Machine learning writes songs that elicit emotions from their listeners (ScienceDaily)

Numao and his team of scientists wanted to enhance the interactive experience by feeding to the machine the user’s emotional state. Users listened to music while wearing wireless headphones that contained brain wave sensors. These sensors detected EEG readings, which the robot used to make music.

Insurers to implement machine learning to drive profitability (Yahoo! Finance)

Willis Towers Watson (WLTW), a leading global advisory, broking and solutions company, has released an updated version of its Radar pricing software. Radar 3.0 features new analytical techniques in response to increasing demands from property & casualty (P&C) insurers to embed more sophisticated pricing approaches, as competition continues to ramp up globally.

Building on the last major release of Radar in 2016, Radar 3.0 now implements a wider range of machine learning models, enabling a greater number of analytical techniques to deliver pricing approaches that are more effective. Processing speed is also up to five times faster, so companies can refine and test pricing approaches more rapidly.

New machine learning capabilities — Machine learning models now in Radar include gradient boosting machines, random forests and elastic nets.
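Of the models named, the elastic net is the most compact to illustrate: it blends L1 (lasso) and L2 (ridge) regularization of the coefficients. The function below is a generic sketch of that penalty with illustrative values, not Radar's implementation.

```python
# Elastic-net penalty sketch:
#   alpha * (l1_ratio * ||w||_1 + 0.5 * (1 - l1_ratio) * ||w||_2^2)
# l1_ratio=1 recovers the pure lasso; l1_ratio=0 recovers ridge.

def elastic_net_penalty(weights, alpha=1.0, l1_ratio=0.5):
    l1 = sum(abs(w) for w in weights)
    l2 = sum(w * w for w in weights)
    return alpha * (l1_ratio * l1 + 0.5 * (1.0 - l1_ratio) * l2)

print(elastic_net_penalty([0.5, -1.0, 0.0]))
```

The L1 term drives small coefficients exactly to zero (feature selection), while the L2 term keeps correlated rating factors from producing unstable coefficients, which is why the blend suits pricing models with many overlapping variables.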

Jetson TX2 embedded artificial intelligence computer introduced by NVIDIA (VisionSystems)

Jetson TX2 features a 256-core NVIDIA Pascal GPU and a CPU complex pairing a dual-core 64-bit NVIDIA Denver 2 with a quad-core ARM Cortex-A57, and delivers 4K x 2K video at 60 fps. The computer features 12 CSI lanes supporting up to six cameras, at 2.5 Gbps per lane. For memory, the Jetson TX2 features 8GB of LPDDR4 with 58.3 gigabytes/second of bandwidth, and for storage, 32GB eMMC. Additionally, the Jetson TX2 has Gigabit Ethernet connectivity, as well as USB 2.0 and USB 3.0 ports, with Linux for Tegra for operating system support.

NVIDIA’s new product is an open platform that is accessible to anyone for developing AI solutions at the edge, and it also enables Cisco to add AI features such as facial and speech recognition to its Cisco Spark products. The supporting software stack includes:

  • TensorRT 1.0, a high-performance neural network inference engine for production deployment of deep learning applications
  • cuDNN 5.1, a GPU-accelerated library of primitives for deep neural networks
  • VisionWorks 1.6, a software development package for computer vision and image processing
  • The latest graphics drivers and APIs, including OpenGL 4.5, OpenGL ES 3.2, EGL 1.4 and Vulkan 1.0
  • CUDA 8, which turns the GPU into a general-purpose massively parallel processor, giving developers access to tremendous performance and power-efficiency
Computer Vision/Machine Vision

ArcelorMittal engineer helps steel mills see problems, solutions (NWI Times)

A researcher at ArcelorMittal Global R&D in East Chicago has helped the company develop a machine vision system and deploy it in the steelmaking process.

Machine vision and surface inspection systems automatically scan steel products coming off the assembly line for any defects, so any issues can be rectified right away.

Quantum computer learns to ‘see’ trees (Science Mag)

The team used a D-Wave 2X computer, an advanced model from the Burnaby, Canada–based company that created the world’s first quantum computer in 2007. Conventional computers can already use sophisticated algorithms to recognize patterns in images, but it takes lots of memory and processor power.

In the new study, physicist Edward Boyda of St. Mary’s College of California in Moraga and colleagues fed hundreds of NASA satellite images of California into the D-Wave 2X processor, which contains 1152 qubits. The researchers asked the computer to consider dozens of features—hue, saturation, even light reflectance—to determine whether clumps of pixels were trees as opposed to roads, buildings, or rivers. They then told the computer whether its classifications were right or wrong so that the computer could learn from its mistakes, tweaking the formula it uses to determine whether something is a tree.
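The study's train-on-mistakes loop ran on quantum hardware, but its shape is familiar from classical learning. As a classical stand-in, here is a perceptron-style sketch with the same structure: classify feature vectors for pixel clumps, then nudge the weights only when the answer was wrong. The features and labels are toy values, not the NASA imagery.

```python
# Perceptron-style mistake-driven training loop (classical sketch of the
# classify / check / tweak cycle described in the article).

def train(samples, labels, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # y is +1 (tree) or -1 (not tree)
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
            if pred != y:                  # learn only from mistakes
                w = [w[0] + lr * y * x[0], w[1] + lr * y * x[1]]
                b += lr * y
    return w, b

# Toy features: [greenness, reflectance]. Trees score high on greenness.
samples = [[0.9, 0.2], [0.8, 0.3], [0.1, 0.7], [0.2, 0.8]]
labels = [1, 1, -1, -1]
w, b = train(samples, labels)

def classify(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

print(classify([0.85, 0.25]), classify([0.15, 0.75]))
```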

Machine-vision camera requires no programming (EE Times)

The NEON-1021-M Intel Atom E3845 processor-based camera supports fast multi-ROI image capture, 2MP 60 fps global shutter image sensors, and PWM lighting control. High-speed multi-barcode capture via the multi-ROI function shortens image processing cycle time, and optimized I/O includes one additional slave GigE Vision camera connection, 4x isolated inputs, 4x isolated outputs, and VGA output for maximized integration with external devices. Rugged construction with IP67-rated housing and M12 connectors enables the NEON-1021-M to withstand the harshest industrial environments.

Graphics display controller provides new solutions for embedded computer vision applications (Electropages)

Socionext has developed the “SC1810” series, the fourth generation of its high-performance graphics display controllers.

In addition to its high-resolution graphics capability, with 3D image processing performance five times that of the company’s previous products, the SoC can also handle six channels of Full HD video input and three channels of Full HD display output, enabling a variety of input and output controls.

Myntra to launch fully-automated design collection with zero human intervention (BusinessLine)

Moda Rapido, Myntra’s in-house fast fashion brand powered by Artificial Intelligence, launched eighteen months ago, will now be the only fashion brand in the country to offer a fully-automated design collection, without any human design intervention.

The computer is fed with data from the entire digital space, including fashion portals, social media and Myntra’s own customer data, which the computer scans to zero in on what customers are looking for. Using computer vision and Artificial Intelligence (machine learning) on the scanned data, the platform creates thousands of permutations and combinations and zooms in on what would sell best; creating a TechPack which has all the design dimensions and specifications for manufacturing.

How Trulia’s ‘computer vision’ is helping consumers find the perfect home (Inman)

Around the same time, Trulia began its foray into computer vision by building a proprietary image recognition system that could tag a kitchen, bathroom or bedroom and recognize smaller objects, such as granite countertops, stainless steel appliances and hardwood floors.

Since then, Trulia has been teaching its computers to be smarter, faster and more accurate, all in the name of helping their users find the place they’ll call “home.”

The computer vision system helps Trulia find the exact home or homes that fit that particular user’s criteria by looking at the listing photos and identifying the content of those photos.

Computer vision research aims to identify overweight people from social media face photos (The Stack)

The paper Face-to-BMI: Using Computer Vision to Infer Body Mass Index on Social Media outlines how the team, led by Enes Kocabey, used a Reddit-derived image dataset developed by the Visual BMI Project to teach a computer to understand facial topology that seems to indicate above-average Body Mass Index (BMI).

The dataset (consisting of 4206 faces in relatively arbitrary angles to camera) derives from a series of ‘before’ and ‘after’ pictures of people who have undertaken weight-loss regimes, isolating just the facial elements from the resulting photos, in order to establish some idea of baselines beyond which increased BMI might be indicated in the average person.


About the author

Allen Taylor

An award-winning journalist and former newspaper editor, I currently work as a freelance writer/editor through Taylored Content. In addition to editing VisionAR, I edit the daily news digest at Lending-Times. I also serve the FinTech and AR/AI industries with authoritative content in the form of white papers, case studies, blog posts, and other content designed to position innovators as experts in their niches. Also, a published poet and fiction writer.
