Daily News Digest Featured News

Drive a car like an F-35 with AR, in search of a machine learning master algorithm, Facebook can tell what you’re wearing with object recognition

News Summary

Augmented Reality

Pioneers set sail with cutting-edge augmented reality applications (IoT Agenda)

By digitalizing the ship’s blueprints and using a custom-developed augmented reality app built by the partners on PTC’s Vuforia AR platform, shipyard workers can now easily identify the temporary steel that needs to be removed just by walking around using a tablet. The AR app color codes the temporary structures as green while permanent steel and foundations are displayed in other colors, which allows the steel that needs to be removed to be identified quickly and easily, Lilley said.

Transform your surroundings to learn anytime, anywhere! (India Today)

For example, AR can help enhance the museum-going experience of students. By simply holding a phone over an exhibit object, they can see the overlay. This recent innovative Google project will be made available first at the Detroit Institute of Arts.

Presently, 150 locations across the US have AR statues with QR codes affixed. Triggered by the app, a statue appears and can be rotated 360 degrees for detailed viewing.

VW will introduce heads-up display with augmented-reality in its upcoming electric vehicles (Electrek)

On top of the large interior, VW plans to go minimalist with the “ultimate reduction”, says Bischoff:

In the interior, VW designers are shooting for the “ultimate reduction” — eliminating console elements in favor of a tablet and a heads-up display enhanced by augmented-reality.

While it has long been present in concept vehicles, augmented reality might soon be ready to go mainstream in production cars. There are still some hurdles to adoption, but the introduction of autonomous driving technology is expected to ease the way.

Is This the Real Life? Is This Just Fantasy? AR Hits Manufacturing (IndustryWeek)

The tipping point, Ryan said, was a large project, sustained over three months in 2012, that introduced “digital storyboarding, a kind of mobile (virtual reality) solution” that cut down on the need for the massive drawings used for reference throughout the yard: “When we bring in a new generation of shipbuilders, giving them a drawing that’s three feet wide, two feet tall and a foot thick is not how they’re going to want to go to work.” Of course, the digital storyboarding also demonstrated a 35% cost reduction in the construction of one craft over those three months, which helped spark the overall shift toward digital.

NNS uses tablets for just about all its AR initiatives, most often for inspection—Ryan said one inspection process that normally lasts 36 hours had been trimmed to 90 minutes—as well as work instruction, training and the continued elimination of those oversized ship diagrams.

Is Augmented Reality the Future of Trying On Clothes? (Esquire)

The company is slated to launch an app called DressingRoom, which will use augmented reality to allow you to try on clothes without leaving your house.

To use the app—which was built in a collaboration between Google and San Francisco-based startup Avametric—all you have to do is enter information like your height and weight, and a 3D virtual model will appear in front of you to show you how the clothes will fit. If you like what you see, you can buy it right from the app.

Augmented Reality: A Future Content Marketing Staple? (Business2Community)

Just imagine how it could revolutionise the dating world. An app harnessing AR could conceivably allow users to scan a room and reveal the relationship status, age, interests and sexual orientation of those people around them that are also using the same app.

On 25 January, the Detroit Institute of Arts (DIA) introduced Lumin, a mobile AR tour that gives the museum’s visitors an opportunity to engage with displays in a unique way.

By using a mobile device – provided by the museum – guests can blend the real world with a virtual one, enhancing the overall experience by bringing exhibits to life.

Drive a Car Like You’d Fly an F-35 With Augmented Reality (Wired)

Aero Glass bases its product on the R-7 augmented reality glasses produced by San Francisco-based Osterhout Design Group. Running Android and powered by a Qualcomm Snapdragon 805 processor, they use a 1080p camera to read the user’s view; a 3-axis accelerometer, 3-axis gyroscope, and 3-axis magnetometer to track head movements and orientation; and dual 720p screens in the lenses to project digital overlays.

The screen-free approach resonates with the innovators working to bring AR to cars. At CES last month, auto suppliers Harman, Continental, and Visteon all unveiled AR systems that project information into the driver’s field of view, completely circumventing dash-based instruments. Harman uses digital overlays to convey speed and braking information of other vehicles on the road and insert prominent street-sign graphics into the field of view to help with navigation. Meanwhile, Continental, working with projection-technology specialist DigiLens, showed its system for projecting AR data onto the windshield itself, obviating the need for headgear. Visteon’s system demonstrated a sensor-driven head-up display that projects warnings of possible vehicular and pedestrian obstacles onto the windshield.

BMW has some bigger ideas. Last year it showed off its MINI Augmented Vision, a system that uses Osterhout glasses to pop navi directions, text messages, and x-ray vision right into the driver’s eyeballs. That last feature works by projecting camera feeds from outside the car into the glasses. This year, to mark its 100th birthday, BMW showed off its “Vision Next 100” concept car, including an AR-enhanced windshield that flags obstacles and provides data streams that completely replace the dashboard instrumentation.

Appy Pie Announces New Augmented Reality Features on its DIY App Building Platform for Small Businesses (Yahoo! Finance)

Appy Pie, the leading cloud-based mobile application builder software that allows small to medium-sized businesses to easily create mobile apps, announced today the launch of virtual reality (VR) and augmented reality (AR) features into its platform, allowing small and medium sized businesses (SMBs) to easily integrate immersive technology into their apps, further enhancing the user experience.

For the launch, Appy Pie is offering two features — image recognition and tracking, and panoramic and 360 video viewer.

  • The image recognition and tracking feature is able to recognize millions of everyday real-world objects, track their position and augment the display of the object.
  • The Panoramic and 360 Video Viewer feature allows Appy Pie users to provide their users with an amazing real-world experience in a virtual world. Users just need to take photos or capture videos of the location or product in panoramic or 360-degree view, and add it as a 3D texture in their app. They can add any type of videos, 3D models, images and entire HTML snippets to their app and virtually transport their app users to another world.

Artificial Intelligence/Machine Learning

Is a master algorithm the solution to our machine learning problems? (Tech Crunch)

But suppose there’s an algorithm that knows what we’re searching for on Google, what we’re buying on Amazon and what we’re listening to on Apple Music or watching on Netflix. It also knows about our recent statuses and shares on Facebook.

Machine learning has different schools of thought, and each looks at the problem from a different perspective. The symbolists focus more on philosophy, logic and psychology and view learning as the inverse of deduction. The connectionists focus on physics and neuroscience and believe in reverse engineering the brain. The evolutionaries, as the name suggests, draw their conclusions on the basis of genetics and evolutionary biology, whereas the Bayesians focus on statistics and probabilistic inference. And the analogizers depend on extrapolating similarity judgements by focusing more on psychology and mathematical optimization.

The connectionists
The connectionists focus on physics and neuroscience and believe in reverse engineering the brain. They rely on back-propagation, or the “backward propagation of errors” algorithm, to train artificial neural networks and get their results.
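
As a rough illustration of that idea (a made-up toy network, not anything from the article), a minimal two-layer network trained with back-propagation in NumPy might look like this:

import numpy as np

# Toy XOR-style data, purely for demonstration.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1)                    # forward pass
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)    # propagate the error backward
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out                # gradient-descent updates
    W1 -= 0.5 * X.T @ d_h

print(out.round(2))  # should approach [0, 1, 1, 0]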

The symbolists
The symbolists focus more on philosophy, logic and psychology and view learning as the inverse of deduction. The symbolists’ approach solves a problem by using pre-existing knowledge to fill in the gaps. Most expert systems use the symbolists’ approach, solving problems with If-Then rules.
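
A trivial If-Then sketch in Python (the rules below are invented, just to show the flavour):

# Hypothetical expert-system rules: fire the first If-part that matches the facts.
rules = [
    (lambda f: f.get("fever") and f.get("rash"), "suspect measles"),
    (lambda f: f.get("fever") and not f.get("rash"), "suspect flu"),
]

def diagnose(facts):
    for condition, conclusion in rules:
        if condition(facts):
            return conclusion
    return "no rule matched"

print(diagnose({"fever": True, "rash": False}))  # suspect flu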

The evolutionaries
The third school of thought, the evolutionaries, draw their conclusions on the basis of genetics and evolutionary biology.

The Bayesian school of thought
If you’ve been using email for 10 to 12 years now, you know how spam filters have improved. This is all because of the Bayesian school of thought in machine learning. The Bayesians focus on probabilistic inference and Bayes’ theorem to solve problems. The Bayesians start with a belief that they call a prior. Then they obtain some data and update the prior on the basis of that data; the outcome is called a posterior. The posterior is then fed more data and becomes the new prior, and this cycle repeats until we get the final answer. Most spam filters work on this basis.
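
A toy spam-filter update along those lines (all probabilities invented, just to show the prior-to-posterior cycle):

# Prior belief that an incoming message is spam.
prior = 0.5

# Hypothetical likelihoods: P(word | spam), P(word | ham).
likelihoods = {"winner": (0.8, 0.1), "meeting": (0.05, 0.4)}

def update(prior, word):
    p_spam, p_ham = likelihoods[word]
    # Bayes' theorem: posterior is proportional to likelihood times prior.
    return (p_spam * prior) / (p_spam * prior + p_ham * (1 - prior))

belief = prior
for word in ["winner", "winner", "meeting"]:
    belief = update(belief, word)   # the posterior becomes the next prior
    print(f"after '{word}': P(spam) = {belief:.2f}")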

The analogizers
The fifth tribe of machine learning, the analogizers, depend on extrapolating similarity judgements by focusing more on psychology and mathematical optimization. The analogizers follow the “Nearest Neighbor” principle in their research. The product recommendations on e-commerce sites like Amazon or movie ratings on Netflix are the most common examples of the analogizers’ approach.
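
A bare-bones nearest-neighbour recommendation sketch (the ratings matrix below is made up):

import numpy as np

# Hypothetical user-by-item rating matrix (0 = not rated).
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
])

def recommend(user, k=1):
    # Score users by cosine similarity, pick the k nearest neighbours,
    # and suggest the unseen item they rated highest.
    norms = np.linalg.norm(ratings, axis=1)
    sims = ratings @ ratings[user] / (norms * norms[user] + 1e-9)
    sims[user] = -1                      # exclude the user themself
    neighbours = np.argsort(sims)[-k:]
    scores = ratings[neighbours].mean(axis=0)
    scores[ratings[user] > 0] = -1       # only consider items the user hasn't rated
    return int(np.argmax(scores))

print(recommend(0))  # user 1 is the nearest neighbour, so item 2 is suggested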

All of the above schools solve different problems and present different solutions. The real challenge is to design an algorithm that will solve all the different problems these approaches try to solve — that single algorithm will be the “master algorithm.”

TensorFlow 1.0 unlocks machine learning on smartphones (InfoWorld)

Version 1.0 not only brings improvements to the framework’s gallery of machine learning functions, but also eases TensorFlow development for Python and Java users and improves debugging. A new compiler that optimizes TensorFlow computations opens the door to a new class of machine learning apps that can run on smartphone-grade hardware.

Since Python’s one of the biggest platforms for building and working with machine learning applications, it’s only fitting that TensorFlow 1.0 focuses on improving Python interactions. The TensorFlow Python API has been upgraded so that the syntax and metaphors TensorFlow uses are a better match for Python’s own, offering better consistency between the two.

The bad news is those changes are guaranteed to break existing Python applications.

TensorFlow is now available in a Docker image that’s compatible with Python 3, and for all Python users, TensorFlow can now be installed by pip, Python’s native package manager.

Perhaps the single biggest addition to TensorFlow 1.0 isn’t a language support feature or new algorithms. It’s an experimental compiler for the linear algebra used in TensorFlow computations, Accelerated Linear Algebra (XLA). It speeds up some of the math TensorFlow performs by producing machine code that can run on either CPUs or GPUs. Right now, XLA only supports Nvidia GPUs, but that’s in line with the general state of GPU support for machine learning applications.
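
As a quick sanity check of the 1.0-era Python API (the graph-and-session style and the XLA config flag shown here are TensorFlow 1.x idioms and may differ in later releases), something like this should run after a pip install of tensorflow:

import tensorflow as tf  # TensorFlow 1.x graph/session style

# Build a tiny computation graph.
a = tf.constant([[1.0, 2.0]])
b = tf.constant([[3.0], [4.0]])
product = tf.matmul(a, b)

# Optionally ask the experimental XLA compiler to JIT-compile the graph
# (assumed 1.x-era configuration; XLA was experimental at the time).
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

with tf.Session(config=config) as sess:
    print(sess.run(product))  # [[11.]]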

Skytree offers machine-learning-as-a-service in latest version (SD Times)

Skytree 16.0 brings with it a new machine-learning-as-a-service offering from the company. The on-site and the as-a-service packages seek to ease the process of creating an initial machine-learning model for use in businesses.

This version also adds transform snippets, which can be brought to bear on data. Snippets can handle the mundane data-transformation tasks needed to prepare information for use in a machine learning system or model. In version 16.0, snippets are part of a larger feature set.

Machine Learning Software Allows More Accurate Diagnoses for Heart Failure Patients (Trendin Tech)

Scientists from the MRC London Institute of Medical Sciences (LMS) have now developed AI-based machine learning software that can predict, with 80% accuracy, when patients suffering from a serious heart condition will die.

Cyril Amarchand first Indian law firm to adopt artificial intelligence (Bar and Bench)

Kira is software that uses artificial intelligence to identify, analyse and extract clauses and other information from contracts and other types of legal documents. It includes integrated machine learning models for a range of transaction requirements, across the firm’s practice areas. The tool can also identify different clauses across a large volume of legal contracts with a high degree of accuracy.

Machine learning platform helps IT workers find experts at their firm (SearchFinancialApplications)

Collokia Acumen uses machine learning to build profiles of employees based on their visits to technology-related internet sites, technical articles they’ve read on the web or software they’ve written. It then recommends experts to co-workers, said Pablo Brenner, CEO and co-founder of Collokia, which is based in Montevideo, Uruguay. The machine learning platform analyzes the complexity of the technology sites visited by an employee, the frequency and length of the visits, and how much interaction occurs.

If an employee is searching to learn something about big data, for example, the machine learning platform will automatically display workers who are researching the same topic and will recommend best search results, he said.

Could Artificial Intelligence Help Those With Asperger’s, Social Anxiety? Smart Band Decodes Anger, Other Emotions (Medical Daily)

A new device created at MIT using Artificial Intelligence can detect emotion, which could help people clear up confusion or untangle mixed signals, particularly in high-stakes conversations such as job interviews or salary negotiations. It could also help people, such as those with Asperger’s or social anxiety, who have trouble interpreting facial expressions or social cues.

Following the conversations, each of which lasted several minutes, the duo trained two algorithms. The first denoted whether an overall conversation was happy or sad, while the second classified the exchanges as positive, negative or neutral in five-second increments.
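
The article doesn’t publish the model details, but the general recipe of labelling a conversation in fixed five-second windows can be sketched roughly like this (the features, labels, and classifier below are all stand-ins):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Dummy data: one feature vector per five-second window of past conversations
# (e.g. averaged audio/physiological measurements) with invented labels
# 0 = negative, 1 = neutral, 2 = positive.
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 8))
labels = rng.integers(0, 3, size=200)

clf = LogisticRegression(max_iter=1000).fit(windows, labels)

# Score each five-second segment of a new conversation.
new_conversation = rng.normal(size=(12, 8))
print(clf.predict(new_conversation))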

5 Free Courses for Getting Started in Artificial Intelligence (KDnuggets)

1. Intro to AI (UC Berkeley)

This could be considered the premier, pioneering, online-oriented, open-access university-level AI course in existence.

2. Artificial Intelligence: Principles and Techniques (Stanford)

Drawing from some of the most esteemed texts in the area (Russell & Norvig; Koller & Friedman; Hastie, Tibshirani & Friedman; Sutton & Barto), this collection of materials includes notes, slides, assignments, exams, and projects (including solutions).

3. Reinforcement Learning

The material is concise, consisting of both lecture videos and corresponding slides, the combination of which is packed with information.

4. Deep Reinforcement Learning (UC Berkeley)

Taught by Sergey Levine, John Schulman, and Chelsea Finn, this course is another take on reinforcement learning.

5. Deep Learning for Self-Driving Cars (MIT)

As deep learning continues its transition from research topic to real-world tool, it’s interesting to see these specialized deep learning application courses pop up at distinguished universities all around.

Deep Learning and Machine Learning Killer Tools, Libraries, and Apps (DZone)

TensorFlow has released a version that includes an experimental Java API (using JNI).

Yet another Python library for neural networks; here is a cool example of sentiment analysis.

For analytics, you have to try Airbnb’s Superset (formerly Caravel) with PyHive for SparkSQL. Yet another cool Python tool.

# Install build dependencies and set up an isolated Python environment.
sudo yum upgrade python-setuptools
sudo yum install gcc libffi-devel python-devel python-pip python-wheel openssl-devel libsasl2-devel openldap-devel
pip install virtualenv
virtualenv venv
. ./venv/bin/activate
pip install --upgrade setuptools pip

# Install Superset and its database/Hive connectors.
pip install mysqlclient
pip install pyhive
pip install superset

# Create an admin user (sample output shown below).
fabmanager create-admin --app superset
2017-01-27 18:15:37,864:INFO:flask_appbuilder.security.sqla.manager:Created Permission View: menu access on Query Search
2017-01-27 18:15:37,885:INFO:flask_appbuilder.security.sqla.manager:Added Permission menu access on Query Search to role Admin
Recognized Database Authentications.
2017-01-27 18:15:37,907:INFO:flask_appbuilder.security.sqla.manager:Added user admin
Admin User admin created.

# Initialize the database, load the bundled examples, and start the server.
superset db upgrade
superset load_examples
superset init
superset runserver -p 8088

# Then browse to:
http://localhost:8088/

Some very cool Deep Learning tutorials from Google: Learn TensorFlow and Deep Learning without a PhD and the TensorFlow for MNIST Codelab.

IBM Watson’s Artificial Intelligence Will Assist H&R Block Tax Preparers (Newsfactor)

H&R Block and IBM fed the Watson computing system about 600 million tax return data points and “taught Watson the language of tax,” said Block spokesman Gene King.

The two companies said the collaboration is the first time that Watson’s artificial intelligence has been applied to tax preparation.

Computer Vision/Machine Vision

Fast, accurate test for malaria could be a game-changer (Israel21c)

Sight’s first product, the Parasight malaria detection device, completes malaria tests in less than four minutes with more than 99% accuracy for sensitivity and specificity. It also identifies the malaria species and counts the exact number of parasites per microliter. Results are automatically uploaded to the lab information system.

Parasight was designed for central labs, while a portable point-of-care version suitable for smaller facilities will soon be introduced. The new version will also be capable of performing a complete blood count (CBC) — a first for a computer-vision platform.

Apple patents detail augmented reality device with advanced object recognition, POI labeling (AppleInsider)

At its core, the filing deals more with power-efficient object recognition than it does with visualization of AR data overlays. The former is perhaps the greatest technical barrier to wide adoption of AR. In particular, existing image matching solutions, like those employed in facial recognition security systems, consume a large amount of power and thus see limited real-world use.

To achieve power specifications suitable for a mobile device, the invention maintains a default low-power scanning mode for the majority of its operating uptime. High-power modes are triggered in short bursts, for example when downloading and displaying AR content or storing new computer vision models in system memory.
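
Purely to illustrate that duty-cycling idea (this is not Apple’s implementation; the detector and thresholds below are invented), a loop that stays in a cheap low-power scan and only bursts into a high-power recognition step looks roughly like:

import random
import time

def cheap_scan():
    # Stand-in for a low-power detector that merely flags "something interesting".
    return random.random() > 0.9

def expensive_recognize():
    # Stand-in for the high-power burst: full matching, fetching AR content, etc.
    return "recognized object + AR overlay"

for _ in range(50):                   # pretend camera frames
    if cheap_scan():                  # low-power mode most of the time
        print(expensive_recognize())  # short high-power burst
    time.sleep(0.01)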

Apple also built depth detection functionality into its latest flagship iPhone with Portrait Mode, a feature that uses complex computer vision algorithms and depth mapping to create a series of image layers. The camera mode automatically sharpens certain layers, specifically those containing a photo’s subject, and selectively unfocuses others using a custom blur technique.

To compensate for shortcomings in computer vision systems, Apple proposes the use of geometry models, depth-sensing, positioning data and other advanced technologies not commonly seen in current AR solutions.

Ditto Labs Partners with Teradata to Deliver AI-Based Vision-as-a-Service to Teradata Customers (Digital Journal)

Ditto Labs, Inc., an industry leader in vision-as-a-service products, announced today that it has entered into a partnership with Teradata Corporation to deliver cloud-based artificial intelligence solutions.

Facebook’s image recognition can now tell what you’re wearing (Mashable)

Advancements in Facebook’s computer vision tech and the introduction of new tools will let users make much more targeted image searches. For instance, when you search your old photos, you’ll be able to look for images where you’re wearing a black shirt or red dress, or where the people in the image are dancing.

Crediting “a lot” of teams for the advancements, Candela wrote that Facebook’s general-purpose AI platform, FBLearner Flow, is now running 1.2 million AI experiments a month — six times more than it was just a year ago.

About the author

Allen Taylor

An award-winning journalist and former newspaper editor, I currently work as a freelance writer/editor through Taylored Content. In addition to editing VisionAR, I edit the daily news digest at Lending-Times. I also serve the FinTech and AR/AI industries with authoritative content in the form of white papers, case studies, blog posts, and other content designed to position innovators as experts in their niches. Also, a published poet and fiction writer.
