Daily News Digest Featured News

AR turns school gym into interactive playground, Google outsources machine learning search algorithm, computer vision interprets foreign road signs


News Summary

Augmented Reality

New Snapchat Lenses To Bring Augmented Reality For Real-World Object Recognition (MobiPicker)

Snapchat’s Lenses feature will soon receive a complete overhaul. Snap Inc., the parent company of Snapchat, is reportedly working on expanding its Lenses feature with augmented reality support for enabling real-world object recognition.

The advanced Lenses, which are smarter than those in Snapchat’s existing library, will be able to interact with real-world objects, identify environmental elements, and overlay augmented reality animations onto scenes.

As for advertisers, the new feature’s ability to recognize something as specific as a brand name will open new revenue streams and help build better rapport with users.

Wikitude SDK 6 Embarks into Mixed Reality with Instant Tracking (Next Reality)

Wikitude SDK 6, the latest incarnation of the long-running AR development tools by Wikitude GmbH, brings with it “Instant Tracking,” a form of simultaneous localization and mapping (more commonly known as SLAM), which brings it one step closer to mixed reality.

SLAM allows developers to create and place 3D objects in the world without the need for visual markers to guide the device in tracking them. Instead, it uses sensors on the device to map the area and sense the movements of the device while mixing in GPS and Wi-Fi for location information.
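To make the marker-free idea concrete, here is a minimal, hypothetical sketch (plain NumPy, not the Wikitude API): a SLAM-style tracker estimates the device’s pose in world coordinates, and composing that pose with an object’s position relative to the camera anchors the object at a fixed spot in the world, no marker required.

```python
import numpy as np

def make_pose(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose

# Device pose in world coordinates, as estimated by the SLAM tracker
# (in a real SDK this comes from fused camera + IMU data, not hard-coded values).
device_in_world = make_pose(np.eye(3), np.array([0.0, 1.5, 0.0]))

# Where the user "places" the object, expressed relative to the camera:
# two metres straight ahead of the device.
object_in_device = make_pose(np.eye(3), np.array([0.0, 0.0, -2.0]))

# Composing the two transforms anchors the object in world coordinates,
# so it stays put as the device (and its pose estimate) moves.
object_in_world = device_in_world @ object_in_device
print(object_in_world[:3, 3])  # -> [ 0.   1.5  -2. ]
```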

Siemens adopts augmented reality in calendar app (IT New Africa)

Siemens has incorporated augmented reality technology into a mobile application, allowing users to experience augmented reality from their smartphones and tablets. The Siemens CalendAR application will be updated monthly to coincide with the corporate communications roadmap for 2017.

Once the app has been downloaded, users can point their smart device at each calendar month’s image, which comes to life via 3D animated wireframes. There is also video and scrollable infographic-style content linked to each month for users to engage with.

Crystal Content launch augmented reality direct mailers (iGamingBusiness)

Instead of relying on regular direct mailers for its acquisition and retention campaigns, Crystal Content is now adding augmented reality as an additional feature to boost conversions.

Fashion Show Uses Augmented Reality To Captivate Audience [Infographic] (Forbes)

There are multiple types of augmented reality, most notably marker-based and GPS-based. This show used a marker-based system, in which a device has to use a camera to recognize some kind of physical marker in the real world. In this case, two 12-foot banners on the side of the runway were used. Once the app recognizes the markers, it overlays a digital image or video onto those markers.

Through their devices, the audience could see the digital effects added to the physical production. Colorful cubes shifting across the stage and flying pallets gave additional information about the models and clothes. These 3D elements were synchronized to follow each model down the runway.
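As a rough illustration of the marker-based approach (not the fashion show’s actual pipeline), the sketch below uses OpenCV feature matching to recognize a reference image of a marker in a camera frame and warp digital content onto it. The image file names are placeholders.

```python
import cv2
import numpy as np

# Reference image of the physical marker (e.g. a runway banner) and the
# digital content to overlay on it. File names are hypothetical.
marker = cv2.imread("banner_reference.jpg", cv2.IMREAD_GRAYSCALE)
overlay = cv2.imread("digital_overlay.png")
frame = cv2.imread("camera_frame.jpg")

# Detect and match features between the reference marker and the live frame.
orb = cv2.ORB_create(2000)
kp_m, des_m = orb.detectAndCompute(marker, None)
kp_f, des_f = orb.detectAndCompute(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_m, des_f)
matches = sorted(matches, key=lambda m: m.distance)[:100]

# Estimate the homography that maps marker coordinates into the frame.
src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the digital overlay onto the marker's location in the frame.
h, w = frame.shape[:2]
warped = cv2.warpPerspective(
    cv2.resize(overlay, (marker.shape[1], marker.shape[0])), H, (w, h))
mask = cv2.warpPerspective(
    np.full(marker.shape[:2], 255, dtype=np.uint8), H, (w, h))
frame[mask > 0] = warped[mask > 0]
cv2.imwrite("augmented_frame.jpg", frame)
```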


The best tools to get started with Augmented Reality development (diGit)

Vuforia is perhaps one of the most complete and feature-rich AR development SDKs out there targeted at mobile devices. Like most AR tech, it uses computer vision to comprehend the scene in front of the device camera and identify surfaces.

The problem with Vuforia is the absence of a complete framework manual. The disconnected short tips and instructions are not sufficient for beginners to find their way around it.

Aurasma Studio is a web-based augmented reality development tool that requires no installation or programming. In fact, you can build an ‘Aura’, which is what they call a single interactive experience, within 45 seconds.

Somewhere in between those two is ARToolkit, which has the added advantage of being open source. This means that you get free access to the development library. Among other features, ARToolkit supports 2D recognition in augmented reality and mapping additional elements via OpenGL.

While Blippar is not primarily a development platform, it does offer one of the easiest ways for a brand to use AR. We’ve found that the Blippar app itself has one of the best image recognition features amongst the available apps, and that, combined with an AR experience you can build yourself, makes for a really strong contender.

Augmented Reality Turns School Gym Into An Interactive Playground (Kotaku)

The Interactive Gym, the latest project from the Canadian tech company SAGA, takes the traditional idea of gym class and adds a layer of augmented reality, turning the walls into giant games of skee-ball.

Images of floating shapes are projected onto the wall while the kids launch rubber balls at them trying to shatter the images.

Augmented Reality Tool for the Situational Awareness Improvement of UAV Operators (MDPI)

Common ground control stations (GCSs) present information on separate screens: one shows the video stream while the other displays the mission plan and information coming from other sensors. To avoid the burden of fusing information displayed across the two screens, an Augmented Reality (AR) tool is proposed in this paper. The AR system has two functionalities for Medium-Altitude Long-Endurance (MALE) UAVs: route orientation and target identification. Route orientation allows the operator to identify the upcoming waypoints and the path that the UAV is going to follow. Target identification allows fast target localization, even in the presence of occlusions. The AR tool is implemented following North Atlantic Treaty Organization (NATO) standards so that it can be used in different GCSs.
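The route-orientation overlay boils down to projecting known waypoint coordinates into the camera image so symbols can be drawn on the video feed. The paper’s implementation follows NATO standards; the snippet below is only a toy illustration of the underlying pinhole-projection step, with made-up intrinsics and orientation.

```python
import numpy as np

def project_waypoint(waypoint_ned, camera_position_ned, R_world_to_cam, K):
    """Project a waypoint (north-east-down metres) into pixel coordinates.

    R_world_to_cam rotates world axes into the camera frame and K is the
    3x3 intrinsic matrix; both would come from the aircraft's navigation
    solution and gimbal calibration in a real system.
    """
    p_cam = R_world_to_cam @ (np.asarray(waypoint_ned) - np.asarray(camera_position_ned))
    if p_cam[2] <= 0:            # behind the camera, nothing to draw
        return None
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]      # pixel (u, v) where the waypoint symbol is drawn

# Toy numbers: camera looking straight along the world x-axis.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.array([[0.0, 1.0, 0.0],     # world y -> camera x
              [0.0, 0.0, 1.0],     # world z -> camera y
              [1.0, 0.0, 0.0]])    # world x -> camera z (optical axis)
print(project_waypoint([500.0, 20.0, -30.0], [0.0, 0.0, -100.0], R, K))
```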

Artificial Intelligence/Machine Learning

BlueConic Releases Machine Learning Powered Recommendations Engine (CMS Wire)

BlueConic added a new feature to its Customer Data Platform (CDP) which leverages machine learning and proprietary algorithms to produce content and product recommendations for individual prospects.

Machine Learning Method Accurately Predicts Metallic Defects (Newswise)

For the first time, researchers at the Lawrence Berkeley National Laboratory (Berkeley Lab) have built and trained machine learning algorithms to predict defect behavior in certain intermetallic compounds with high accuracy.

Traditionally, researchers have used a computational quantum mechanical method known as density functional calculations to predict what kinds of defects can be formed in a given structure and how they affect the material’s properties. Although effective, this approach is very computationally expensive when applied to point defects, limiting the scope of such investigations.

To overcome these computing challenges, Medasani and his colleagues developed and trained machine learning algorithms to predict point defects in intermetallic compounds, focusing on the widely observed B2 crystal structure.

Because they had only a small data sample to work from, Medasani and his team used a tree-ensemble approach called gradient boosting to develop their machine learning method to a high accuracy. In this approach, additional machine learning models are built successively and combined with prior models to minimize the difference between the models’ predictions and the results from density functional calculations. The researchers repeated the process until they achieved a high level of accuracy in their predictions.
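For readers unfamiliar with gradient boosting, here is a minimal scikit-learn sketch of the general idea (not the Berkeley Lab team’s actual code; the descriptors and targets are random stand-ins): successive shallow trees are fit to the residual errors of the ensemble built so far.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical training set: each row describes a B2 compound with simple
# descriptors (e.g. electronegativity difference, atomic radii, valence counts),
# and the target is a defect formation energy from density functional theory.
X = np.random.rand(120, 6)          # stand-in for real composition descriptors
y = np.random.rand(120)             # stand-in for DFT-computed defect energies

# Gradient boosting builds shallow trees one after another, each new tree
# fitted to the residual error of the ensemble so far -- a common choice
# for small tabular datasets like this one.
model = GradientBoostingRegressor(
    n_estimators=500, learning_rate=0.05, max_depth=3, subsample=0.8)
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print("MAE per fold:", -scores)
```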

How AI is changing the way we assess vehicle repair (VentureBeat)

Using AI technology to automate a visual task, such as inspecting damage to a car, is a nearly instant way to provide insurance customers with crucial information about the extent of the damage.

After a machine vision algorithm assesses the auto damage, the customer can decide whether to get a repair done immediately or wait. For those who need or want to move forward with repair, AI could assist them through to the time they pick their car up from the auto body shop. Down the road, with the help of another machine learning algorithm, the driver could potentially receive a list of nearby repair shops that may be particularly experienced, for example, in servicing a specific type of vehicle and that have positive online reviews.

Google uses a new category of amazing search algorithm invented by this startup (TechWorm)

Explaining how Seal works, CEO Ulf Zetterberg tells BI that the company stores all of a client’s contracts in one central place and then uses machine learning and AI to search for, understand, and even interpret the terminology so that it can be understood by ordinary readers.

Machine Learning Your Way to Smarter API Error Responses (InfoQ)

Steven Cooper discusses using machine learning to understand malformed API requests to not only respond with a best fit response, but capture the user errors for future responses.

Watch the video here.
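One way to picture the approach (a hedged sketch, not Cooper’s implementation) is to treat previously captured malformed requests as a corpus and match each new bad request against it to pick a best-fit response; the requests, responses, and helper names below are all hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Previously captured malformed requests paired with the responses that
# eventually resolved them (hypothetical examples).
bad_requests = [
    'GET /v1/users?id=abc',
    'POST /v1/orders {"qty": "two"}',
    'GET /v1/users/42/ordres',
]
best_fit_responses = [
    '400: "id" must be an integer',
    '400: "qty" must be a number',
    '404: did you mean /v1/users/42/orders?',
]

# Vectorize request text and index it so a new malformed request can be
# matched to the closest error we have already seen and answered.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
corpus = vectorizer.fit_transform(bad_requests)
index = NearestNeighbors(n_neighbors=1, metric="cosine").fit(corpus)

def suggest_response(raw_request: str) -> str:
    _, idx = index.kneighbors(vectorizer.transform([raw_request]))
    return best_fit_responses[idx[0][0]]

print(suggest_response('GET /v1/users?id=forty-two'))
# In a production system, each new malformed request and its resolution
# would be appended to the corpus so future responses keep improving.
```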

Scientists develop wearable AI system that detects the tone of conversations (The Star)

A team of researchers in the USA has developed an artificial intelligence system that can detect the tone of conversations and analyse speech. This portable application could be useful for people with anxiety disorders or Asperger’s.

A new AI system emulates the human ability to identify emotions and the tone of speech to establish whether conversations are sad, happy or neutral.

The scientists tested the system on participants wearing a Samsung Simband wearable.
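Systems like this typically extract acoustic features from the speech signal and classify them; the following is only a rough sketch of that general recipe (librosa features plus a simple classifier, with made-up file names and labels), not the researchers’ actual pipeline.

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def tone_features(wav_path: str) -> np.ndarray:
    """Summarize a speech clip with a few spectral and energy statistics."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    rms = librosa.feature.rms(y=y)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [rms.mean()], [rms.std()]])

# Hypothetical labelled clips: 0 = neutral, 1 = happy, 2 = sad.
train_paths = ["clip_01.wav", "clip_02.wav", "clip_03.wav"]
train_labels = [0, 1, 2]

X = np.stack([tone_features(p) for p in train_paths])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)
print(clf.predict([tone_features("new_clip.wav")]))
```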

ZeroStack uses machine learning to create self-driving clouds (NetworkWorld)

This week, ZeroStack announced the first-ever private cloud stack managed by an artificial intelligence (AI) “learning engine,” delivering a true self-driving environment. ZeroStack’s solution involves on-premises hardware and software that is managed by a cloud-based, self-service portal. This “cloud-managed” platform has enabled the company to collect, monitor, analyze and model over 1 million objects over the past 18 months. ZeroStack has taken this experience and all that data and used it to build its own AI known as Z-Brain.

The solution will continue to collect telemetry data and leverage machine learning to provide new insights that can help customers have a better-running private cloud.

The Z-Brain provides self-driving capabilities in the following areas:

  • Capacity planning. The AI does three types of capacity planning: infrastructure utilization, project-based capacity planning and an infrastructure advisor to help with future needs.
  • Zero touch upgrades. No intervention from the IT organization is needed because the Z-Brain handles the upgrades of all the software modules in the ZeroStack solution.
  • Efficiency optimization. Virtual machines can be “auto-sized” to specific workloads, preventing organizations from using more resources than necessary (a rough illustration of this idea follows after this list).
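ZeroStack has not published how Z-Brain sizes VMs; as a purely hypothetical illustration of the “auto-sizing” idea, a right-sizing rule might look at utilization telemetry and size each VM to its high-percentile demand plus some headroom.

```python
import math

def rightsize_vcpus(allocated_vcpus: int, cpu_utilization_samples: list,
                    target_headroom: float = 0.25) -> int:
    """Suggest a vCPU count from observed utilization (fractions of allocation).

    A purely illustrative heuristic: size to the 95th-percentile demand plus
    headroom, never below one vCPU.
    """
    samples = sorted(cpu_utilization_samples)
    p95 = samples[int(0.95 * (len(samples) - 1))]
    demanded_vcpus = p95 * allocated_vcpus
    return max(1, math.ceil(demanded_vcpus * (1 + target_headroom)))

# A VM with 8 vCPUs that rarely exceeds ~30% utilization gets shrunk.
telemetry = [0.12, 0.18, 0.22, 0.25, 0.30, 0.28, 0.15, 0.10, 0.27, 0.31]
print(rightsize_vcpus(8, telemetry))   # -> 3
```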

OmniEarth Applies Machine Learning to Multiple Satellite Sources for Improved Soil Moisture Knowledge (BusinessWire)

Under the DARPA contract, OmniEarth aims to develop a precision soil moisture mapping model by fusing information from a number of sources, notably commercially-available synthetic aperture radar (SAR) and multispectral imagery, to enhance the military’s ability to assess, manage and predict soil conditions for tactical decision making. OmniEarth has applied similar techniques in its commercial water product by fusing information from multiple satellite and aerial sources with GIS, parcel data, and customer-specific information to create water budgets to the parcel level with ≥ 95% accuracy.

Forget lessons, these smart skis are loaded with artificial intelligence (Mashable)

The ski manufacturer teamed up with sports tech company PIQ to put an AI-powered computer — complete with an LED display — right on a pair of skis.

Computer Vision/Machine Vision

Cameras Can Speed Cities to Improving Pedestrian Safety (Next City)

Sayed thinks there’s a better way. For the last 10 years, he’s been developing a system that uses video cameras to monitor intersections for near misses between moving objects, and computers to automatically track the results.

The system, called, somewhat inelegantly, “computer vision and automated safety analysis,” uses off-the-shelf cameras, or cameras that are already installed in an area, to film a given intersection. Computer algorithms can track anything that moves through the intersection — cars, bikes, people — and can figure out quite a bit about each one. The computer knows whether a moving blip is a person or a car, how fast it is going, and how close it came to hitting another road user. The computer can even tell, with about 80 percent accuracy, whether a person is distracted by their phone while walking.
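The details of Sayed’s system aside, the core “near miss” computation can be sketched simply: given two tracked trajectories, estimate the time-to-collision whenever the road users are approaching each other and flag any conflict below a threshold. The 1.5-second threshold and frame rate below are assumptions for illustration, not the system’s actual settings.

```python
import numpy as np

FPS = 30              # assumed video frame rate
NEAR_MISS_TTC = 1.5   # seconds; assumed threshold for flagging a conflict

def min_time_to_collision(track_a, track_b, fps=FPS):
    """Estimate the smallest time-to-collision between two tracked road users.

    Each track is an (N, 2) array of ground-plane positions in metres, one
    row per frame. TTC here is simply separation divided by closing speed
    whenever the two are approaching each other.
    """
    a, b = np.asarray(track_a, float), np.asarray(track_b, float)
    gaps = np.linalg.norm(a - b, axis=1)
    closing_speed = -np.diff(gaps) * fps            # metres/second, >0 when approaching
    with np.errstate(divide="ignore", invalid="ignore"):
        ttc = np.where(closing_speed > 0, gaps[1:] / closing_speed, np.inf)
    return ttc.min()

# Toy trajectories: a pedestrian crossing while a car approaches.
car = np.column_stack([np.linspace(0, 30, 60), np.zeros(60)])
pedestrian = np.column_stack([np.full(60, 20.0), np.linspace(5, 0, 60)])
ttc = min_time_to_collision(car, pedestrian)
print(f"minimum TTC: {ttc:.2f} s", "-> near miss" if ttc < NEAR_MISS_TTC else "")
```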


New computer vision app helps travelers interpret foreign road signs on the fly (Digital Trends)

“Mapillary is a collaborative street-level imagery platform powered by computer vision,” the company’s CEO and co-founder told Digital Trends. “The mission is to understand the world’s places through images and make this understanding available to all. Mapillary provides apps and tools for everyone, enabling individuals, businesses, and governments worldwide to contribute with street-level imagery. All images are connected in 3D and objects recognized in images are automatically labeled and turned into geospatial data. Mapillary has a viewer, APIs, and developer tools for easy use of this imagery and geospatial data in a wide range of applications.”

A vision for 3D precision: this robot arm ‘prints’ giant structures using AI (Wired)

Created by London firm Ai Build and exhibited at the GPU Technology Conference in Amsterdam, the five-metre-high structure was constructed by a robot-armed printer guided by AI-powered computer vision.

The combination of robotic brawn and artificial eyes let the printer produce this intricate pattern without sacrificing speed. Its 48 pieces were printed in just over two weeks, rather than the months it would have taken a typical 3D printer.

About the author

Allen Taylor

An award-winning journalist and former newspaper editor, I currently work as a freelance writer/editor through Taylored Content. In addition to editing VisionAR, I edit the daily news digest at Lending-Times. I also serve the FinTech and AR/AI industries with authoritative content in the form of white papers, case studies, blog posts, and other content designed to position innovators as experts in their niches. I am also a published poet and fiction writer.
