- Today’s top AR news: AR turns school gym into interactive playground.
- Today’s top AI/machine learning news: Google outsources machine learning search algorithm to Seal.
- Today’s top computer vision/machine vision news: CV app interprets foreign road signs.
- AR turns school gym into interactive playground.
- New Snapchat lenses bring AR to real-world object recognition.
- Wikitude SDK 6 uses instant tracking for mixed reality.
- Siemens adopts AR in calendar app.
- Crystal Content launch AR direct mailers.
- Fashion show uses AR to captivate audience.
- Best tools for getting started in AR development.
- AR tool for situational awareness improvement of UAV operators.
- Google outsources machine learning search algorithm to Seal.
- BlueConic releases ML-powered recommendations engine.
- Machine learning method predicts metallic defects.
- How AI is changing how we assess vehicle repair.
- Machine learning your way to smarter API error responses.
- Wearable AI detects tone of conversations.
- ZeroStack uses ML to create self-driving clouds.
- ML used for soil moisture knowledge improvement through multiple satellite sources.
- Smart skis loaded with artificial intelligence.
- Augmented Reality
- New Snapchat Lenses To Bring Augmented Reality For Real-World Object Recognition (MobiPicker)
- Wikitude SDK 6 Embarks into Mixed Reality with Instant Tracking (Next Reality)
- Siemens adopts augmented reality in calendar app (IT New Africa)
- Crystal Content launch augmented reality direct mailers (iGamingBusiness)
- Fashion Show Uses Augmented Reality To Captivate Audience [Infographic] (Forbes)
- The best tools to get started with Augmented Reality development (diGit)
- Augmented Reality Turns School Gym Into An Interactive Playground (Kotaku)
- Augmented Reality Tool for the Situational Awareness Improvement of UAV Operators (MDPI)
- Artificial Intelligence/Machine Learning
- BlueConic Releases Machine Learning Powered Recommendations Engine (CMS Wire)
- Machine Learning Method Accurately Predicts Metallic Defects (Newswise)
- How AI is changing the way we assess vehicle repair (VentureBeat)
- Google uses a new category of amazing search algorithm invented by this startup (TechWorm)
- Machine Learning Your Way to Smarter API Error Responses (InfoQ)
- Scientists develop wearable AI system that detects the tone of conversations (The Star)
- ZeroStack uses machine learning to create self-driving clouds (NetworkWorld)
- OmniEarth Applies Machine Learning to Multiple Satellite Sources for Improved Soil Moisture Knowledge (BusinessWire)
- Forget lessons, these smart skis are loaded with artificial intelligence (Mashable)
- Computer Vision/Machine Vision
- Cameras Can Speed Cities to Improving Pedestrian Safety (Next City)
- New computer vision app helps travelers interpret foreign road signs on the fly (Digital Trends)
- A vision for 3D precision: this robot arm ‘prints’ giant structures using AI (Wired)
Snapchat’s Lenses feature will soon receive a complete overhaul. Snap Inc., the parent company of Snapchat, is reportedly working on expanding its Lenses feature with augmented reality support for enabling real-world object recognition.
The advanced Lenses, which are smarter than the existing Snapchat library’s Lenses, will be able to interact with real-world objects, identify environmental elements and overlay augmented reality animations onto scenes.
As for advertisers, the new feature's ability to recognize something as specific as a brand name will open new revenue streams and help build better rapport with users.
Wikitude SDK 6, the latest incarnation of the long-running AR development tools by Wikitude GmbH, brings with it “Instant Tracking,” a form of simultaneous localization and mapping (more commonly known as SLAM), which brings it one step closer to mixed reality.
SLAM allows developers to create and place 3D objects in the world without the need for visual markers to guide the device's tracking. Instead, it uses the device's sensors to map the area and sense the device's movements, mixing in GPS and Wi-Fi for location information.
Siemens adopts augmented reality in calendar app (IT New Africa)
Siemens has integrated augmented reality technology into a mobile application, allowing users to experience augmented reality from their smartphones and tablets. The Siemens CalendAR application will be updated monthly to coincide with the corporate communications roadmap for 2017.
Once the app has been downloaded, users can point their smart device at each calendar month's image, which comes to life via 3D animated wireframes. There is also video and scrollable infographic-style content linked to each month for users to engage with.
Crystal Content launch augmented reality direct mailers (iGamingBusiness)
Instead of relying on regular direct mailers for its acquisition and retention campaigns, Crystal Content is now adding augmented reality as an additional feature to boost conversions.
Fashion Show Uses Augmented Reality To Captivate Audience [Infographic] (Forbes)
There are multiple types of augmented reality, most notably marker-based and GPS-based. This show used a marker-based system, in which a device's camera has to recognize some kind of physical marker in the real world. In this case, two 12-foot banners on the sides of the runway were used. Once the app recognizes the markers, it overlays a digital image or video onto them.
Through their devices, the audience could see the digital effects added to the physical production. Colorful cubes shifting across the stage and flying pallets gave additional information about the models and clothes. These 3D elements were synchronized to follow each model down the runway.
One of the most complete SDKs for the platform, Vuforia is perhaps the most feature-rich AR development tool out there targeted at mobile devices. Like most AR tech, it uses computer vision to comprehend the scene in front of the device camera and identify surfaces.
The problem with Vuforia is the absence of a complete framework manual. The disconnected short tips and instructions are not sufficient for beginners to find their way around it.
Aurasma Studio is a web-based augmented reality development tool that requires no installation or programming. In fact, you can implement an 'Aura' (their name for a single interactive experience) within 45 seconds.
Somewhere in between those two is ARToolkit, which has the added advantage of being open source. This means that you get free access to the development library. Among other features, ARToolkit supports 2D recognition in augmented reality and mapping additional elements via OpenGL.
While Blippar is not primarily a development platform, it does offer one of the easiest ways for a brand to use AR. We've found that the Blippar app itself has one of the best image recognition features among the available apps, and that, combined with an AR experience you can build yourself, makes for a really strong contender.
The Interactive Gym, the latest project from the Canadian tech company SAGA, takes the traditional idea of gym class and adds a layer of augmented reality, turning the walls into giant games of skee-ball.
Images of floating shapes are projected onto the wall while the kids launch rubber balls at them trying to shatter the images.
Common ground control stations (GCSs) provide information on separate screens: one presents the video stream while the other displays information about the mission plan and data coming from other sensors. To avoid the burden of fusing information displayed across the two screens, an Augmented Reality (AR) tool is proposed in this paper. The AR system has two functionalities for Medium-Altitude Long-Endurance (MALE) UAVs: route orientation and target identification. Route orientation allows the operator to identify the upcoming waypoints and the path that the UAV is going to follow. Target identification allows fast target localization, even in the presence of occlusions. The AR tool is implemented following North Atlantic Treaty Organization (NATO) standards so that it can be used in different GCSs.
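The route-orientation overlay described above ultimately reduces to projecting 3D waypoints into the operator's video frame. As a rough, hypothetical sketch (a pinhole camera model with made-up focal length and image center; the paper's NATO-standard implementation is not reproduced here):

```python
import math

def project_waypoint(waypoint, camera_pos, yaw, f=800.0, cx=640.0, cy=360.0):
    """Project a world-space waypoint (x, y, z) into pixel coordinates for
    a camera at camera_pos with the given yaw (radians), using a simple
    pinhole model. Returns None if the point is behind the camera."""
    # Translate the waypoint into the camera frame
    dx = waypoint[0] - camera_pos[0]
    dy = waypoint[1] - camera_pos[1]
    dz = waypoint[2] - camera_pos[2]
    # Rotate about the vertical axis so the camera looks along +x;
    # with this convention, world +y at yaw 0 appears to the right
    cos_y, sin_y = math.cos(-yaw), math.sin(-yaw)
    fwd = cos_y * dx - sin_y * dy    # depth along the optical axis
    right = sin_y * dx + cos_y * dy  # lateral offset in the image
    down = -dz                       # image y grows downward
    if fwd <= 0:
        return None                  # behind the camera: do not draw
    u = cx + f * right / fwd
    v = cy + f * down / fwd
    return (u, v)
```

A waypoint straight ahead at the same altitude lands at the image center; waypoints that fail the depth check are simply skipped by the overlay. A production tool would also handle pitch/roll and lens distortion.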
Artificial Intelligence/Machine Learning
BlueConic added a new feature to its Customer Data Platform (CDP) which leverages machine learning and proprietary algorithms to produce content and product recommendations for individual prospects.
For the first time, researchers at the Lawrence Berkeley National Laboratory (Berkeley Lab) have built and trained machine learning algorithms to predict defect behavior in certain intermetallic compounds with high accuracy.
Traditionally, researchers have used a computational quantum mechanical method known as density functional calculations to predict what kinds of defects can be formed in a given structure and how they affect the material's properties. Although effective, this approach is very computationally expensive to execute for point defects, limiting the scope of such investigations.
To overcome these computing challenges, Medasani and his colleagues developed and trained machine learning algorithms to predict point defects in intermetallic compounds, focusing on the widely observed B2 crystal structure.
Because they had a small data sample to work from, Medasani and his team used an ensemble approach called gradient boosting to develop their machine learning method to a high accuracy. In this approach, additional machine learning models are built successively and combined with prior models to minimize the difference between the models' predictions and the results from density functional calculations. The researchers repeated the process until they achieved a high level of accuracy in their predictions.
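The successive-residual idea described above can be illustrated with a toy, from-scratch gradient booster that uses one-split "stumps" as its weak learners. This is purely illustrative: the data below are made up, and the Berkeley Lab team trained on density functional calculation results, not this toy set.

```python
def fit_stump(xs, residuals):
    """Fit a one-split regression "stump": find the threshold on x whose
    left/right means best explain the current residuals."""
    best = None
    for split in xs:
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, split, lm, rm)
    _, split, lm, rm = best
    return lambda x: lm if x <= split else rm

def gradient_boost(xs, ys, n_rounds=50, lr=0.1):
    """Each round fits a new stump to the residuals of the ensemble so far,
    then adds it with a small learning rate, mirroring the successive-model
    scheme described above."""
    preds = [0.0] * len(xs)
    stumps = []
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

# Toy data: a step function the ensemble should recover
model = gradient_boost([0, 1, 2, 3, 4, 5], [0, 0, 0, 1, 1, 1])
```

After 50 rounds the combined prediction sits very close to the targets, even though each individual stump is weak; that robustness on small samples is what made the approach attractive here.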
How AI is changing the way we assess vehicle repair (VentureBeat)
Using AI technology to automate a visual task, such as inspecting damage to a car, is a nearly instant way to provide insurance customers with crucial information about the extent of the damage.
After a machine vision algorithm assesses the auto damage, the customer can decide whether to get a repair done immediately or wait. For those who need or want to move forward with repair, AI could assist them through to the time they pick their car up from the auto body shop. Down the road, with the help of another machine learning algorithm, the driver could potentially receive a list of nearby repair shops that may be particularly experienced, for example, in servicing a specific type of vehicle and that have positive online reviews.
Explaining how Seal works, CEO Ulf Zetterberg tells BI that the company stores all of a client's contracts in one central place and then uses machine learning and AI to search for, understand, and even deduce terminology in a form the general public can understand.
Steven Cooper discusses using machine learning to understand malformed API requests to not only respond with a best fit response, but capture the user errors for future responses.
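Cooper's implementation isn't reproduced here, but the core loop (match a malformed request to a best-fit known endpoint, and capture the error for later learning) can be sketched with simple string similarity standing in for a trained model; the endpoints and function name below are hypothetical:

```python
import difflib

KNOWN_ENDPOINTS = ["/users", "/users/{id}/orders", "/orders", "/products"]

def suggest_endpoint(bad_path, error_log):
    """For a malformed request path, log the error (so future responses,
    or a model, can learn from it) and return the closest known endpoint
    as a best-fit suggestion, or None if nothing is close enough."""
    error_log.append(bad_path)
    matches = difflib.get_close_matches(bad_path, KNOWN_ENDPOINTS, n=1, cutoff=0.5)
    return matches[0] if matches else None
```

An API could fold such a suggestion into its 404 body ("did you mean /users?") while the accumulated log of misses becomes training data for a smarter matcher.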
A team of researchers in the USA has developed an artificial intelligence system that can detect the tone of conversations and analyse speech. This portable application could be useful for people with anxiety disorders or Asperger’s.
A new AI system emulates the human ability to identify emotions and the tone of speech to establish whether conversations are sad, happy or neutral.
The scientists tested the system on participants wearing a Samsung Simband wearable.
This week, ZeroStack announced the first-ever private cloud stack managed by an artificial intelligence (AI) “learning engine,” delivering a true self-driving environment. ZeroStack’s solution involves on-premises hardware and software that is managed by a cloud-based, self-service portal. This “cloud-managed” platform has enabled the company to collect, monitor, analyze and model over 1 million objects over the past 18 months. ZeroStack has taken this experience and all that data and used it to build its own AI known as Z-Brain.
The solution will continue to collect telemetry data and leverage machine learning to provide new insights that can help customers have a better-running private cloud.
The Z-Brain provides self-driving capabilities in the following areas:
- Capacity planning. The AI does three types of capacity planning: infrastructure utilization, project-based capacity planning and an infrastructure advisor to help with future needs.
- Zero touch upgrades. No intervention from the IT organization is needed because the Z-Brain handles the upgrades of all the software modules in the ZeroStack solution.
- Efficiency optimization. Virtual machines can be “auto-sized” to specific workloads, preventing organizations from using more resources than necessary.
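ZeroStack hasn't published how Z-Brain sizes workloads; as a hedged illustration of the "auto-sized to specific workloads" idea, right-sizing can be reduced to provisioning for a high percentile of observed demand. The function, percentile, and headroom below are assumptions, not ZeroStack's algorithm:

```python
import math

def rightsize_vcpus(cpu_samples, provisioned_vcpus,
                    target_percentile=0.95, headroom=1.2):
    """Recommend a vCPU count covering the 95th percentile of observed
    demand plus 20% headroom, capped at what is currently provisioned
    (auto-sizing here only shrinks an oversized VM).
    cpu_samples are vCPUs actually used per monitoring interval."""
    ranked = sorted(cpu_samples)
    idx = min(len(ranked) - 1, int(target_percentile * len(ranked)))
    demand = ranked[idx] * headroom
    return min(max(1, math.ceil(demand)), provisioned_vcpus)
```

For example, a VM provisioned with 8 vCPUs whose demand rarely exceeds 2.5 would be recommended down to 3, freeing the rest of the capacity for other workloads.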
OmniEarth Applies Machine Learning to Multiple Satellite Sources for Improved Soil Moisture Knowledge (BusinessWire)
Under the DARPA contract, OmniEarth aims to develop a precision soil moisture mapping model by fusing information from a number of sources, notably commercially-available synthetic aperture radar (SAR) and multispectral imagery, to enhance the military’s ability to assess, manage and predict soil conditions for tactical decision making. OmniEarth has applied similar techniques in its commercial water product by fusing information from multiple satellite and aerial sources with GIS, parcel data, and customer-specific information to create water budgets to the parcel level with ≥ 95% accuracy.
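OmniEarth's fusion model is proprietary, but a standard way to combine independent estimates of the same quantity (such as soil moisture derived separately from SAR and from multispectral imagery) is inverse-variance weighting; the numbers below are invented for illustration:

```python
def fuse_estimates(estimates):
    """Fuse (value, variance) pairs from independent sensors via
    inverse-variance weighting: lower-variance (more trusted) sources
    get more weight. Returns the fused value and its reduced variance."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total

# Hypothetical inputs: SAR-derived moisture 0.30 m^3/m^3 (variance 0.0004),
# multispectral-derived 0.36 (variance 0.0016)
fused, fused_var = fuse_estimates([(0.30, 0.0004), (0.36, 0.0016)])
```

The fused estimate lands nearer the more reliable SAR value, and its variance is lower than either input's, which is the statistical payoff of fusing multiple sources.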
The ski manufacturer teamed up with sports tech company PIQ to put an AI-powered computer — complete with an LED display — right on a pair of skis.
Computer Vision/Machine Vision
Sayed thinks there’s a better way. For the last 10 years, he’s been developing a system that uses video cameras to monitor intersections for near misses between moving objects, and computers to automatically track the results.
The system, called, somewhat inelegantly, “computer vision and automated safety analysis,” uses off-the-shelf cameras, or cameras that are already installed in an area, to film a given intersection. Computer algorithms can track anything that moves through the intersection — cars, bikes, people — and can figure out quite a bit about each one. The computer knows whether the moving blip is a person or a car, how fast they’re going, how close they got to hitting another road user. The computer can even tell, with about 80 percent accuracy, whether a person is distracted by their phone while walking.
“Mapillary is a collaborative street-level imagery platform powered by computer vision,” the company’s CEO and co-founder told Digital Trends. “The mission is to understand the world’s places through images and make this understanding available to all. Mapillary provides apps and tools for everyone, enabling individuals, businesses, and governments worldwide to contribute with street-level imagery. All images are connected in 3D and objects recognized in images are automatically labeled and turned into geospatial data. Mapillary has a viewer, APIs, and developer tools for easy use of this imagery and geospatial data in a wide range of applications.”
Created by London firm Ai Build and exhibited at the GPU Technology Conference in Amsterdam, the five-metre-high structure was constructed by a robot-armed printer guided by AI-powered computer vision.
The combination of robotic brawn and artificial eyes let the printer produce this intricate pattern without sacrificing speed. Its 48 pieces were printed in just over two weeks, rather than the months it would have taken a typical 3D printer.