- Today’s top AR/VR news: Making Augmented Reality app with Blender, Unity and Vuforia
- Today’s top AI/machine learning news: 6 machine learning misunderstandings
- Today’s top computer vision/machine vision news: Machines can recognize something after one sighting
- How to use Blender, Unity, Vuforia to make AR.
- VR allows you to feel rain.
- The Future Group raises money to blend real-world with virtual.
- UVA Viz Lab acquires VR system.
- Real estate adopts new VR application.
- Startup AR app raises seed money.
- VR lets anyone walk on Mars at Museum of Flight.
- Selfie as a VR avatar?
- AR used to capture crime suspects.
- NBC creates social VR experience.
- Utah police train with VR simulator.
- Library features 3D printing and VR.
- Samsung AR app could save lives.
- 6 machine learning misunderstandings.
- NLP systems in AI.
- Why IoT needs AI.
- Machine learning quantifies gender bias in astronomy.
- AI used to curate a magazine using image recognition.
- Can machine learning detect autism?
- DARPA poised to set new standard for automated cybersecurity.
- Huawei’s new AI smartphone.
- Using machine learning for regtech.
- Meet the AI-powered business graph.
- Machines can recognize something after one sighting.
- What songs do birds like?
- Binge-watching videos teaches computers to recognize sounds.
- Lowe’s uses AR with Google Tango for customer benefit.
- A wearable that helps the blind navigate real world like a video game.
- Group launches neural network standard.
- Augmented Reality/Virtual Reality
- Making Augmented Reality app with Blender, Unity and Vuforia (Blender Nation)
- New Virtual Reality System Lets You ‘Feel’ Rain (NDTV)
- The Future Group Raises $20 Million to Blend Real-world Video With Virtual Worlds (UploadVR)
- New virtual reality system at UVA Viz Lab (Newsplex)
- Virtual reality finds new application in real estate (WWLP)
- Startup Releases Augmented Reality App, Raises $2 Million in Seed Round (NBC San Diego)
- Museum of Flight lets you walk on Mars in virtual reality for SpaceFest (GeekWire)
- This company wants to turn your selfie into a virtual reality avatar (Press-Telegram)
- Patrol Aircraft Utilizes Augmented Reality System To Capture Suspects (KSIS Radio)
- NBC News created a social virtual reality experience, ‘Virtual Democracy Plaza’ (The Drum)
- Utah officers train for stressful situations in 300-degree virtual reality simulator (VirTra)
- Hubbard library “Makerspace” features 3D-printing, virtual reality (Vindy)
- Samsung taps Leo Burnett for AR app that could save your life (AdNews)
- Artificial Intelligence/Machine Learning
- 6 machine learning misunderstandings (NetworkWorld)
- Examples of natural language processing systems in artificial intelligence (Expert Systems)
- Why the IoT Needs Artificial Intelligence to Succeed (Global Big Data Conference)
- Machine-learning algorithm quantifies gender bias in astronomy (Nature)
- An A.I. Curated a Magazine Using Image Recognition Technology (The Creators Project)
- Can Machine Learning Be Used For The Detection Of Autism? (ReliaWire)
- DARPA’s Automated Bug Hunter Could Revolutionize Cybersecurity (Inverse)
- IntBridge LLC Launches Helium, The First Machine Learning Smart Light Hits Kickstarter (Home Toys)
- Huawei launches new smartphone featuring artificial intelligence (SAMAA)
- How machine learning RegTech can spot disguised sanction-listed names (International Business Times)
- Demandbase Introduces DemandGraph – Artificial Intelligence Powered Business Graph for B2B (Inside Big Data)
- Computer Vision/Machine Vision
- Machines Can Now Recognize Something After Seeing It Once (MIT Technology Review)
- What songs do birds like? An algorithmic installation at Swale is looking to find out (Technical.ly)
- Binge-watching videos teaches computers to recognize sounds (New Scientist)
- Lowe’s Delivers Augmented Reality: Now Available on Aisle 3 (PR Newswire)
- How a Wearable Device May Help the Blind Navigate the World Like a Video Game (Motherboard)
- Neural network standard initiatives launched by industry group (VisionSystems)
Augmented Reality/Virtual Reality
Making Augmented Reality app with Blender, Unity and Vuforia (Blender Nation)
Widhi Muttaqien is back with another in-depth look at AR development using Blender. I really like his videos – they are well produced and easy to follow.
Disney Research has developed a 360-degree virtual reality application using a unique chair that provides full-body sensations, enabling users to add customisable “feel effects” such as falling rain or a beating heart.
The Future Group has raised $20 million for its technology that blends real-world video with virtual worlds to immerse people inside what feels like an interactive game show.
The Oslo, Norway-based company came out of stealth last year and is now working on a major project with a team of more than 100 people. Its partners are still unnamed, but the company and its first partner plan to make an announcement sometime in 2017, Future Group cofounder Jens Petter Hoili said in an interview with GamesBeat.
New virtual reality system at UVA Viz Lab (Newsplex)
At the Visualization Laboratory, or Viz Lab, a number of high tech resources are available to help researchers, including a new virtual reality system.
“We have a Ph.D. student studying cultural competencies,” she said. “So what he wants to do with virtual reality is create an environment where a person can meet people from other cultures and learn how to interact with those people.”
Another researcher, from the School of Medicine, is using the technology to determine if it can be helpful in the advancement of physical therapy treatments.
Walking through the front doors of a building that doesn’t exist yet is being made possible through the eyes of virtual reality.
For design firm Interior Architects, it’s all part of the process now. Immersive models have replaced confusing blueprints and traditional 2-D mock ups.
GoMeta’s app is called Metaverse, and it’s built to allow anyone to interact with its augmented reality universe. Essentially, users can create Pokémon Go-style games and other experiences within the app. And you don’t have to know code to interact and build in the Metaverse. Using a builder within the app, users can create geo-specific “experiences” and drop them onto a virtual map of the real world. Just like with Pokémon Go, the user can walk up to experiences in real life, see a virtual tag on their smartphone, and tap on the tags to interact.
Shapiro said he envisions the Metaverse as a home for multitudes of user-generated scavenger hunts, interactive stories, and even augmented reality worlds directing you to things like public bathrooms. He also sees a variety of commercial and educational applications.
Because the virtual scene is rendered from imagery collected by Curiosity, the rover itself serves as the focal point of your wanderings. You can sidle right up to the SUV-sized machine and get a close look at the hardware.
Your own body isn’t shown in the VR field of view, so it’s as if you’re a ghost roaming around the Red Planet. I actually did try kicking the rover’s tires, but my unseen foot just flew through empty space.
This company wants to turn your selfie into a virtual reality avatar (Press-Telegram)
The Pasadena-based tech firm, housed in the Idealab business incubator, has developed a technology that allows users to create virtual reality avatars of themselves. And once someone’s virtual self has been created it can do “virtually” anything, from climbing a mountain or attending a rock concert to riding a gondola through the canals of Venice, Italy.
Using their Patrol helicopter’s high definition camera and Augmented Reality System (ARS), two pilots located and tracked three bank robbery suspects as they fled in a vehicle. The pilots relayed location and direction of travel information to officers on the ground, leading to the arrest of two suspects. When the third suspect escaped on foot, the pilots tracked him as he fled into an unoccupied house.
A sample video clip to show how ARS works is available with this news release on the Patrol’s website.
NBC News has recreated Rockefeller Center Plaza in virtual reality thanks to AltspaceVR. Viewers will be able to visit the plaza from all over the world. NBC News is the first broadcast network to invest in a series of live events to cover the election for an audience inside of virtual reality and it’s a platform that allows users to engage directly with talent.
NBC News has also created programming for Virtual Democracy Plaza, including debate watch parties, live Q&A discussions with political experts, political comedy events and more.
On Tuesday, Utah police officers stepped into a virtual reality simulator, giving them an enhanced training experience.
The training was put on by the Utah Attorney General’s Office at a facility in Murray. The simulator, the VirTra system, provides a 300-degree virtual reality environment.
It allows an officer to see, hear and feel the experience that is put before them.
The VirTra system costs about a quarter of a million dollars and trains officers to de-escalate dangerous situations as effectively as possible.
The library has acquired 3-D doodling pens, a 3-D scanner and two 3-D printers. The printers generate objects using layers of biodegradable plastic in various colors. Makerspace also features technology that will allow users to explore robotics, computer programming and networking.
The library plans to host field trips for local students. The technology is a natural tie-in for a variety of educational disciplines, including computer programming, science and art. Makerspace, for example, includes an augmented reality sandbox that could be useful for students studying geography. As someone readjusts piles of sand, a sensor is able to register the changing terrain and communicate with a projector displaying a topography map.
Following Optus and M&C Saatchi’s foray into high tech beach safety devices, Samsung and Leo Burnett Sydney have entered the space with an augmented reality app called Pocket Patrol.
The free Android app is being trialled in partnership with Surf Live Saving Australia for four weeks at Queensland’s Coolum Beach and Alexandra Headland.
Launched as a response to the 12,000 rescues performed each year, the app aims to educate swimmers about beach dangers, from rips and submerged rocks to shallow sandbars.
Artificial Intelligence/Machine Learning
6 machine learning misunderstandings (NetworkWorld)
As with any technology, machine learning could wreak havoc on a network if improperly implemented. Before embracing this technology, enterprises should be aware of the ways machine learning can fall flat to avoid setting back their operations and turning the c-suite away from implementing this technology.
It’s amazing what a computer will consider important that a human will immediately dismiss as trivial. This is why it’s imperative to consider as many relevant variables and potential outcomes as possible prior to deploying a machine learning algorithm.
Producing a useful model comes down to training data structure and quality. Before releasing machine learning into the enterprise, data scientists will test an algorithm model with data sets to ensure its performance.
Not every machine learning project will be so public or give users open access to manipulate data, but awareness of the environment the algorithm lives in will prevent potential blunders.
One type of algorithm that has recently been successful in practical applications is ensemble learning – a process by which multiple models combine to solve a computational intelligence problem. One example of ensemble learning is stacking simple classifiers like logistic regressions. These ensemble learning methods can improve predictive performance more than any of these classifiers individually.
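The stacking approach described above can be sketched in a few lines. This is a minimal illustration, not code from the article: several simple base classifiers (including a logistic regression) feed their predictions into a logistic-regression meta-learner, and the dataset is synthetic.

```python
# Minimal stacking sketch: simple base classifiers combined by a
# logistic-regression meta-learner (synthetic data for illustration).
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_learners = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
    ("nb", GaussianNB()),
]

# The meta-learner is trained on the base models' predictions,
# often outperforming any single base classifier.
ensemble = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
)
ensemble.fit(X_train, y_train)
print(f"ensemble accuracy: {ensemble.score(X_test, y_test):.3f}")
```

In practice the gain over the best individual classifier depends on the base models making reasonably uncorrelated errors.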
As we have already seen (in our overview of natural language processing systems), natural language processing (NLP) is a fundamental element of artificial intelligence for communicating with intelligent systems using natural language. NLP helps computers read and respond by simulating the human ability to understand the everyday language that people use to communicate. Today, there are many examples of natural language processing systems in artificial intelligence already at work.
Another interesting application of natural language processing is Skype Translator, which offers on-the-fly translation to interpret live speech in real time across a number of languages. Skype Translator uses AI to help facilitate conversation among people who speak different languages.
Examples of natural language processing systems in artificial intelligence can also be found in hospitals, which use natural language processing to pinpoint a specific diagnosis from a physician’s unstructured notes. For example, NLP software for mammographic imaging and mammogram reports supports the extraction and analysis of data for clinical decisions, as a study published in Cancer affirms. The software is able to determine breast cancer risk more efficiently, decrease the need for unnecessary biopsies and facilitate faster treatment through earlier diagnosis. According to the study, artificial intelligence reviewed 500 charts in just a few hours, saving over 500 physician hours.
Why the IoT Needs Artificial Intelligence to Succeed (Global Big Data Conference)
It is here, at the “Analyze” step, that the true value of any IoT service is determined, and this is where artificial intelligence (or, more properly, the subset of AI called “machine learning”) will provide a crucial role.
Machine learning is a form of programming that empowers a software “agent” with the ability to detect patterns in the data presented to it so it can learn from these patterns in order to adjust the ways in which it then analyzes that data. We already experience benefit from machine learning in our everyday lives when Netflix gives us a tailored movie recommendation or Spotify modifies our playlist. When machine learning is applied to the “Analyze” step, it can dramatically change what is (or is not) done at the subsequent “Act” step, which in turn dictates whether the action has high, low, or no value to the consumer.
For their study, the researchers analysed 200,000 papers in 5 journals from 1950 to 2015. First, they trained a machine-learning algorithm to accurately calculate the citations for each paper first-authored by a man using as many non-gender-related factors as possible — such as the journal, field and year in which the paper was published, where the first author was located and for how many years that author had been publishing.
Then they unleashed their algorithm on the papers with female first authors. This set of papers (from 1985 onwards) had actually received around 6% fewer citations than their male-authored counterparts. But the algorithm predicted that the papers should have got 4% more citations than did those authored by men.
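The study's approach can be sketched as follows. This is a hedged illustration on synthetic data, not the researchers' actual model or dataset: a regressor is trained on male-first-authored papers using only non-gender features (year, author seniority, journal), then applied to female-first-authored papers so actual citations can be compared against the gender-blind prediction.

```python
# Sketch of the citation-bias method on synthetic data: train on
# male-authored papers with non-gender features, then measure how far
# female-authored papers fall short of the model's predictions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
year = rng.integers(1985, 2016, n)      # publication year
seniority = rng.integers(1, 30, n)      # years the first author has published
journal = rng.integers(0, 5, n)         # which of five journals
female = rng.random(n) < 0.3

# Synthetic citation counts: driven by non-gender factors, with a
# built-in ~6% penalty for female-authored papers the model should expose.
base = 20 + 0.5 * seniority + 2 * journal + 0.1 * (year - 1985)
citations = base * np.where(female, 0.94, 1.0) + rng.normal(0, 2, n)

X = np.column_stack([year, seniority, journal])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[~female], citations[~female])   # male-authored papers only

predicted = model.predict(X[female])        # gender-blind expectation
deficit = (1 - citations[female].mean() / predicted.mean()) * 100
print(f"female-authored papers receive {deficit:.1f}% fewer citations than predicted")
```

The real study's numbers (6% fewer actual citations against a 4% predicted surplus) come from the astronomy corpus, not from a toy setup like this one.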
An A.I. Curated a Magazine Using Image Recognition Technology (The Creators Project)
EyeEm is a photography community and marketplace of over 18 million photographers. It also publishes a magazine, also called EyeEm. For its fourth issue, Machina: A Curation of Real Photography by a Machine, the company turned to an artificial intelligence powered by computer vision, EyeEm Vision, to curate the magazine, selecting the photographs it feels are the best aesthetically and most impactful.
EyeEm Vision recognizes thousands of objects and concepts like feelings, moods, and so on. It can also recognize the potential aesthetics of a photograph. To do this, it assigns each photo an aesthetic score from 0 to 100, which Aguirre-Livingston explains more or less “predicts” the “curatorial preference of that photo”—so, whether it’s “good” or “bad.”
A new study suggests that machine learning algorithms might be used to analyze genetic data that points to an autism spectrum disorder diagnosis before symptoms become obvious.
The team has now developed a four-stage computerized neural network system for testing simplified genetic data. The system traces between 150 and 500 features present on different chromosomes and known to be associated with ASD when certain genetic patterns are present.
Buckle up, because at least one of the hypotheticals in that scenario will probably come to pass at some point in the future. Mike Walker, a computer scientist at Defense Advanced Research Projects Agency who specializes in machine learning, predicts that automated cybersecurity is the future.
Walker told a room full of computer security experts just that this week at the annual O’Reilly Security conference in New York before he presented the results of DARPA’s recent Cyber Grand Challenge, which he launched in 2013. The event offered proof that machines can hunt and patch bugs on their own, a revelation that could prove revolutionary.
Over the years, we have seen many smart lighting products, but they all share similar drawbacks: they are hard to set up, they require manual control from a phone, and they offer poor cost performance. Helium is our solution to these problems. Unlike the smart light bulbs already on the market, Helium aims to create a better experience while offering features no other product has had before.
Helium is three things in one: 1) a machine learning smart light bulb that learns about us and turns on the right light when we need it, 2) a night-shift light that changes its color temperature throughout the day to keep us healthy, and 3) a Li+Fi extender that combines a light bulb with a Wi-Fi extender to expand our home Wi-Fi coverage.
After a simple setup, Helium will become smarter over time by analyzing all kinds of data such as time, location, weather, proximity to the light and behavioral pattern. Through our machine learning algorithm, it predicts what you want and turns on or off the right light at the right time.
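The kind of prediction described — learning from context when a light should be on — can be illustrated with a toy classifier. This is a hypothetical sketch, not Helium's actual algorithm; the logged features (hour of day, whether someone is nearby) and the training data are invented for illustration.

```python
# Hypothetical sketch of context-based on/off prediction: learn from a
# small log of (hour, someone nearby?) -> light on?, then predict for
# a new moment. Not Helium's real algorithm or data.
from sklearn.tree import DecisionTreeClassifier

# Synthetic usage log: (hour of day, nearby?, light was on?)
history = [
    (7, 1, 1), (8, 1, 1), (12, 1, 0), (14, 0, 0),
    (19, 1, 1), (21, 1, 1), (23, 0, 0), (2, 0, 0),
]
X = [[hour, nearby] for hour, nearby, _ in history]
y = [on for _, _, on in history]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Evening, user nearby: the model predicts whether to switch the light on.
print(model.predict([[20, 1]]))
```

A real product would fold in many more signals (weather, location, long-term behavioral patterns) and retrain continuously as new usage data arrives.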
It includes a feature that enables the device to learn about the habits of its user and automatically put the most frequently used apps within easy reach – unlike, for example, Apple’s iPhone, where icons that may be spread across many screens beyond the start screen have to be ordered manually.
How machine learning RegTech can spot disguised sanction-listed names (International Business Times)
A criminal whose name has been added to a sanction list compiled by the Office of Foreign Assets Control (OFAC) is highly unlikely to use that exact name when opening a bank account.
However, advancements in data science can help recognise patterns in such cases and generally improve the speed and accuracy of identifying financial crime risks. UK based ComplyAdvantage, which uses data analysis and machine learning, is at the forefront of “RegTech”.
It uses machine learning to build and maintain this live dataset and the expanding network that surrounds it. The ComplyAdvantage database is populated with individuals who appear on sanction lists, are known to Interpol, or appear on various government watch lists or the lists of regulatory bodies.
Demandbase Introduces DemandGraph – Artificial Intelligence Powered Business Graph for B2B (Inside Big Data)
Demandbase, a leader in Account-Based Marketing (ABM), announced DemandGraph, the B2B marketing industry’s artificial intelligence-driven business graph, which captures and monitors in-depth business behavior and relationships across businesses worldwide. DemandGraph now powers Demandbase’s Account-Based Marketing platform, spanning advertising, marketing, sales and analytics solutions.
Computer Vision/Machine Vision
Machines Can Now Recognize Something After Seeing It Once (MIT Technology Review)
Most of us can recognize an object after seeing it once or twice. But the algorithms that power computer vision and voice recognition need thousands of examples to become familiar with each new image or word.
Researchers at Google DeepMind now have a way around this. They made a few clever tweaks to a deep-learning algorithm that allows it to recognize objects in images and other things from a single example—something known as “one-shot learning.” The team demonstrated the trick on a large database of tagged images, as well as on handwriting and language.
The team demonstrated the capabilities of the system on a database of labeled photographs called ImageNet. The software still needs to analyze several hundred categories of images, but after that it can learn to recognize new objects—say, a dog—from just one picture. It effectively learns to recognize the characteristics in images that make them unique. The algorithm was able to recognize images of dogs with an accuracy close to that of a conventional data-hungry system after seeing just one example.
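The core idea behind one-shot recognition can be shown with a toy example. This is not DeepMind's actual matching-networks model; it is a hedged sketch of the underlying principle — classify a new input by comparing it, in a feature (embedding) space, against a single labeled example per class. The embedding vectors here are made up for illustration.

```python
# Toy one-shot classification: compare a query embedding against one
# labeled support example per class using cosine similarity.
# (Illustrative sketch only -- not DeepMind's matching-networks model.)
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def one_shot_classify(query, support):
    """support: dict mapping class label -> one embedding vector."""
    return max(support, key=lambda label: cosine(query, support[label]))

# One embedding per class -- the single example the system has "seen".
support = {
    "dog": np.array([0.9, 0.1, 0.2]),
    "cat": np.array([0.1, 0.8, 0.3]),
}
query = np.array([0.85, 0.15, 0.25])       # a new, dog-like embedding
print(one_shot_classify(query, support))   # -> "dog"
```

What the DeepMind work actually contributes is the learned embedding itself: a deep network trained on many categories so that, in its feature space, a simple comparison like this one generalizes from a single new example.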
What songs do birds like? An algorithmic installation at Swale is looking to find out (Technical.ly)
The installation is called PandoraBird: Identifying the Types of Music That May Be Favored by Our Avian Co-Inhabitants, and it runs on Mondays from 10 a.m. to 1 p.m. PandoraBird consists of a bird feeder with a computer and audio system built in. It plays a selection of instrumental songs, and when a bird comes to feed, the computer system makes note of which type of bird it is and how long it stays. That information will go into an algorithm that selects which music the system plays based upon the birds’ activity.
Binge-watching videos teaches computers to recognize sounds (New Scientist)
The researchers tested several versions of their SoundNet model on three data sets, asking it to sort between sounds such as rain, sneezes, ticking clocks and roosters. At its best, the computer was 92.2 per cent accurate. Humans scored 95.7 per cent on the same challenge.
Lowe’s is giving home improvement customers an entirely new way to see their homes through Lowe’s Vision, one of the first applications using Tango, a Google technology that enables computer vision software on your phone for augmented reality experiences. Using the app and the Lenovo Phab 2 Pro, the first Tango-enabled smartphone, customers can visualize virtual home furnishings, fixtures and accents in their real living rooms, kitchens and bathrooms.
By leveraging Tango technology, a set of sensors and computer vision software from Google that senses and maps surroundings, Lowe’s Vision creates a 3D depth sense allowing customers to measure spaces and visualize how products like appliances and home décor will look and fit together in a room.
An Italian technology startup called Eyra Ltd will soon release a wearable device known as the Horus that allows the visually impaired to explore public space beyond the limitations of the walking cane. “The white cane is not enough because it only solves the issue of detecting obstacles touching the ground,” said Saverio Murgia, CEO and co-founder of Eyra. “It can never detect obstacles like tree branches or cars parked near crosswalks.”
It is powered by the NVIDIA Tegra K1, a popular mobile processor (a variant will be used in Nintendo’s upcoming Switch console). It is also used for display units inside Audi cars and electric sedans.
Neural network standard initiatives launched by industry group (VisionSystems)
First, Khronos created a working group to define an application program interface (API)-independent standard file format for exchanging deep learning data between training systems and inference engines. The aim of the Khronos Neural Network Exchange Format (NNEF) is to simplify the process of using one tool to create a network and then running that network on other toolkits or inference engines. This, according to Khronos, can reduce deployment friction and encourage a richer mix of cross-platform deep learning tools, engines, and applications. Work on generating requirements and detailed design proposals for the NNEF is underway, and companies are welcome to contact Khronos to provide input.
Secondly, the OpenVX working group—responsible for an open, royalty-free standard for cross-platform acceleration of computer vision applications—has released an extension to enable convolutional neural network (CNN) topologies to be represented as OpenVX graphs and mixed with traditional vision functions. The OpenVX Neural Network extension defines a multi-dimensional tensor object data structure which can be used to connect neural network layers, represented as OpenVX nodes, to create flexible CNN topologies. OpenVX neural network layer types, according to Khronos, include convolution, pooling, fully connected, normalization, soft-max and activation – with nine different activation functions. The extension enables neural network inferencing to be mixed with traditional vision processing operations in the same OpenVX graph.