Daily News Digest

Featured News

Drones you fly with your mind, taking census with machine learning, using computer vision to inspect fabric

Augmented Reality

This Startup Wants to Bring Augmented Reality to the iPhone Before Apple Does (Variety)

Apple may have up to 1,000 engineers working on augmented reality, if estimates from UBS analyst Steven Milunovich are correct. San Francisco-based computer vision startup Occipital is comparatively scrappy, but the company nonetheless believes it can beat Apple to market and is now gearing up to sell an augmented reality (AR) headset for Apple’s iPhone next month.

Try Some Augmented Reality Learning Toys For Your Kids With SwapBots (GeekDad)

It seems like every day we see more evidence that augmented reality (AR) learning solutions are the next step in education at all levels, allowing for a useful interaction of programmable tools, information, and the real world. Our sponsor today, SwapBots, is leading the charge into AR learning and play with its new Kickstarter project, which lets kids and parents build amazing creatures with physical objects and then bring them to virtual life via the camera and screen on a tablet or phone.

Augmented reality biofeedback could help reduce abnormalities in gait (Healio)

Augmented reality biofeedback could be an effective tool to reduce abnormalities in gait and could be used to provide telerehabilitation services to patients who may not have access to traditional clinics, according to a presenter at the American Academy of Orthotists and Prosthetists Annual Meeting and Scientific Symposium.

Akins and other researchers hypothesized that real-time biofeedback could provide instantaneous information on balance and gait adjustments, as well as improvements. Smart glasses, which project a screen onto the lenses, would be an ideal technology to display and provide instant biofeedback, he said.

Augmented education isn’t just for children (CIO)

That concept of augmented education is what Alfred Espidol is developing with Hello Alice. Although he looks impossibly young himself, Espidol is the CEO of Launchable, a Florida-based company that has already developed several successful augmented reality apps.

The concept is indeed very simple: a visual “anchor” is printed on a page (it doesn’t even need to be a book; it could be, for example, an exercise sheet a teacher or parent prepares). When you point a device such as a phone or tablet at the anchor, it triggers dynamic content that interacts with the real-world background. In this case, the “Alice” character pops up to bring the words on the page to life with her actions, so that instead of just reading the word “jump” we actually see Alice jumping.


Users can visualize mine designs or stopes and interact with and manipulate virtual data to evaluate improvements. Additionally, it will help users collaborate with other professionals offsite, who can see and interact with the same holographic images, all without the need for a computer screen or keyboard.

VAR could also help geologists and engineers confirm and field-test their models and plans through holographic spatial mapping. They would be able to visualize a mine design, a bench in a pit, or a block model to more accurately define ore tonnage and grade. Rather than simply viewing a 2D or 3D representation on a monitor or paper, VAR users can interact with and manipulate holographic images.


Peekosa’s first product, PeekPerks, is designed and developed specifically for retail in an effort to help retailers and brands find more engaging content for consumers.

Acer’s take on an augmented reality headset will ship in March (Tribble Agency)

Microsoft is keen to use the phrase “mixed reality” rather than describing the headset as AR or VR, but in the original Windows 10 headset announcements last year, the VR label was used.

The company revealed its Acer Windows Mixed Reality developer edition headset at GDC 2017 Wednesday.


Astral AR was co-founded in 2015 by Leah LaSalla and Josè La Placa Amigò, and has since seen fast-growing interest in its biometrics-piloted unmanned aerial vehicles, which use immersive augmented reality to provide real-time 3-D feedback. The drones, all with female names like Medusa, Jenny and Clementine, can handle tasks too dangerous for humans: for example, search and rescue, firefighting, prospecting and mining, and even taking a bullet.

What we also offer in search and rescue in wide-area capacity is finding missing persons, finding survivors from airplane crashes in ocean waters, very, very quickly—like, as fast as in an hour. So, you’ve got 100 square miles of ocean and 20 drones and each one carries a life-saving payload of a lightweight life raft that can accommodate up to six people, along with drinking water and a little bit of food. The drones are flying in a fleet, so when one finds a survivor, the rest of the fleet comes around.

We’re able to have desktop units in a control center situation, quite a distance from the actual scene, where people can collaborate and share the information. The AR-level of piloting is the only one of its kind; we’re the only ones who have the patent on augmented reality immersive telepresence-shared reality piloting.

If you’ve ever needed to be in two places at once, our drones are probably of interest for you. Imagine reaching your hand out and you imagine the drone’s robotic arm grabbing the lightbulb on top of a skyscraper and squeak squeak squeak [mimics screwing a lightbulb]. You don’t have to change the lightbulb on top of that skyscraper yourself; the drone is doing it for you.

Explore Ancient Roman Archeology Sites Via Augmented Reality (RealClear Life)

With the new augmented reality technology, visitors to Emperor Nero’s palace, Domus Aurea, can now see how it would’ve appeared in its original form. Today, the labyrinthine ruins are a subterranean maze of archeological wonder, but 2,000 years ago, Domus Aurea was a sprawling, light-filled villa.

Artificial Intelligence/Machine Learning

Google is acquiring data science community Kaggle (TechCrunch)

Sources tell us that Google is acquiring Kaggle, a platform that hosts data science and machine learning competitions.

With Kaggle, Google is buying one of the largest and most active communities for data scientists — and with that, it will get increased mindshare in this community, too (though it already has plenty of that thanks to Tensorflow and other projects).

Machine Learning Is Bringing the Cosmos Into Focus (The Atlantic)

The idea is to train a neural network so that it can look at a blurry image of space, then accurately reconstruct features of distant galaxies that a telescope could not resolve on its own.

The researchers say they were able to train a neural network to identify galaxies and then, based on what it had learned, sharpen a blurry image of a galaxy into a focused view. They used a machine-learning technique known as a “generative adversarial network,” which involves having two neural nets compete against one another.
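For readers unfamiliar with the technique, the two-player adversarial loop can be sketched in a few lines of numpy. This is a toy illustration only: the “networks” are single linear layers, 1-D vectors stand in for images, and none of it is the researchers’ actual model or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: "sharp" images are 1-D vectors; "blurry" inputs are noisy copies.
def make_batch(n=32, d=16):
    sharp = rng.normal(size=(n, d))
    blurry = sharp + 0.5 * rng.normal(size=(n, d))
    return blurry, sharp

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: linear map from blurry input -> candidate sharp image.
G = rng.normal(scale=0.1, size=(16, 16))
# Discriminator: linear scorer; sigmoid gives probability "this image is real".
D = rng.normal(scale=0.1, size=(16, 1))

lr = 0.01
for step in range(200):
    blurry, sharp = make_batch()
    fake = blurry @ G

    # Discriminator step: push real scores up, fake scores down
    # (gradients of binary cross-entropy, derived by hand for this linear model).
    p_real = sigmoid(sharp @ D)
    p_fake = sigmoid(fake @ D)
    grad_D = (-(sharp.T @ (1 - p_real)) + (fake.T @ p_fake)) / len(sharp)
    D -= lr * grad_D

    # Generator step: update G to fool the discriminator (non-saturating loss).
    p_fake = sigmoid(fake @ D)
    grad_G = -(blurry.T @ ((1 - p_fake) * D.T)) / len(blurry)
    G -= lr * grad_G
```

In the actual research, the generator is a deep convolutional network producing full images and the discriminator judges whole images, but the alternating-update structure is the same.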

A machine-learning census of America’s cities (The Economist)

First, the researchers trained their machine-learning model to recognise the make, model and year of many different types of cars. To do that they used a labelled data set, downloaded from automotive websites like Edmunds and Cars.com. Once the algorithm had learned to identify cars, it was turned loose on 50m images from 200 cities around America, all collected by Google’s Streetview vehicles, which provide imagery for the firm’s mapping applications. Streetview has photographed most of the public streets in America, and in among them the researchers spotted 22m different cars—around 8% of the number on America’s roads.

The computer classified those cars into one of 2,657 categories it had learned from studying the Edmunds and Cars.com data. The researchers then took data from the traditional census, and split them in half. One half was fed to the machine-learning algorithm, so it could hunt for correlations between the cars it saw on the roads in those neighbourhoods and such things as income levels, race and voting intentions. Once that was done, the algorithm was tested on the other half of the census data, to see if these correlations held true for neighbourhoods it had never seen before. They did. The sorts of cars you see in an area, in other words, turn out to be a reliable proxy for all sorts of other things, from education levels to political leanings. Seeing more sedans than pickup trucks, for instance, strongly suggests that a neighbourhood tends to vote for the Democrats.
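The fit-on-one-half, validate-on-the-other protocol described above is easy to mimic with synthetic data. The sketch below is purely illustrative (random “neighbourhoods”, a made-up income variable, 10 car categories instead of 2,657); it is not the study’s data or code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 200 neighbourhoods, counts over 10 car categories.
n_hoods, n_cats = 200, 10
car_counts = rng.poisson(lam=5, size=(n_hoods, n_cats)).astype(float)

# Pretend one census variable (say, income) is partly explained by the car mix.
true_w = rng.normal(size=n_cats)
income = car_counts @ true_w + rng.normal(scale=2.0, size=n_hoods)

# Split the "census" in half: fit on one half, validate on the other.
half = n_hoods // 2
X_train, y_train = car_counts[:half], income[:half]
X_test, y_test = car_counts[half:], income[half:]

w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
pred = X_test @ w

# R^2 on held-out neighbourhoods: does the car-mix proxy generalise?
ss_res = np.sum((y_test - pred) ** 2)
ss_tot = np.sum((y_test - y_test.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"held-out R^2: {r2:.2f}")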

The system has limitations: unlike a census, it generates predictions, not facts, and the more fine-grained those predictions are the less certain they become.

Facebook Turns to Artificial Intelligence to Tackle Suicides (Medscape)

Facebook plans to use artificial intelligence and update its tools and services to help prevent suicides among its users.

The world’s largest social media network said it plans to integrate its existing suicide prevention tools for Facebook posts into its live-streaming feature, Facebook Live, and its Messenger service.

Understanding the Brain With the Help of Artificial Intelligence (Neuroscience News)

Hence, over the past few years, they have developed and improved staining and microscopy methods that can be used to transform brain tissue samples into high-resolution, three-dimensional electron microscope images. Their latest microscope, which the Department is using as a prototype, scans the surface of a sample with 91 electron beams in parallel before exposing the next sample level. Compared to the previous model, this increases the data acquisition rate by a factor of more than 50. As a result, an entire mouse brain could be mapped in just a few years rather than decades.

However, the Max Planck scientists, led by Jörgen Kornfeld, have now overcome this obstacle with the help of artificial neural networks. These algorithms can learn from examples and experience and make generalizations based on this knowledge.


Euler Hermes, the world’s leading trade credit insurer, today announced a pioneering partnership between its Digital Agency and Flowcast, a fintech company focused on revolutionizing trade and supply chain finance with artificial intelligence (AI). The announcement was made as the partners attended LendIt USA, a major lending and fintech conference in New York this week.

Flowcast will use its strength in analyzing transaction data to significantly evolve the concept by developing smart algorithms to create the foundation of an innovative underwriting solution within Single Invoice Cover. Benefits include improved working capital and financing along the supply chain.

Based on invoice-level data, Flowcast has developed smart algorithms that predict a range of risk parameters, such as the probabilities of default or expected timing of invoice payment – critical factors for business.
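As a rough sketch of what “predicting default from invoice-level data” can look like, here is a minimal logistic-regression toy in numpy. The features, coefficients, and data are all invented for illustration; Flowcast’s actual algorithms are not public.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented invoice-level features: log amount, days past due, years with buyer.
n = 500
X = np.column_stack([
    rng.normal(8, 1, n),        # log invoice amount
    rng.exponential(10, n),     # days past due
    rng.uniform(0, 10, n),      # years trading with this buyer
])
# Synthetic ground truth: risk rises with lateness, falls with buyer tenure.
logit = -2.0 + 0.15 * X[:, 1] - 0.3 * X[:, 2]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

# Standardise features, then fit logistic regression by gradient descent.
mu, sd = X.mean(0), X.std(0)
Xb = np.column_stack([np.ones(n), (X - mu) / sd])
w = np.zeros(4)
for _ in range(3000):
    p = 1 / (1 + np.exp(-(Xb @ w)))
    w -= 0.1 * (Xb.T @ (p - y)) / n

# Estimated probability of default for a new invoice: 30 days late, new buyer.
new = (np.array([np.log(3000), 30.0, 0.5]) - mu) / sd
pd_estimate = 1 / (1 + np.exp(-(w[0] + new @ w[1:])))
```

The point of the sketch is the shape of the problem, turning per-invoice signals into a probability of default or an expected payment date, not the specific model, which in a production underwriting system would be far richer.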

Computer Vision/Machine Vision

Cut from a different cloth (IMV Europe)

Each variety of laminated fabric was trained on Shelton Vision’s inspection system. The company developed a way of automating the training of its vision software, so that the first time the system sees a new product it realises that the fabric is not in its database and will initiate automated training.
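The “unknown product triggers training” logic can be sketched generically: fingerprint the incoming material, look for a close match in the database, and kick off training if none is found. Everything below (the fingerprint features, the distance threshold, the class names) is hypothetical; Shelton’s actual system is proprietary.

```python
import numpy as np

rng = np.random.default_rng(3)

THRESHOLD = 1.0  # illustrative distance cutoff for "known fabric"

class FabricDatabase:
    """Toy stand-in for a fabric inspection database (names are hypothetical)."""
    def __init__(self):
        self.references = {}  # fabric name -> mean feature vector

    def fingerprint(self, image):
        # Stand-in feature extractor: column means of the image patch.
        return image.mean(axis=0)

    def match(self, image):
        # Nearest-reference lookup; None means "not in the database".
        f = self.fingerprint(image)
        best, dist = None, np.inf
        for name, ref in self.references.items():
            d = np.linalg.norm(f - ref)
            if d < dist:
                best, dist = name, d
        return best if dist < THRESHOLD else None

    def auto_train(self, name, sample_images):
        # "Automated training": average the fingerprints of clean samples.
        self.references[name] = np.mean(
            [self.fingerprint(im) for im in sample_images], axis=0)

db = FabricDatabase()
db.auto_train("weave_a", [rng.normal(0, 0.1, (8, 8)) for _ in range(5)])

incoming = rng.normal(5, 0.1, (8, 8))   # a product unlike anything trained
match_before = db.match(incoming)
if match_before is None:
    # First sight of a new product: initiate automated training.
    db.auto_train("weave_b", [incoming])
```

A real inspection system would use far richer texture features and per-fabric defect models, but the control flow, recognise or enrol, is the part the article describes.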

The effective throughput speed for manual inspection varies between 5 and 12 metres per minute, according to Shelton, whereas a vision system operating at the end of a final process such as heat setting might run at 50 to 60 metres per minute. Shelton Vision can provide a purpose-built, high-speed roll-to-roll inspection machine capable of operating well above 100 metres per minute for textiles, and double that for simpler products.

Socionext Develops World’s First Graphics Display Controller with OpenVX Compliant Hardware Accelerator (Design & Reuse)

Socionext Inc., a leader in state-of-the-art system-on-chip technology, has developed the “SC1810” series, the fourth-generation version of its high-performance graphics display controllers. In addition to further strengthening the graphics functions for in-vehicle display systems, an area in which the company has proven achievements, Socionext has incorporated into the SC1810 the world’s first hardware accelerator that conforms to the Khronos Group’s computer vision API, OpenVX™.

ITS Podcast Episode 39: Computer Vision, Inside-Out Perception (ITSP-CICEI)

Her research interest is in areas of Computer Vision, Artificial Intelligence, and Robotics. She mainly focuses on Bayesian methods for Artificial Perception in robotics and consumer applications. She has developed new efficient algorithms for autonomous vehicles, mobile manipulation, and tactile object localization.



About the author

Allen Taylor

An award-winning journalist and former newspaper editor, I currently work as a freelance writer/editor through Taylored Content. In addition to editing VisionAR, I edit the daily news digest at Lending-Times. I also serve the FinTech and AR/AI industries with authoritative content in the form of white papers, case studies, blog posts, and other content designed to position innovators as experts in their niches. I am also a published poet and fiction writer.
