Daily News Digest | Featured News

A hologram-powered super-fast 3D printing device, how Facebook’s AI tracks you all the time, CV startup makes mobile virtual mirror for eyewear retailers

Augmented Reality

Suggestic Is an Augmented Reality App Designed to Keep You on Your Diet (Tech.co)

Suggestic is a free, fully automated nutrition coach, powered by artificial intelligence, that helps users stick to their diets. It offers local restaurant menu recommendations, personalizes grocery lists, and helps with meal suggestions, recipes, and snack options. With this real-world assistance, the app aims to transform the way people stay healthy on a daily basis.

Perhaps the most interesting and helpful aspect of the app’s technology is its augmented reality feature. With the click of a button, you get a clearly laid-out plan for deciding what to eat at a local restaurant. No more guessing, no more worrying, no more bothering restaurant employees about what kind of oil they use in their french fries.

The app highlights the items you can eat in green, while dissuading you from trying anything that’s not in line with your diet. And with 500,000 establishments in its database, you’ll have a hard time finding a restaurant where the app doesn’t work.


OxSight uses augmented reality to aid the visually impaired (TechCrunch)

The company built and is testing augmented reality glasses that help the visually impaired recognize objects and navigate their environment. Think of it as a hearing aid for the blind.

Most of the people who have tested the OxSight previously had some level of sight that has degraded over time. The product uses the sight they still have, whether it’s detection of light, movement or a small amount of shape, and amplifies it inside a pair of augmented reality glasses.

OxSight smart glasses rely on technologies like see-through displays, camera systems and computer vision techniques that have been developed for augmented reality to understand the environment.

For people with minimal vision, the software can project a cardboard cutout of what a person looks like. Users who are blind but still have limited vision can customize their experience by boosting colors or zooming in or out. Because each person who is considered blind is affected differently, OxSight built a product that can be adjusted and customized to allow each unique user to understand where they are and what is around them.

Phantom AR Seeks to Democratize Augmented Reality (Next Reality)

A development team in Silicon Valley is nearing early access release of a new hardware-independent augmented reality platform called Phantom AR.

Familiarity ends up being an advantage that Phantom brings to developers. Phantom will allow developers to build applications as they normally would and deploy them within Phantom rather than rewriting them from scratch for other platforms. Phantom will also enable non-programmers to create their own augmented reality objects and share them with other users, even if they are on different hardware platforms.

Instead of gestures, Phantom relies on hardware-based input, which allows for precision in commands.

Augmented reality start-up Daqri to develop hologram powered super-fast 3D printing device (Technowize)

Daqri, a California-based augmented reality start-up, has created a 3D printer powered by holograms. Building on the same holographic display technology behind the company’s head-up displays, the printer can print solid objects faster.

The super-fast 3D printer uses a laser to cure a light-activated monomer into a solid plastic. The technique scans the whole object at once using coherent light, or a hologram, to create the desired shape; curing the entire object in one pass makes the process rapid and precise. The printer flashes a green laser up into a Petri dish full of goo, and a paper clip emerges, ghostly at first but turning solid. In seconds, the clip is clean and ready for use.

The heart of the device is a holographic chip developed by Daqri.

Artificial Intelligence/Machine Learning

ExtraHop intros Addy, machine learning as a service (SearchNetworking)

ExtraHop Networks is adding artificial intelligence and machine learning to its IT operations analytics platform to give IT managers additional insight into how their networks are performing.

The machine-learning-as-a-service offering not only reports on network anomalies, but its artificial intelligence component learns to tell the difference between normal network operations and behavior that could signify a potential problem or malware attack. The core algorithm is engineered to reduce the number of false positives to allow users to focus on actual performance issues.
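
ExtraHop hasn’t published Addy’s core algorithm, but the general pattern it describes, learning a baseline of normal behavior and flagging deviations from it, can be sketched with an off-the-shelf anomaly detector. The network metrics, thresholds, and scikit-learn model below are purely illustrative:

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-minute network metrics: bytes out, new connections, DNS queries.
rng = np.random.default_rng(0)
normal_windows = rng.normal(loc=[5000, 40, 120], scale=[500, 5, 15], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_windows)              # learn what "normal" traffic looks like

spike = np.array([[50000, 400, 2000]])    # a window where every metric jumps
print(detector.predict(spike))            # -1 means the window is flagged as anomalous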

IBM Launches Machine Learning for z System Mainframes (eWeek)

IBM on Feb. 15 launched a new product that should fit in nicely with its Watson artificial intelligence service inside a mainframe-based private cloud environment: IBM Machine Learning.

The company describes this as the “first cognitive platform for continuously creating, training and deploying a high volume of analytic models in the private cloud at the source of vast corporate data stores.”

To do this, IBM has extracted the core machine-learning intellectual property from IBM Watson and will initially make it available specifically for z System mainframes.

Machine learning helps researchers design less costly optical sensors (UCLA)

The prototype device is lightweight and portable, consisting of a 3D-printed plastic housing, four light-emitting diodes (LEDs) of different colors, and a camera. As described in the study, a machine learning algorithm selects the optimal four LEDs out of thousands of possible choices, arriving at the most accurate design, along with a computational method to quantify the sensor output. This work aims to provide a design tool that other engineers and researchers can use to optimize their own low-cost optical sensor readers for various applications in health care as well as environmental monitoring.
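
The UCLA team’s own design tool isn’t reproduced here, but the underlying idea, choosing a handful of LEDs out of many candidates so that a simple model best predicts the quantity being sensed, maps onto standard feature selection. A hedged sketch with scikit-learn, using made-up readings and channel indices:

import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

# Hypothetical sensor readings: 200 samples across 400 candidate LED channels.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 400))
# Pretend the useful signal lives in four of those channels.
y = X[:, [10, 55, 200, 311]] @ np.array([1.0, -0.5, 2.0, 0.8])

selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=4, direction="forward")
selector.fit(X, y)
print(np.flatnonzero(selector.get_support()))   # indices of the chosen "LEDs"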

By using newly discovered nanofabrication methods, the research team was able to produce flexible plasmonic sensors that are robust and inexpensive enough to be disposable. These sensors can undergo “surface modification,” which ensures that only the molecules of interest interact with the amplified electric field.

5 Python libraries to lighten your machine learning load (InfoWorld)

A simple package with a powerful premise, PyWren lets you run Python-based scientific computing workloads as multiple instances of AWS Lambda functions.

One downside is that Lambda functions can’t run for more than 300 seconds. But if you have a job that takes only a few minutes to complete and needs to be run thousands of times across a data set, PyWren may be a good option for parallelizing that work in the cloud at a scale unavailable on user hardware.
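
In outline, usage follows PyWren’s executor/map pattern; the workload function below is invented for illustration:

import numpy as np
import pywren

def simulate(seed):
    # A short, embarrassingly parallel job -- well under Lambda's 300-second cap.
    rng = np.random.RandomState(seed)
    return rng.standard_normal(10**6).mean()

pwex = pywren.default_executor()            # each call runs as an AWS Lambda invocation
futures = pwex.map(simulate, range(1000))   # fan the job out 1,000 times
results = [f.result() for f in futures]     # gather the results back locally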

Tfdeploy is a partial answer to the question of how to use a trained TensorFlow model without dragging along the full TensorFlow stack. It exports a trained TensorFlow model to “a simple NumPy-based callable,” meaning the model can be used in Python with Tfdeploy and the NumPy math-and-stats library as the only dependencies.

Now the bad news: Tfdeploy doesn’t support GPU acceleration, if only because NumPy doesn’t do that.
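
Usage is correspondingly lightweight. Roughly, following the library’s Model/get/eval pattern (the file name and tensor names below are placeholders):

import numpy as np
import tfdeploy as td

model = td.Model("my_model.pkl")        # a model previously exported with tfdeploy
x, y = model.get("input", "output")     # placeholder tensor names
batch = np.random.rand(32, 784)
predictions = y.eval({x: batch})        # evaluated with NumPy only -- no TensorFlow import needed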

With Luigi, a developer can take several different unrelated data processing tasks — “a Hive query, a Hadoop job in Java, a Spark job in Scala, dumping a table from a database” — and create a workflow that runs them, end to end.
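
A minimal two-task Luigi pipeline looks roughly like this; the task names and file paths are invented for illustration:

import luigi

class DumpTable(luigi.Task):
    def output(self):
        return luigi.LocalTarget("data/raw.csv")      # hypothetical output path

    def run(self):
        with self.output().open("w") as f:
            f.write("id,value\n1,42\n")               # stand-in for a real database dump

class BuildReport(luigi.Task):
    def requires(self):
        return DumpTable()                            # Luigi runs the dependency first

    def output(self):
        return luigi.LocalTarget("data/report.txt")

    def run(self):
        with self.input().open() as f_in, self.output().open("w") as f_out:
            rows = f_in.readlines()[1:]               # skip the header row
            f_out.write("rows processed: %d\n" % len(rows))

if __name__ == "__main__":
    luigi.build([BuildReport()], local_scheduler=True)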

Kubelib provides a set of Pythonic interfaces to Kubernetes, originally to aid with Jenkins scripting. But it can be used without Jenkins as well, and it can do everything exposed through the kubectl CLI or the Kubernetes API.

PyTorch doesn’t just port Torch to Python; it adds many other conveniences, such as GPU acceleration and a library that allows multiprocessing to be done with shared memory (for partitioning jobs across multiple cores). Best of all, it can provide GPU-powered replacements for some of the unaccelerated functions in NumPy.
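
For instance, a NumPy-style matrix multiply that PyTorch can move onto a GPU when one is present (the sizes here are arbitrary):

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"   # fall back to CPU gracefully

a = torch.rand(2048, 2048, device=device)
b = torch.rand(2048, 2048, device=device)
c = a @ b                      # the matrix multiply runs on the GPU when available
print(c.mean().item())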

Data Selfie: This Free And Open Source Tool Shows How Facebook’s AI Tracks You All The Time (Fossbytes)

To answer the question of just how much Facebook learns about you, and how, there is a free and open source tool named Data Selfie. It’s basically a Chrome extension that tracks you while you’re using Facebook and shows you your own data traces. It further shows how Facebook’s machine learning algorithms use your data to know more about you.

To be precise, Data Selfie records your clicks in the newsfeed, clicks on external links in the newsfeed, time spent on posts and details of those posts, things you type and post, and the overall time devoted to the blue social network.

Predicting food preferences with sparklyr (machine learning) (Shirin’s playgRound)

This week I want to show how to run machine learning applications on a Spark cluster. I am using the sparklyr package, which provides a handy interface to access Apache Spark functionalities via R.

The question I want to address with machine learning is whether the preference for a country’s cuisine can be predicted based on preferences of other countries’ cuisines.

Before I start with the analysis, I prepare my custom ggplot2 theme and load the packages tidyr (for gathering data for plotting), dplyr (for data manipulation) and ggrepel (for non-overlapping text labels in plots).

This dataset is part of the fivethirtyeight package and provides scores for how each person rated their preference of the dishes from several countries. The following categories could be chosen:

5: I love this country’s traditional cuisine. I think it’s one of the best in the world.
4: I like this country’s traditional cuisine. I think it’s considerably above average.
3: I’m OK with this country’s traditional cuisine. I think it’s about average.
2: I dislike this country’s traditional cuisine. I think it’s considerably below average.
1: I hate this country’s traditional cuisine. I think it’s one of the worst in the world.
N/A: I’m unfamiliar with this country’s traditional cuisine.
library(fivethirtyeight)

food_world_cup[food_world_cup == "N/A"] <- NA
food_world_cup[, 9:48][is.na(food_world_cup[, 9:48])] <- 0
food_world_cup$gender <- as.factor(food_world_cup$gender)
food_world_cup$location <- as.factor(food_world_cup$location)

Before I do any machine learning, however, I want to get to know the data. First, I calculate the percentages for each preference category and plot them with a pie chart that is facetted by country.

As we can see by plotting the distribution of transformed values, they are far from being normal distributions. Moreover, we still see big differences between well known and not well known countries – this means that the data is unbalanced.

Read the whole tutorial.

Artificial intelligence grows a nose (Science Mag)

Twenty-two teams of computer scientists have unveiled a set of algorithms able to predict the odor of different molecules based on their chemical structure.

Artificial intelligence helps schedule service appointments (Automotive News)

The virtual assistant, created by the software provider Conversica, conducts email conversations with dealership service customers.

In a typical exchange, a Lee Kia customer gets an email that greets him or her by name and continues: “My name is Mandy Monroe and I wanted to make sure you got in touch with the Service Department about the first service on your Sportage. Would you like to schedule a service appointment?”

If the response is positive, the program forwards the lead to a service adviser or directs the customer to a scheduling tool on the dealership’s website.

The Caavo streaming box is built on game-changing machine vision for TV (The Verge)

The Caavo TV box was announced yesterday in a live demo with The Verge’s own Lauren Goode and Walt Mossberg at Code Media. It’s not exactly a streaming box, but rather a universal control system for every other streaming box you might have. You can plug in an Apple TV, Roku, Amazon Fire TV stick, and your cable box, and then simply ask for content. The Caavo will figure out which device has that content and play it on your TV.

Essentially, Caavo has built an AI that simulates a human user to control any device you might attach to a TV, through whatever method the system can use, whether it’s IR, HDMI-CEC, or direct control over an API on your home network.

Computer Vision/Machine Vision

This computer vision startup makes a mobile virtual mirror for eyewear retailers (technode)

With TopGlasses’ Dynamic SDK, the camera gathers real-time head images of users and puts virtual glasses on their faces. Users can adjust their head pose as they like and observe the results from different angles.

People with myopia who can’t see the real-time images clearly without their glasses can use TopGlasses’ prerecord SDK. By scanning the user’s face as they turn their head left and right, the SDK takes just a few seconds to create a realistic model of the face.
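
TopGlasses hasn’t published its SDK here, but the basic ingredients of a virtual mirror, detecting the face in each camera frame and compositing a glasses image over it, can be sketched with OpenCV. The cascade detector, PNG asset, and placement heuristic below are illustrative only:

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
glasses = cv2.imread("glasses_rgba.png", cv2.IMREAD_UNCHANGED)  # hypothetical PNG with an alpha channel

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Scale the glasses to the face width and place them across the upper third of the face.
        gw = w
        gh = int(w * glasses.shape[0] / glasses.shape[1])
        overlay = cv2.resize(glasses, (gw, gh))
        y0 = y + h // 4
        roi = frame[y0:y0 + gh, x:x + gw]
        if roi.shape[:2] != (gh, gw):
            continue                              # face too close to the frame edge
        alpha = overlay[:, :, 3:] / 255.0         # PNG alpha channel as a blend mask
        roi[:] = (1 - alpha) * roi + alpha * overlay[:, :, :3]
    cv2.imshow("virtual mirror", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):         # press q to quit
        break
cap.release()
cv2.destroyAllWindows()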


About the author

Allen Taylor

An award-winning journalist and former newspaper editor, I currently work as a freelance writer/editor through Taylored Content. In addition to editing VisionAR, I edit the daily news digest at Lending-Times. I also serve the FinTech and AR/AI industries with authoritative content in the form of white papers, case studies, blog posts, and other content designed to position innovators as experts in their niches. Also, a published poet and fiction writer.
