Daily News Digest

Featured News

AR & Holography morphed together, AI learns to write its own code by stealing from other programs, vision system inspects insulin pumps

Would Google Glass-style augmented reality work better on a smartwatch? (The Verge)

The WatchThru is a concept created by a team of researchers from the University of Bremen, Google, and Hasselt University that offers a different spin on AR with an intriguing expansion to traditional smartwatches: a second transparent display for augmented reality interactions.

The research team has a couple of interesting use cases for the additional screen. For example, the glass can show a directional arrow to help navigate around a campus while a more traditional map is shown on the regular screen. A more complex version uses external tracking cameras and sensors to keep track of the location of the watch and the wearer’s head, displaying contextual icons and overlays onto real-world objects.

New Markerless Augmented Reality Technology for Retailers, Manufacturers and Homebuilders Unveiled in Marxent VisualCommerce™ Product Update (BusinessWire)

Marxent, the leader in Augmented Reality (AR) and Virtual Reality (VR) for retailers and manufacturers, today announced an update to its VisualCommerce™ 3.0 3D product platform that will enable a markerless augmented reality solution for mobile in-field and direct-to-consumer sales. Home furnishings retailers, home builders, and manufacturers can now offer customers a flexible, endless aisle shopping experience with photo-real AR technology made available from anywhere.

Designed for omnichannel retailers and manufacturers, Marxent’s VisualCommerce™ platform is a cloud-based content management system for smart 3D products that is purpose-built for use across channels. The VisualCommerce™ platform enables retailers to invest in a single 3D product library that can then be used to create 3D experiences appropriate to various points in the retail customer journey, including markerless AR, touchscreen, VR showrooms and more.

Marxent’s proprietary markerless AR technology allows users with single camera mobile devices to place a single 3D product into a real-world context without the use of a QR code or image marker. With markerless, AR is now more portable and more accessible to a broader base of users. Using the camera on a tablet or mobile device to orient the 3D object in real space, the selected object can be viewed at a distance or up close, and manipulated to fit in real-world environments.

Hyd startup morphs AR, holography to roll out cool new tool (The Week)

The Chief Minister’s virtual pitch to constituents, in colour and 3-D, was made possible by an app created by Hyderabad-based startup Avantari, which skilfully morphed Augmented Reality (AR) and holographic techniques to create what it calls Augmented Integrated Reality, or AIR.

  1. Download the Invite CM app on your phone from the Google Play store.
  2. Print out the marker (available online) and place it on the floor.
  3. Focus the phone camera on the marker; the app will recognise it, and a 3D hologram of Fadnavis will appear on it.
  4. Hear CM Fadnavis deliver a five-minute address; you can choose between English, Hindi, and Marathi.
  5. You can also insert yourself into the hologram with the CM and snap a selfie.

If that sounds complicated, there is an explanatory video on YouTube.

Artificial Intelligence/Machine Learning

Turn anything into a nightmare cat with this machine learning tool (TechCrunch)

The machine learning software uses Google’s TensorFlow to translate one image to another. Trained on a database of over 2,000 stock images of cats, it identifies edges and fills in simple line art with its best approximation of realistic cat coloring for what it wrongly assumes, because of the limitations set upon it by its cruel creator, must be a cat.
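For the curious, the pipeline is roughly: extract edges from an input image, then ask a generator network trained on edge/photo pairs to hallucinate a cat over the line art. A minimal sketch, assuming a pix2pix-style Keras generator already trained on the cat dataset; the model file name is hypothetical, and the generator is assumed to take and return 256x256 images scaled to [-1, 1]:

```python
import cv2
import numpy as np
import tensorflow as tf

generator = tf.keras.models.load_model("edges2cats_generator.h5")  # hypothetical

def catify(image_path):
    # Extract edges the same way the training pairs were prepared.
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    x = cv2.cvtColor(edges, cv2.COLOR_GRAY2RGB)
    x = cv2.resize(x, (256, 256)).astype(np.float32) / 127.5 - 1.0
    fake_cat = generator(x[np.newaxis])[0]   # fill the line art with "cat"
    return ((fake_cat.numpy() + 1.0) * 127.5).astype(np.uint8)

cv2.imwrite("nightmare_cat.png", catify("sketch.png"))
```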

Current Trends in Tools for Large-Scale Machine Learning (TheNextPlatform)

During the past decade, enterprises have begun using machine learning (ML) to collect and analyze large amounts of data to obtain a competitive advantage. Now some are looking to go even deeper – using a subset of machine learning techniques called deep learning (DL), they are seeking to delve into the more esoteric properties hidden in the data. The goal is to create predictive applications for such areas as fraud detection, demand forecasting, click prediction, and other data-intensive analyses.

There are three types of machine learning: supervised machine learning, unsupervised machine learning, and reinforcement learning.
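As a quick illustration of the distinction, here is a minimal sketch of all three paradigms, using scikit-learn for the first two and a toy reward-driven bandit for the third; the data is made up:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# 1. Supervised: learn a mapping from labeled examples (X, y).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # labels are known
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# 2. Unsupervised: find structure in unlabeled data.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))

# 3. Reinforcement: learn from reward feedback (epsilon-greedy bandit).
true_payouts = [0.3, 0.7]                         # unknown to the agent
estimates, counts = np.zeros(2), np.zeros(2)
for _ in range(1000):
    arm = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(estimates))
    reward = float(rng.random() < true_payouts[arm])
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]
print("estimated payouts:", estimates.round(2))
```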

This Week in Hadoop and More: Machine Learning, Deep Learning, and Minimal Viable Big Data (DZone)

You need, at a minimum, a standard open-source Hadoop platform like HDP 2.5 or ODPi, with HDFS, YARN, Hive 2 with LLAP, and HBase or Phoenix as your base for massive petabyte storage.

Kafka and MQTT are also required: getting trapped in technology that cannot be installed locally and doesn’t extend to your 1GB devices is a weakness in your environment. The basic flow:

Devices, logs, and distributed servers > MQTT > Cloud X > NiFi > Phoenix, Hive LLAP, and HBase > Kafka > Spark, Spark with Deep Learning packages, and Storm.

The idea is to stream everything in near real-time continuously from all sources.
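As a rough illustration of the MQTT-to-Kafka leg of that flow, here is a minimal bridge sketch using the paho-mqtt and kafka-python client libraries; broker addresses and topic names are placeholders:

```python
import paho.mqtt.client as mqtt
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="kafka-broker:9092")

def on_message(client, userdata, msg):
    # Forward every device reading into a Kafka topic for Spark/Storm.
    producer.send("sensor-readings", key=msg.topic.encode(), value=msg.payload)

client = mqtt.Client()
client.on_message = on_message
client.connect("mqtt-broker", 1883)
client.subscribe("devices/#")        # all device topics
client.loop_forever()
```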

Bloomberg Adds Machine-Learning To Apache Solr Search (InformationWeek)

Bloomberg has found the Solr open source search engine valuable as a core element of several Bloomberg products. Now it has come up with a Solr plug-in that lets Solr users build a machine learning model of what information is most valuable to them, then apply that model to Solr searches through the plug-in.
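Bloomberg’s work was contributed upstream as Solr’s learning-to-rank (LTR) contrib module, which re-ranks the top results with a trained model at query time. A minimal query sketch using Python’s requests; the host, collection, query, and model name are placeholders and assume the LTR module is enabled with a model already uploaded:

```python
import requests

params = {
    "q": "earnings report",
    # Re-rank the top 100 keyword-search results with the trained model.
    "rq": "{!ltr model=myRankModel reRankDocs=100 efi.user_query='earnings report'}",
    "fl": "id,score",
}
resp = requests.get("http://localhost:8983/solr/news/select", params=params)
for doc in resp.json()["response"]["docs"][:5]:
    print(doc["id"], doc["score"])
```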

How you can master machine learning and AI for under $20 (Engadget)

Believe it or not, you can learn the fundamentals of Machine Learning even with little to no programming background. The Complete Machine Learning Bundle features 10 in-depth courses and more than 60 hours of hands-on training to get you up to speed on the latest machine learning skills. The combined retail value is over $700, but Engadget readers can get it today for just $19.50 with code LEARN50 at checkout.

IBM Machine Learning brings Spark to the mainframe (ZDNet)

IBM’s solution, then, is a hybrid approach. First, build a Linux cluster on which to run data ingestion from external sources, transforming, pipelining, and serving up Jupyter notebooks. Second, add IBM Machine Learning, a mainframe-based, fit-for-purpose, federated platform dedicated to machine learning that keeps the data in place. It uses the mainframe’s zIIP (System z Integrated Information Processor), which was designed for BI and analytics workloads on the mainframe without incurring MIPS charges.

All execution is dispatched to the mainframe, bringing the processing to the data rather than the reverse. To do this, IBM has essentially ported Apache Spark 1.6 to its Z-Series platform, including Spark MLlib, Spark SQL, Spark Streaming, and GraphX. IBM will also include a curated set of machine learning libraries that it has developed and, in the future, will include other models and frameworks from the open source community, such as TensorFlow.
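To give a flavor of what “keeping the data in place” looks like to a data scientist, here is a minimal sketch of a Spark 1.6-era MLlib job of the kind that would now run on the mainframe. The input path, CSV layout, and fraud-scoring framing are placeholders:

```python
from pyspark import SparkContext
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from pyspark.mllib.regression import LabeledPoint

sc = SparkContext(appName="fraud-scoring")

def parse(line):
    # Expected layout: label,feature1,feature2,...
    fields = [float(x) for x in line.split(",")]
    return LabeledPoint(fields[0], fields[1:])

data = sc.textFile("transactions.csv").map(parse)
train, test = data.randomSplit([0.8, 0.2], seed=42)

model = LogisticRegressionWithLBFGS.train(train)
labels_and_preds = test.map(lambda p: (p.label, model.predict(p.features)))
accuracy = labels_and_preds.map(lambda lp: float(lp[0] == lp[1])).mean()
print("holdout accuracy:", accuracy)
```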

AI Learns to Write Its Own Code by Stealing from Other Programs ([H]ard/OCP)

Microsoft and the University of Cambridge have developed a machine learning system called DeepCoder that can steal code from other programs and use it to create its own programs. This A.I. will eventually be able to create a program from the suggestions of a person who doesn’t know how to code. DeepCoder has learned a process called program synthesis, in which it pieces together lines of code to create a desired result like any programmer would. Machine learning allows DeepCoder to quickly scour databases of code and assemble code according to its probability of usefulness.

DeepCoder is becoming very efficient at finding code, as it will search obscure programs that many people wouldn’t have thought of. It gets faster each time it is given a problem, as it builds a database of code that worked. DeepCoder can create working programs in a fraction of a second. Right now the A.I. is limited to five lines of code, but with the right coding language it can create fairly complicated programs. Others are working on similar technology; the Massachusetts Institute of Technology, for example, has created a machine learning system that can fix broken software with working code from other programs.
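To make the idea concrete, here is a toy sketch of DeepCoder-style guided enumeration: compositions of DSL primitives are tried with the most promising operations first. The four primitives and the hard-coded “predicted” probabilities are invented for illustration; in DeepCoder a neural network infers them from the input/output examples:

```python
from itertools import product

PRIMITIVES = {
    "sort":    sorted,
    "reverse": lambda xs: xs[::-1],
    "double":  lambda xs: [2 * x for x in xs],
    "drop1":   lambda xs: xs[1:],
}
# Stand-in for the model's predicted probability that each primitive
# appears in the target program.
PREDICTED = {"sort": 0.9, "double": 0.8, "reverse": 0.2, "drop1": 0.1}

def run(prog, xs):
    # Apply the primitives left to right.
    for name in prog:
        xs = PRIMITIVES[name](xs)
    return list(xs)

def synthesize(examples, max_len=3):
    names = sorted(PRIMITIVES, key=PREDICTED.get, reverse=True)
    for length in range(1, max_len + 1):
        for prog in product(names, repeat=length):   # likely ops first
            if all(run(prog, i) == o for i, o in examples):
                return prog
    return None

# Find a program mapping [3, 1, 2] -> [2, 4, 6]: sort, then double.
print(synthesize([([3, 1, 2], [2, 4, 6])]))
```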

Machine-learning algorithms can predict suicide risk more readily than clinicians, study finds (Newsweek)

New research shows that machine-learning algorithms can dramatically improve our ability to predict suicide risk. In a new survey in the February issue of Psychological Bulletin, researchers looked at 365 studies from the past 50 years that included 3,428 different measurements of risk factors, such as genes, mental illness, and abuse. After a meta-analysis, or a synthesis of the results in these published studies, they found that no single risk factor had clinical significance in predicting suicidal ideation, attempts, or completion.

3 cool machine learning projects using TensorFlow and the Raspberry Pi (OpenSource.com)

Caltrain project
The Caltrain project is a Silicon Valley Data Science trainspotting project.

Because delay estimates provided by Caltrain “can be a bit … off” (their words), the team wanted to integrate new data sources for delay predictions beyond what was available from Caltrain’s API. They outline three questions they wanted an IoT Raspberry Pi train detector to answer (a minimal detection sketch follows the list):

  1. Is there a train passing?
  2. Which direction is it going?
  3. How fast is the train moving?
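Here is a minimal sketch of how the first two questions could be answered with OpenCV 4 background subtraction on the Pi camera. The camera index and blob-area threshold are guesses; answering the speed question would additionally need a pixels-to-meters calibration:

```python
import cv2

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2()
last_x = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    big = [c for c in contours if cv2.contourArea(c) > 5000]
    if big:                                   # 1. a train is passing
        x, _, w, _ = cv2.boundingRect(max(big, key=cv2.contourArea))
        center = x + w // 2
        if last_x is not None:                # 2. direction from the drift
            print("train moving", "right" if center > last_x else "left")
        last_x = center
    else:
        last_x = None
```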

Meter Maid Monitor
Naulty says Meter Maid Monitor, which combines TensorFlow image classification with a Raspberry Pi motion detection and speed-measuring program, is an effort to avoid parking tickets.

When an image of a moving car in the field of view is captured, it is analyzed by a TensorFlow model trained to recognize meter maid vehicles. If the image is a meter maid match, a message is sent via Twilio with a link to the image.
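A minimal sketch of that classify-and-alert step: the Keras model file is hypothetical (a binary classifier assumed to be trained on meter maid vehicles), the phone numbers and threshold are placeholders, and the Twilio call needs real account credentials:

```python
import tensorflow as tf
from twilio.rest import Client

model = tf.keras.models.load_model("meter_maid_classifier.h5")  # hypothetical

def check_and_alert(image_path, image_url):
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)[tf.newaxis] / 255.0
    if float(model(x)[0, 0]) > 0.9:           # likely a meter maid vehicle
        Client("ACCOUNT_SID", "AUTH_TOKEN").messages.create(
            to="+15550001111", from_="+15550002222",
            body="Meter maid spotted: " + image_url)
```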

TensorFlow on the farm
An article on the Google Cloud Platform blog, How a Japanese cucumber farmer is using deep learning and TensorFlow, explains that Koike’s mother spent up to eight hours a day sorting cucumbers by characteristics such as size, thickness, color, texture, small scratches, and whether they have prickles, attributes that determine the price they command on the market.

UCLA researchers combine 3D printing with machine learning to develop medical sensors (3ders.org)

Using a 3D printed prototype detector with a sensor that can be modified by machine learning techniques, the researchers have demonstrated a new, more efficient way to detect tiny items such as cancer biomarkers, viruses, and proteins.

The prototype makes use of machine learning in order to decide what type of light source should be used in the plasmonic sensing process, as the technique allows a particular algorithm to adapt to the data it is presented with and to “train” itself to make decisions.

The reader consists of four differently coloured LEDs, a camera, and a 3D printed plastic housing. To use the device, a specimen is applied to the sensor, which is then fitted into a cartridge that is placed inside the reader to be automatically measured and analysed. 3D printing allows the prototype to be made cheaply while remaining durable and adaptable to different situations.

Machine Learning Cuts Lawyer Hours by 360,000 a Year for JPMorgan ([H]ard/OCP)

JPMorgan has developed machine learning software that saves the company 360,000 lawyer hours per year. Normally it takes a team of lawyers many hours to interpret the 12,000 commercial-loan agreements that JPMorgan processes per year. With the new Contract Intelligence software, nicknamed COIN, JPMorgan is finding far fewer mistakes being made, and it takes only seconds to process a form.

Olay is Bringing Machine Learning to Skin Care (ChipChick)

Step one is taking a selfie.

Here’s where things get advanced. The app will pore over your selfie pixel by pixel, mercilessly picking apart your face and identifying flaws — I guess it’s easier to take when it’s a machine doing the nitpicking. You’ll get breakdowns for the forehead, cheeks, mouth, crow’s feet and under-eye, with detailed information about how old each part looks and what can be done to make them look not so old. Olay is using a machine learning engine powered by Nvidia’s deep learning platform, so I don’t think they’re exaggerating about the pixel thing.

Computer Vision/Machine Vision

Lampix Simulator pre-launch (Lampix Email)

We are preparing to fully launch the Lampix Simulator. We have selected you, a sub-group of highly appreciated Lampix fans, to help us test it and provide the first round of feedback.

Once you have a simulation you like, please email it to us at support@lampix.co. We will run it on a physical Lampix, film what it does, and send you back the results. We will partner with the best app makers and promote them on the Lampix platform. We will also give the best app makers the earliest access to the physical Lampix, along with any other benefits we can.

This simulator allows you to build apps as if you were working with a physical Lampix. You can simulate placing objects and fingers pushing projected buttons. You can simulate things moving. You can simulate anything you would do with a physical Lampix.

Insulin pumps inspected with vision system (Today’s Medical Developments)

Flex (formerly Flextronics) builds an insulin pump – about 4cm x 3cm x 1cm – that attaches to the patient’s skin. A program running on the patient’s personal digital assistant (PDA) controls the pump and keeps track of when an insulin injection is needed. When the PDA gives the command, a small needle pops out of the device and injects insulin into the patient. The device typically remains attached to the patient for about three days and while attached, the patient can shower or swim. Inside the device is a complex mechanism that safely inserts the needle into the patient’s skin. Six different areas of the pump interior need close-tolerance inspection to ensure safe operation.

The company used six Cognex In-Sight vision systems, which Flex engineers programmed using a graphical interface to select pre-built algorithms.

10 uses of computer vision in marketing & customer experience (Econsultancy)

1. Smarter online merchandising
AI-based software such as Sentient Aware is now allowing for visual product discovery, which negates the need for most metadata and surfaces similar products based on their visual affinity. This means that, as an alternative to a standard filtering system, shoppers can select a product they like and be shown visually similar products.

2. More effective retargeting

3. Real-world product & content discovery
Pinterest recently launched a tool called Lens, which functions like Shazam but for the visual world. It gives the consumer the ability to point their smartphone camera at an object and perform a Pinterest search to surface that particular product or related content.

4. Image-aware social listening

5. Frictionless store experiences
Computer vision technology then tracks the customer around the store (presumably alongside some form of phone tracking, though Amazon has not released full details) and sensors on the shelves detect when the customer selects an item.

6. Retail analytics
Density is a startup that anonymously tracks people as they move around work spaces, using a small piece of hardware that can track movement through doorways.

7. Emotional analytics

8. Image search

9. Augmented reality

10. Direct mail processing
Royal Mail in the UK spent £150m in 2004 on an automated facility near Heathrow that scans the front and back of envelopes and translates addresses into machine-readable code.

Capturing the Third Dimension for Machine Vision (Quality Magazine)

3D imaging brings more detailed analysis and insight to manufacturing and quality inspection processes where depth information can help verify proper assembly and detect surface defects. Where 2D inspection provides X and Y values for an object, depth information can be used to verify the proper position and height of components on a circuit board or automate pass/fail decisions to screen out flawed products.

Time-of-flight systems are an efficient way to calculate depth data and distances: they measure the time required for light to travel from the imaging device to the object. Time-of-flight relies on two different methods to scan objects. Continuous-wave systems measure the phase shift of a modulated light source and support only low-resolution scanning. The other approach, which measures distance based on precise light pulses, is better suited to higher-resolution requirements.

In both systems, the light source hits the inspected object and is reflected back to a camera sensor.
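In code, the two depth calculations reduce to a couple of lines; this minimal sketch uses illustrative numbers:

```python
import math

C = 299_792_458.0                     # speed of light, m/s

def depth_from_pulse(round_trip_s):
    # Pulsed: light covers the camera-object distance twice.
    return C * round_trip_s / 2

def depth_from_phase(phase_rad, mod_freq_hz):
    # Continuous-wave: distance is proportional to the measured phase
    # shift of the modulated light source (unambiguous up to 2*pi).
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

print(depth_from_pulse(6.67e-9))              # ~1 m round trip
print(depth_from_phase(math.pi / 2, 30e6))    # ~1.25 m at 30 MHz modulation
```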

A more advanced structured light system is now in development that combines digital fringe projection and patented software techniques to match the cost benefits of digital systems while rivaling the performance of analog approaches.

This fringe projection system uses a low-cost digital micromirror device (DMD) projector, high-speed machine vision camera, and patented software techniques that simulate analog sine waves. This results in systems that are far less expensive than an analog approach, while delivering much higher accuracy than digital systems.

A conventional structured light system can generate about 4 million 3D pixels per second with an “easy” matte object. In comparison, the advanced digital fringe projection system generates approximately 12 million 3D pixels per second under the same conditions, allowing for faster inspection and increased throughput. The system can find defects that are as small as 30 micrometers (µm) wide and 15 µm deep—nearly impossible to see with the human eye.
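The article does not disclose the patented software, but digital fringe projection systems generally recover a per-pixel phase map from a few phase-shifted sinusoidal patterns, which is then unwrapped and calibrated to height. A minimal sketch of the standard three-step version, with a synthetic check:

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    # i1..i3: images captured under patterns shifted by -120/0/+120 degrees.
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic check: recover a known phase ramp from three fringe images.
phi = np.linspace(0.0, np.pi, 5)
i1, i2, i3 = (0.5 + 0.5 * np.cos(phi + d)
              for d in (-2 * np.pi / 3, 0.0, 2 * np.pi / 3))
print(wrapped_phase(i1, i2, i3))   # matches phi (up to 2*pi wrapping)
```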

About the author

Allen Taylor

An award-winning journalist and former newspaper editor, I currently work as a freelance writer/editor through Taylored Content. In addition to editing VisionAR, I edit the daily news digest at Lending-Times. I also serve the FinTech and AR/AI industries with authoritative content in the form of white papers, case studies, blog posts, and other content designed to position innovators as experts in their niches. I am also a published poet and fiction writer.
