Daily News Digest

Featured News

AR used for skyscraper advertising, glasses help blind people read & recognize faces, in-cabin tech prevents distracted driving

Augmented Reality

DOOH start-up brings augmented reality ads to skyscrapers (Mediatel)

It sounds like science fiction, but augmented reality adverts the size of skyscrapers could be on the digital out-of-home horizon.

New start-up Lightvert has created technology capable of showing giant adverts up to 200m high – visible only in the individual viewer’s eye.

The company’s ECHO system places a narrow strip of reflective material on the side of a building while a high-speed light scanner projects light off the reflector towards the viewer, creating large-scale images that are “captured for a brief moment in the viewer’s eye through a ‘persistence of vision’ effect.”

Images can be saved on mobile devices and shared on social media.

Sony files patent for augmented reality contact lens (Leisure Management)

Sony has filed a patent for a new kind of augmented reality technology – a simple set of contact lenses with the ability to project or record live video.

The lenses would be switched on and off by a user’s blink, with the contacts able to detect whether a blink was deliberate, and the image capture technology and data storage would be contained within the lens itself.

Morphi Releases Augmented Reality Viewer for Their 3D Modeling Apps (3DPrint.com)

Morphi is an easy-to-use 3D modeling and 3D printing app for the iPad and Mac. It comes in two flavors: the Morphi app, which is free with in-app purchases, and Morphi Edu, its sister app, which allows for iPad in-app volume discount purchases.

The new viewer enables users to place and edit their 3D models in their real-world environment using the iPad’s camera.

Augmented reality taking shape for Japanese builders (Asian Review)

Obayashi is developing technology for camera-equipped tablets that will superimpose 3-D images of a completed or in-progress project over pictures of a job site. Users preload ground plans for a project, then choose an angle to view the scene from.

The augmented views would let workers retrofitting buildings with seismic reinforcement see how trusses fit between columns, for example. They would also make it easier to show where construction walkways and temporary fencing would go.

Shimizu is refining a way to easily show the locations of such underground structures as sewage and gas pipes via tablet. Once supplied with maps of the underground objects, the method uses satellite technology to pinpoint where the objects are.

Builders still generally rely on paper maps to locate underground objects. Shimizu can already overlay images shot via tablet camera with the locations of underground structures. Within the year, it aims to render the objects in a more realistic 3-D display.

Artificial Intelligence/Machine Learning

How machine learning is changing crime-solving tactics (PHYS.org)

Two researchers from the Forensics and National Security Sciences Institute (FNSSI) have turned to computer technology to assist complicated profile interpretation, specifically when it comes to samples containing DNA from multiple people.

The duo’s method, dubbed Probabilistic Assessment for Contributor Estimate (PACE), is patent pending.

After the researchers trained their algorithms on massive amounts of data from the New York City Office of the Chief Medical Examiner and the Onondaga County Center for Forensic Sciences, PACE’s prediction powers were put to the test: identifying the number of people included in mixed samples with known numbers of contributors. It passed with flying colors.
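
PACE is patent pending and its internals are not public, so the following is only a hedged sketch of the task’s general shape: training a classifier to predict the number of contributors from numeric summaries of mixed DNA profiles. The features, labels, model choice, and scikit-learn usage here are illustrative assumptions, not the researchers’ method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: rows are DNA mixture profiles summarized as
# numeric features (e.g., allele counts per locus); labels are the known
# number of contributors. The real work trained on casework data.
rng = np.random.default_rng(42)
X = rng.random((500, 20))            # 500 profiles, 20 summary features
y = rng.integers(1, 5, size=500)     # known contributor counts (1-4)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # test against known answers
print("cross-validated accuracy:", scores.mean())
```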

Artificial intelligence OrCam glasses help blind people read and recognise loved ones (Mirror)

Dubbed a “hearing aid for the eyes”, the OrCam device sits on the side of a pair of spectacles and uses optical character recognition (OCR) technology to read printed materials such as newspapers, magazines, road signs and products in shops; it even recognises the faces of loved ones.

The device contains a tiny battery-operated camera and computer which automatically zooms in to find where the text is on a page.

The user then presses a button to trigger the reading, or indicates what they want to read by pointing a finger at the text.

The smart camera then photographs it, stores it and, using artificial vision software, reads it back out to the user via an earpiece.

And the cutting-edge technology can also be programmed to identify faces of friends and family, or favourite products for the weekly shop.
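
OrCam’s software is proprietary; as a rough open-source analogue of the photograph, recognize, read-aloud pipeline described above, one could chain an OCR engine to a text-to-speech engine. The file name is hypothetical, and pytesseract requires a local Tesseract installation.

```python
from PIL import Image
import pytesseract   # Python wrapper for the Tesseract OCR engine
import pyttsx3       # offline text-to-speech engine

image = Image.open("page.jpg")             # hypothetical captured photo
text = pytesseract.image_to_string(image)  # OCR the printed text

engine = pyttsx3.init()                    # local text-to-speech
engine.say(text)                           # read the recognized text aloud
engine.runAndWait()
```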

Anomaly Detection for Time Series Data with Deep Learning (InfoQ)

To build a neural network, you need a basic understanding of the training process and how the net’s output is generated.

The net’s input nodes receive a numeric array, perhaps a multidimensional array called a tensor, that represents the input data.

Initially, the coefficients are random; that is, the network is created knowing nothing about the structure of the data. The activation function of each node determines the output of that node given an input or set of inputs. So the node fires or does not, depending on whether the strength of the stimulus it receives, the product of the input and the coefficient, surpasses the threshold of activation.

[Figure: weighted inputs combining at a hidden node]

In a so-called dense or fully connected layer, the output of each node is passed to all nodes of the subsequent layer. This continues through all hidden dense layers to end with the output layer, where a decision about the input is reached. At the output layer, the net’s decision about the input is evaluated against the expected decision (e.g., do the pixels in this image represent a cat or a dog?). The error is calculated by comparing the net’s guess to the true answer contained in a test set, and using that error, the coefficients of the network are updated in order to change how the net assigns importance to different pixels in the image. The goal is to decrease the error between generated and expected outputs—to correctly label a dog as a dog.
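
To make the forward pass and weight update concrete, here is a minimal NumPy sketch of one input passing through a single dense hidden layer to an output node, followed by a gradient step on the output weights. The shapes, values, and learning rate are illustrative assumptions, and the hidden-layer update is omitted for brevity.

```python
import numpy as np

np.random.seed(0)

def sigmoid(x):
    # Activation function: each node squashes the product of its
    # inputs and coefficients into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

x = np.random.rand(4)        # input tensor (e.g., pixel values)
y_true = 1.0                 # expected decision from the test set

W1 = np.random.randn(4, 3)   # initially random coefficients
W2 = np.random.randn(3, 1)

# Forward pass: each node's output feeds every node of the next layer.
hidden = sigmoid(x @ W1)
y_hat = sigmoid(hidden @ W2)         # the net's decision about the input

# The error between the net's guess and the true answer drives the update.
error = y_hat - y_true
# Gradient step on the output weights (sigmoid derivative is y * (1 - y)).
W2 -= 0.1 * np.outer(hidden, error * y_hat * (1 - y_hat))
```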

15 Open Source Artificial Intelligence Tools (PCQuest)

Caffe is a deep learning framework made with expression, speed, and modularity in mind.

Deeplearning4j is the first commercial-grade, open-source, distributed deep-learning library written for Java and Scala. Integrated with Hadoop and Spark, DL4J is designed to be used in business environments on distributed GPUs and CPUs.

H2O was written from scratch in Java and seamlessly integrates with the most popular open source products like Apache Hadoop and Spark to give customers the flexibility to solve their most challenging data problems.

MLlib fits into Spark’s APIs and interoperates with NumPy in Python and R libraries (as of Spark 1.5). You can use any Hadoop data source (e.g. HDFS, HBase, or local files), making it easy to plug into Hadoop workflows. It includes a host of machine learning algorithms for classification, regression, decision trees, recommendation, clustering, topic modeling, feature transformations, model evaluation, ML pipeline construction, ML persistence, survival analysis, frequent itemset and sequential pattern mining, and distributed linear algebra and statistics (a minimal pyspark usage sketch follows this list).

OpenNN is an open source class library written in C++ programming language which implements neural networks, the main area of machine learning research. The main advantage of OpenNN is its high performance.


The OpenCyc Platform is your gateway to the full power of Cyc, the world’s largest and most complete general knowledge base and commonsense reasoning engine.

Apache PredictionIO (incubating) is an open source Machine Learning Server built on top of a state-of-the-art open source stack that lets developers and data scientists create predictive engines for any machine learning task. Those engines can be deployed as web services that respond to dynamic queries in real time.
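
As promised above, a minimal pyspark sketch of MLlib usage: assembling feature columns and fitting a classifier. The file path and column names are illustrative assumptions, and a local Spark installation is presumed.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()

# Hypothetical Hadoop data source with two numeric features and a binary label.
df = spark.read.csv("hdfs:///data/events.csv", header=True, inferSchema=True)

# MLlib models consume a single vector column of features.
assembler = VectorAssembler(inputCols=["feature1", "feature2"],
                            outputCol="features")
train = assembler.transform(df)

model = LogisticRegression(featuresCol="features", labelCol="label").fit(train)
model.transform(train).select("label", "prediction").show(5)

spark.stop()
```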

Startup Unveils Machine Learning Products Based on Novel Approach to AI (top500.org)

Gamalon Inc., which emerged from stealth mode this week, announced two machine learning products based on an in-house technology known as Bayesian Program Synthesis (BPS). The company claims BPS can perform machine learning tasks 100 times faster than conventional deep learning techniques, while providing more accurate results.

Unlike deep learning, which often needs millions of data examples to train a neural network, a Bayesian model can be built with far fewer examples. The examples have to be of better quality, but the process cuts down on computation time and human effort dramatically. So instead of collecting a million tagged data items and commandeering a GPU-accelerated cluster for a few days, a model can be built on a laptop in just minutes using a handful of examples.
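
BPS itself is proprietary; purely to illustrate the general Bayesian point, that a useful model can be fit from a handful of examples on a laptop, here is a minimal conjugate Beta-Binomial update in Python. The five example labels and the uniform prior are assumptions for illustration.

```python
from scipy.stats import beta

examples = [1, 1, 0, 1, 1]           # five labeled examples (success/failure)

a, b = 1, 1                          # uniform Beta(1, 1) prior
a += sum(examples)                   # posterior: Beta(a + successes,
b += len(examples) - sum(examples)   #                 b + failures)

posterior = beta(a, b)
print("posterior mean:", posterior.mean())          # ~0.71 after 5 examples
print("95% credible interval:", posterior.interval(0.95))
```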

One example Gamalon offers is for company inventory data, which is often stored in different formats and uses different nomenclature for identifying the same thing (i.e., CA=Calif=California, 1teaspoon=5ml=5cc, and so on). The Structure product can normalize all the data into a master inventory and do it very quickly. According to Gamalon, their pre-alpha customers have used the software to “accomplish in minutes with twice the accuracy what previously took large teams of people months or even years.”
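
Gamalon’s Structure product is proprietary; the following toy sketch only illustrates the normalization task itself, mapping variant spellings onto one canonical form. The lookup table and helper function are hypothetical.

```python
# Hypothetical table mapping variant spellings to a canonical form.
CANONICAL = {
    "ca": "California", "calif": "California", "california": "California",
    "1teaspoon": "5 ml", "5ml": "5 ml", "5cc": "5 ml",
}

def normalize(value: str) -> str:
    """Return the canonical form of a value, or the value unchanged."""
    key = value.strip().lower().replace(" ", "")
    return CANONICAL.get(key, value)

records = ["CA", "Calif", "5cc", "1 teaspoon"]
print([normalize(r) for r in records])
# ['California', 'California', '5 ml', '5 ml']
```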


Using machine learning to detect epilepsy in children (Huffington Post)

First, subtle abnormalities in the brain were identified by a pediatric neuroradiologist; these were then transformed into a range of features, including the thickness and folding of the brain, that could be used to train the algorithm.

When the algorithm was put through its paces, it was able to correctly identify the brain abnormality in 73% of patients.

Centrify is adding machine learning to their identity platform to help spot stolen credentials (Brian Madden)

The new offering, called Centrify Analytics Service, will use machine learning to evaluate the activity of individual users, and then watch for behavior that could indicate potentially compromised or stolen credentials, such as logging in from a new location or from a new device. The Analytics Service will create a risk score for each event; scores can be high, medium, or low, or expressed as a numerical value, and can be fine-tuned.

Centrify told me that it takes about two weeks for the machine learning engine to learn what normal user behavior looks like, and then it can be used for production policies. Subsequently, it will continue to update user behavior baselines over time.
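
Centrify has not published its model; purely as a toy illustration of per-event risk scoring against a learned baseline of normal behavior, consider the sketch below. The baseline contents, weights, and thresholds are invented for illustration.

```python
# Hypothetical baseline learned from ~two weeks of one user's activity.
BASELINE = {"locations": {"Boston"}, "devices": {"laptop-123"}}

def risk_score(event: dict) -> tuple[int, str]:
    """Score a login event; new locations/devices raise the score."""
    score = 0
    if event["location"] not in BASELINE["locations"]:
        score += 50                  # login from a new location
    if event["device"] not in BASELINE["devices"]:
        score += 40                  # login from a new device
    label = "high" if score >= 70 else "medium" if score >= 40 else "low"
    return score, label

print(risk_score({"location": "Kyiv", "device": "phone-999"}))  # (90, 'high')
```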

Computer Vision/Machine Vision

New Video Systems Now Able to ‘See’ Vehicles, Lanes, To Better Track Driver Behavior (Transport Topics)

The video safety systems in truck cabs are becoming smarter. Rather than simply recording video of events on the road, some of the latest systems have gained the ability to “see” the truck’s environment and draw conclusions about driver behavior.

The event recorder, which has an outside-facing lens and an inside-facing lens, is equipped with sensors that make it possible for the software to see the environment ahead, Kerker said.

Read more at: http://ttnews.com/articles/basetemplate.aspx?storyid=44855&t=Cab-Cameras-Gain-%E2%80%98Machine-Vision-

Amazon Rekognition AI Tech Uses Machine Vision To Guess Your Age (HotHardware)

With his baby photo seen above, Rekognition was able to identify that there is a face in the picture, with 99.7% confidence, and that the face is male, with a much less confident 62.2%. It further determined that the boy was between 0 – 2 years of age, and that he was smiling – both spot-on results.

In some cases, Rekognition could prove to be a better guesser than a real human, while in others, the advantage could flip-flop.
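
For context, face analysis like this is exposed through the Rekognition API; here is a minimal boto3 sketch of the call that returns such attributes. The image file name is an illustrative assumption, and configured AWS credentials are presumed.

```python
import boto3

client = boto3.client("rekognition")

with open("baby_photo.jpg", "rb") as f:   # hypothetical local image
    response = client.detect_faces(Image={"Bytes": f.read()},
                                   Attributes=["ALL"])

for face in response["FaceDetails"]:
    print("face confidence:", face["Confidence"])   # e.g., 99.7
    print("gender:", face["Gender"])                # value + confidence
    print("age range:", face["AgeRange"])           # {'Low': 0, 'High': 2}
    print("smiling:", face["Smile"])
```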

3D vision enables random robotic bin picking (Vision-Systems)

A single ScanningRuler 3D camera is mounted to a motion system above the two bins. The ScanningRuler 3D camera is essentially a factory-calibrated 3D sensor that combines a 756 x 512 pixel CMOS sensor camera and a built-in, servo-driven, class 2M, 660 nm, red laser scanner that sweeps through the volume of view (VOV) to create a 3D point cloud of the parts in the bin.

Connected to the 3D sensor, a PC running the PLB software analyzes and compares the information contained in the point cloud with the CAD model of the part to determine the next best pick and provide its coordinates (x, y, z, yaw, pitch, roll) to the robot controller, which does all the motion planning.
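
Sick’s PLB software matches the point cloud against a CAD model of the part; that pipeline is proprietary, so the sketch below only illustrates the simpler idea of estimating a 6-DOF pose (x, y, z, yaw, pitch, roll) for a segmented point-cloud cluster from its centroid and principal axes. The random points stand in for real scan data.

```python
import numpy as np

def estimate_pose(points: np.ndarray):
    """Estimate (x, y, z, yaw, pitch, roll) for an (N, 3) point cluster."""
    centroid = points.mean(axis=0)               # x, y, z of the part
    _, _, Vt = np.linalg.svd(points - centroid)  # principal axes via SVD
    R = Vt.T                                     # rotation matrix estimate
    yaw = np.arctan2(R[1, 0], R[0, 0])           # ZYX Euler angles
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return (*centroid, yaw, pitch, roll)

points = np.random.rand(1000, 3)                 # stand-in for a 3D scan
print(estimate_pose(points))
```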

The robot then picks the part and places it in the desired location to enter the first heat treat process. At the same time, a Tolomatic (Hamel, MN, USA; www.tolomatic.com) slide and servo-driven belt drive from Allen Bradley (Milwaukee, WI, USA; http://ab.rockwellautomation.com) index the 3D sensor to the other bin for scanning. By alternating between two bins, MWES engineers were able to minimize the impact of the 4 to 4.5 second 3D sensor scan time, and meet the customer’s average 14 second per pick cycle time requirement.

If the PLB system is unable to determine a best pick scenario, the robot is programmed to shuffle parts around in the bin with its magnetic gripper assembly from Magswitch Technology (Lafayette, CO, USA; www.magswitch.com.au/) before the next 3D image is acquired.

Vision System Inspects X-ray Dosimeter Badges – Helmholtz-Zentrum (MVR)

Previously these 120,000 film badges were evaluated manually. To speed this inspection and increase reliability, the Helmholtz-Zentrum has developed a machine-vision system to automatically inspect these films. The film from each dosimeter badge is first mounted on a plastic adhesive foil, which is wound into a coil. This coil is then mounted on the vision system so that each film element can be inspected automatically (see figure). To analyze each film, a DX4 285 FireWire camera from Kappa optronics (Gleichen, Germany) is mounted on a bellows stage above the film reel.

Data from this camera is then transferred to a PC and processed using HALCON 9.0 from MVTec Software (Munich, Germany). Resulting high-dynamic-range images are then displayed using an ATI FireGL V3600 graphics board from AMD (Sunnyvale, CA, USA) on a FlexScan MX190S display from Eizo (Ishikawa, Japan). Before the optical density of the film is measured, its presence and orientation must be determined. As each film moves under the camera system’s field of view, this presence and orientation task is computed using HALCON’s shape-based matching algorithm.
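
HALCON’s shape-based matching is proprietary; as a rough open-source analogue of the presence-and-orientation check, OpenCV template matching can locate a film element in the camera’s field of view. The file names and confidence threshold are illustrative assumptions.

```python
import cv2

# Hypothetical grayscale frame from the film reel and a reference template.
frame = cv2.imread("film_reel_frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("film_template.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation: high peak means the film element is present.
result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

present = max_val > 0.8    # illustrative confidence threshold
print("film present:", present, "at", max_loc, "score", round(max_val, 3))
```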

eyeSight Technologies Announces Groundbreaking In-Cabin Sensing Technology to Prevent Distracted Driving (Marketwired)

eyeSight Technologies, a leader in Human Machine Interface (HMI) and user awareness embedded computer vision, today announces its automotive solution focusing on in-cabin sensing technology. With the driver as the main focal point, eyeSight offers a complete solution that provides driver awareness, driver recognition and gesture control.

Driver awareness is enabled by a new level of learning and fine-tuned computer vision, ensuring that once drowsiness or distraction is detected, eyeSight’s solution will inform car systems to provide an alert or take proactive action through the safety systems, such as increasing the distance from the car ahead with adaptive cruise control.

Merely sitting in the driver’s seat will prompt the car to adjust to the specific driver’s preferences such as seat position, temperature, volume level, music selection, favorite stations, and more.


About the author

Allen Taylor

An award-winning journalist and former newspaper editor, I currently work as a freelance writer/editor through Taylored Content. In addition to editing VisionAR, I edit the daily news digest at Lending-Times. I also serve the FinTech and AR/AI industries with authoritative content in the form of white papers, case studies, blog posts, and other content designed to position innovators as experts in their niches. I am also a published poet and fiction writer.
