Daily News Digest: Featured News

YouCam live streaming makeup app, machine learning used to combat human trafficking, Nvidia’s computer brains for automated cars

News Summary

Augmented Reality

Snaphash is an augmented reality weed doctor for your iPhone (The Next Web)

When opened, you’re met with a wall of images, like a sort of cannabis social network — think Instagram, but for pot. The wall shows off your plants, as well as those of your friends and allows you to like and comment much like you would on any other social network.

When paired with the camera, though, the wall of weed images serves a different purpose. An in-app platform dubbed “Bud Rx” lets users share photos of problematic plants and try to diagnose what caused the issue. After uploading a photo, you’re given the option to tag the image and share it with your community in an attempt to fix what ails your sick plants.

You can also scan potential grow locations, and an augmented reality feature analyzes the site to suggest the optimal location and setup for the space.

This self-driving electric car might as well be your living room on wheels (Mashable)

An AI assistant called NOMI controls the car and gauges how you feel (tired, etc.) with the help of sensors. It also learns passenger interests and displays customized information on its huge transparent dashboard display.

In addition to providing panoramic views, the entire glass canopy doubles as a surface for displaying augmented reality experiences. It could, for example, overlay a movie for entertainment, constellation information as the car drives under the stars, useful travel info (e.g., the names of upcoming mountains or buildings), or even a video chat.

Augmented Reality Is Already Improving Worker Performance (Harvard Business Review)

Upskilling technologies, a partnership between humans and smart machines, can augment workers’ abilities, resulting in dramatically improved performance, greater safety, and higher worker satisfaction. One of the best examples of this type of partnership is the industrial use of augmented reality (AR) smart glasses in manufacturing, warehousing, and field service environments that overlay computer-generated video, graphic, or text information onto physical objects — for example, step-by-step repair instructions hovering over a machine part, visually guiding a worker through the job. As we’ll show, wearable AR devices are now being used in manufacturing and industrial settings and can boost workers’ productivity on an array of tasks the first time they’re used, even without prior training. These technologies increase productivity by making workers more skilled and efficient, and thus have the potential to yield both more economic growth and better jobs.

YouCam adds live streaming feature to augmented reality makeup app (Mobile Commerce Daily)

The new feature in the YouCam app allows customers to live stream their applications of augmented reality makeup for an audience. From there, the products used can be purchased through the app.

Now consumers can live stream themselves applying the virtual makeup to show their friends either in an instructional capacity or to get feedback on which makeup works best.

Meta 2 is the Augmented Reality of Tomorrow, Today (Tom’s Guide)

Set to debut sometime this year, the device is available for pre-order for $949. It’s aggressively priced to get the hardware into more developers’ hands, at least more than the Meta 2’s primary competitor, Microsoft’s HoloLens. Priced at a wallet-emptying $3,000, the HoloLens is attempting to cement its position as the de facto augmented reality device, by way of simply being Microsoft. But through a series of demos, I found that the Meta 2 has the HoloLens beat on a number of fronts.

But the biggest difference between the two systems is the quality of the images rendered in augmented reality. The Meta 2 boasts a wide 90-degree field of view (compared to the HoloLens’ 40 degrees); during my brain demo, it looked like I was staring up at a galaxy of twinkling stars, which were in actuality electricity-firing neurons. Thanks to the 2560 x 1440 resolution, I could clearly see the glowing veins and folds that make up the organ. In another demo, I could see individual strands of the presenter’s hair.

Meta 2 has an operating environment that runs on top of Windows, and it takes a mere 3-5 seconds for the headset to scan a room for a proper AR overlay. The myriad of sensors working in tandem can then create 3D holograms in real time, with which you can interact.

The Washington Post preps its augmented reality push (Digiday)

The Post first used AR last year to explain the events that led up to Freddie Gray’s death following his arrest in Baltimore in 2015. But people had to download a separate app to access the experience. Since then, the Post has been building an AR framework into its two existing apps — its magazine-style “Rainbow” app and its more traditional, newspaper-style app — to take friction out of the process.

FHWA exploring augmented reality for road construction (ConstructionDIVE)

With the incorporation of AR into highway quality assurance, the nation’s roadways are quickly becoming decidedly smart, highlighted by a 35-mile stretch of U.S. Route 33 in Ohio that is being used as a test bed for IoT sensors, weather-resistant materials science, and autonomous vehicle proof-of-concept projects.

Artificial Intelligence/Machine Learning

Machine Learning: A New Weapon In The War Against Forced Labor And Human Trafficking (Fast Company)

Machine learning and similar statistical tools can identify suppliers of goods and services that are more likely to involve forced labor, whether they’re electronics manufacturers in developing countries or escort services in the United States. In the U.S., where sex work is frequently advertised online, leaving a digital trail, these techniques can also help guide law enforcement to sex trafficking gangs and their victims.

Marinus Analytics’ principal product, called Traffic Jam, is offered to law enforcement officials. It uses machine learning to analyze these online sex trade postings through text and image analysis, with the goal of finding ads that share similarities with examples previously flagged in trafficking cases.

Authorities can also use Traffic Jam to follow ads tied to a clue like a name or a phone number, potentially finding sets of victims or advertisements linked to the same traffickers. Other organizations offer their own tools to law enforcement to address similar facets of the problem: Thorn, a California nonprofit founded by actors Demi Moore and Ashton Kutcher to combat child sex trafficking, offers a machine learning-powered tool that highlights escort ads that may be fronts for trafficking. The software is now used by about 4,000 officers across the U.S., according to Thorn CEO Julia Cordua.
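The matching step at the heart of such tools can be sketched generically. The snippet below is not Traffic Jam’s or Thorn’s actual code; it is a minimal illustration using scikit-learn’s TF-IDF vectors and cosine similarity, with placeholder ad text and an arbitrary review threshold:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder ad text; a real system would also use image analysis.
flagged_ads = [
    "example text from an ad previously tied to a trafficking case",
    "another flagged ad with shared phrasing and a repeated contact number",
]
new_ads = [
    "new posting with shared phrasing and a repeated contact number",
    "unrelated listing for used office furniture",
]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
flagged_vecs = vectorizer.fit_transform(flagged_ads)
new_vecs = vectorizer.transform(new_ads)

# Score each new ad by its closest match among flagged examples.
scores = cosine_similarity(new_vecs, flagged_vecs).max(axis=1)
for ad, score in zip(new_ads, scores):
    status = "REVIEW" if score > 0.3 else "ok"  # threshold is arbitrary
    print(f"{status:6} {score:.2f}  {ad}")
```

The same similarity scores can also cluster ads around a shared clue, such as a recurring phone number, which is how investigators link sets of postings to one trafficker.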

Applying Machine Learning to the Internet of Things (IoT)

A great example is Google’s application of machine learning to its data centers last year. Data centers need to remain cool, so they require vast amounts of energy for their cooling systems to function properly (or you could just dunk them in the ocean). This represents a significant cost to Google, so the goal was to increase efficiency with machine learning.

With 120 variables affecting the cooling system (e.g., fans, pump speeds, windows), building a model with classic approaches would be a huge undertaking. Instead, Google applied machine learning and cut its overall energy consumption by 15%. That represents hundreds of millions of dollars in savings for Google in the coming years.
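As a rough sketch of that modeling shift (Google/DeepMind’s production system reportedly used neural networks; a generic regressor on synthetic data stands in here), the task is to learn efficiency as a function of many interacting variables rather than modeling their physics by hand:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 120))  # 120 sensor/actuator readings per snapshot

# Synthetic "true" efficiency (PUE) driven by a few interacting variables.
pue = 1.10 + 0.05 * X[:, 0] - 0.03 * X[:, 1] * X[:, 2] + rng.normal(0, 0.01, size=5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, pue, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(f"held-out R^2: {model.score(X_te, y_te):.2f}")  # how well the mapping was learned
```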

Another example comes from mining company Goldcorp. When its hauling vehicles break down, it costs Goldcorp $2 million per day in lost productivity. Goldcorp is now using machine learning to predict with over 90% accuracy when machines will need maintenance, meaning huge cost savings.

The Nest Thermostat is another great example: it uses machine learning to learn your preferences for heating and cooling, making sure the house is the right temperature when you get home from work or when you wake up in the morning.
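Nest’s actual algorithms are proprietary, but the schedule-learning idea can be reduced, very loosely, to aggregating the manual adjustments a user makes at different times of day:

```python
from collections import defaultdict

# (hour of day, chosen setpoint in °C) pairs logged from manual adjustments.
adjustments = [(7, 21.0), (7, 21.5), (18, 22.0), (18, 22.5), (23, 18.0)]

by_hour = defaultdict(list)
for hour, setpoint in adjustments:
    by_hour[hour].append(setpoint)

# Learned schedule: the average setpoint the user chose for each hour.
schedule = {hour: sum(temps) / len(temps) for hour, temps in sorted(by_hour.items())}
print(schedule)  # {7: 21.25, 18: 22.25, 23: 18.0}
```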

You Could Be Diagnosed by Artificial Intelligence (Lifezette)

When you’re sick and need to know what’s wrong, an app on your phone may soon be all it takes to get an accurate diagnosis.

That’s what London-based Babylon Health, a private company working in artificial intelligence for the medical field, hopes will become a reality. The group is working with the accident and emergency (A&E) services system to test-market the app in parts of London.

Analyze video evidence faster with artificial intelligence (Police One)

Veritone, an artificial intelligence technology company, has built an open platform to provide law enforcement with AI applications called cognitive engines that can process unstructured data from multiple sources to help police extract actionable intelligence. These cognitive engines include applications for audio transcription, facial recognition and more.

Veritone helps law enforcement agencies make sense of overwhelming amounts of unstructured data in three ways:

1. The platform is omnivorous, taking in audio and video from public and private sources, from CCTV security video to social media clips to body-worn or dashboard camera video. These disparate data sources are integrated into an indexed data set that can be searched and layered for multi-dimensional correlation.

2. A variety of cognitive engines can be used to extract specific information such as words, faces, license plates, geolocation, time of day, etc. The system automates analysis to find patterns that provide useful information to investigators. The engines currently available perform with comparable accuracy to processing by a human, and they deliver results much faster. Performance continues to improve as the technology matures.

3. The Veritone Platform exists on the Microsoft Azure Government cloud for secure online access and mobility so that data becomes a dynamic tool for comparison and analysis rather than siloed on individual servers.
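Purely as an illustration of this architecture (the engine functions below are hypothetical stand-ins, not Veritone’s API), the “many engines over one indexed data set” pattern looks roughly like this:

```python
from typing import Callable

# Hypothetical stand-in engines; real cognitive engines would call
# transcription, face recognition, or plate-reading models.
def transcribe(clip: str) -> dict:
    return {"engine": "transcription", "words": []}

def detect_faces(clip: str) -> dict:
    return {"engine": "faces", "matches": []}

def read_plates(clip: str) -> dict:
    return {"engine": "plates", "plates": []}

ENGINES: list[Callable[[str], dict]] = [transcribe, detect_faces, read_plates]

def index_clip(clip: str) -> list[dict]:
    """Run every engine over one clip and collect its layered results."""
    return [engine(clip) for engine in ENGINES]

# One searchable index across disparate sources (CCTV, body cams, social clips).
index = {clip: index_clip(clip) for clip in ["cctv_01.mp4", "bodycam_02.mp4"]}
print(index)
```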

Xero introduces machine learning automation system (Accounting Today)

Accounting software provider Xero has announced a machine learning automation system that uses detailed statistical analysis to learn how a business categorizes its invoices and then automatically sorts the firm’s billing statements.
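Xero has not published its model, but the general pattern is a supervised text classifier trained on a business’s own past categorizations. Here is a minimal sketch with made-up categories and line items:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical historical invoice lines and the categories a business chose.
descriptions = ["AWS hosting March", "Office chairs x4",
                "Google Ads campaign", "Staples paper and toner"]
categories = ["IT", "Office", "Marketing", "Supplies"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(descriptions, categories)

# New line items get sorted automatically.
print(clf.predict(["Facebook Ads campaign", "EC2 hosting April"]))  # ['Marketing' 'IT']
```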

Machine Learning and Antidepressant Response (Psychology Today)

This inventive “machine learning” approach is being used to identify patterns related to better treatment response across many branches of medicine, including psychiatry. Machine learning finds the patterns that predict treatment response in the data itself, rather than relying on researchers’ or clinicians’ preconceptions about which symptoms are most important or how they are interrelated. The technique searches within the data set, sometimes testing the connections with an “N minus 1” method (subtracting one subject on each analysis), and other times splitting the data set in two, say in half, and comparing the pattern observed in one part against the other. These patterns can then be tested on data from other studies to see if they remain predictive.
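In code, the two validation ideas described above, leave-one-out (“N minus 1”) and split-half comparison, look like this; the data is synthetic, standing in for symptom ratings and treatment response:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 17))            # e.g. 17 symptom ratings per subject
y = (X[:, 0] + rng.normal(size=100)) > 0  # synthetic "responded to treatment"

model = LogisticRegression(max_iter=1000)

# "N minus 1": fit on all subjects but one, test on the held-out subject.
loo_accuracy = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()

# Split-half: find the pattern in one half, check it on the other half.
X_a, X_b, y_a, y_b = train_test_split(X, y, test_size=0.5, random_state=0)
split_accuracy = model.fit(X_a, y_a).score(X_b, y_b)

print(f"leave-one-out: {loo_accuracy:.2f}, split-half: {split_accuracy:.2f}")
```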

Researchers in this recent study found three major clusters of symptoms: what they call core emotional symptoms, sleep (insomnia) symptoms, and “atypical” symptoms. In general, they found that antidepressants worked better for the emotional symptoms than for the other two clusters.

Noodle.ai Unveils BEAST Enterprise Artificial Intelligence Supercomputing Technology (Yahoo! Finance)

Noodle Analytics, Inc. (Noodle.ai), the Enterprise Artificial Intelligence company, today announced the general availability of the Noodle.ai BEAST, an AI platform powered by the NVIDIA DGX-1 AI supercomputer.

The GPU architecture of the DGX-1 decreases the time required to properly train learning algorithms, allowing Noodle.ai—and its clients—to assimilate even more data into time-sensitive predictions.

Dreamstime Integrates Artificial Intelligence and Machine Learning into Stock Photography Service (Yahoo! Finance)

Dreamstime, the largest community in stock photography, announced today the company has implemented a proprietary artificial intelligence (AI) system that uses sophisticated algorithms to screen submitted images. The machine learning tool is designed to examine how human editors at Dreamstime review images, and then adjust its parameters to best match the editors’ various criteria.

The AI tool from Dreamstime will go beyond current image recognition software, as it can unite metadata with the context of the image itself. It can be applied to multiple areas of the submission process, for example notifying users that they need a model release or recognizing that a copyrighted logo appears in an image. The tool can detect features in submitted images that trigger rejections, and it uses the same intelligence to label and localize each photo.

This is a next-generation enhancement that can not only recognize the subject of a photo but also gauge its compositional and commercial value. Dreamstime will continue to utilize its team of human editors, who understand the nuances of reviewing the legal, aesthetic, and technical aspects of an image. This team’s ongoing work will be integrated into the AI system, allowing it to adopt some of the human element of the review process.

Both photographers/contributors and end users will benefit from the AI solution. Photographers will see a streamlined review period for their images, so they can have more content accepted and more quickly earn revenue for their work. Dreamstime estimates the AI will reduce the review period by 30-60%, and in some cases the review will be instant. End buyers/designers will gain from a wider selection of image content and a refined search capability that accounts for the AI-produced data.
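Dreamstime’s system is proprietary, but “uniting metadata with the context of the image” can be sketched as one classifier over concatenated image and metadata features, trained on past editor decisions; everything below is hypothetical toy data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def image_features(pixels):
    """Toy stand-in for real descriptors (sharpness, exposure, logo detection...)."""
    return [pixels.mean(), pixels.std()]

rng = np.random.default_rng(0)
images = [rng.integers(0, 256, size=(64, 64)) for _ in range(200)]
has_model_release = rng.integers(0, 2, size=200)  # metadata flag
accepted = rng.integers(0, 2, size=200)           # past human editor decision

# Unite image-derived features with metadata in one feature vector.
X = np.array([image_features(img) + [flag]
              for img, flag in zip(images, has_model_release)])
clf = RandomForestClassifier(random_state=0).fit(X, accepted)

print(clf.predict_proba(X[:3]))  # acceptance probability per submission
```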

This Company Uses Machine Learning to Find Owners of Recalled Cars (Entrepreneur)

Recall Masters, which employs 20 people and even a lobbyist in Washington, D.C., collects data from more than 50 different sources, then utilizes machine learning to analyze it. The startup can then determine if a vehicle qualifies for a recall and who its current owner is — even if it has been resold multiple times — by poring over billions of transactions, according to Miller. He dubs the process “digital forensics.”

Machine learning newbs: TensorFlow too hard? Kick its ass with Keras (The Register)

Keras, a popular deep learning library, has been updated with a new API to make it easier for developers to use machine learning in Python.

The update, dubbed Keras 2, adapts the library to the TensorFlow API, allowing developers to mix and match TensorFlow and Keras components. Since the software runs on top of TensorFlow or Theano, there is no performance cost to using Keras compared to the more complex frameworks it wraps.
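To give a flavor of why Keras is considered the easier route, here is a minimal Keras model on synthetic data; it assumes a Keras install with the TensorFlow backend and uses only the high-level API the article describes:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# A tiny binary classifier defined in a few declarative lines.
model = Sequential([
    Dense(32, activation="relu", input_shape=(20,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic data just to make the example self-contained.
X = np.random.random((256, 20)).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")

model.fit(X, y, epochs=5, batch_size=32, verbose=0)
loss, accuracy = model.evaluate(X, y, verbose=0)
print(f"training-set accuracy: {accuracy:.2f}")
```

The equivalent model in raw TensorFlow would require explicitly managing variables, losses, and the training loop, which is the friction Keras removes.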

Conversica to Showcase How Artificial Intelligence Is Used to Convert More Leads Into Customers (WeAreWVProud.com)

Conversica’s AI sales assistant automatically reaches out to qualify every single lead 24×7, verifying contact information, collecting purchase requirements and even helping set appointments, with no human intervention required. With an average 35 percent email response rate – far greater than that of traditional marketing outreach – Conversica’s sales assistant significantly increases the number of quality conversations with real buyers and reduces the time salespeople spend chasing fruitless leads.

Computer Vision/Machine Vision

Altia Systems to Demonstrate the First Smartphone-Scale PanaCast Micro Camera Technology (Broadway World)

In this system, the PanaCast Computer Vision Engine (PCVE) software – which integrates the company’s real-time video stitching IP – runs on the Snapdragon 820 processor by Qualcomm Technologies, Inc. and drives the newly-developed PanaCast micro camera module to deliver a 5.5 Megapixel video stream with a ~130° field of view, with 3568 x 1536 pixels per frame, operating at 30 frames per second. The micro camera module consists of two standard 3 Megapixel smartphone cameras operated synchronously, with standard fields of view for each camera.
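PanaCast’s real-time stitching IP is proprietary; for intuition, the offline, general-purpose version of the same idea, merging two overlapping camera views into one wide frame, can be sketched with OpenCV’s stitcher (OpenCV 4.x naming; the filenames are hypothetical):

```python
import cv2

# Two synchronized frames from side-by-side cameras (hypothetical files).
left = cv2.imread("cam_left.jpg")
right = cv2.imread("cam_right.jpg")

# OpenCV's generic panorama stitcher: feature matching, warping, blending.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch([left, right])

if status == cv2.Stitcher_OK:
    cv2.imwrite("wide_frame.jpg", panorama)  # single wide field-of-view frame
else:
    print("stitching failed with status", status)
```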

Xilinx Expands into Wide Range of Vision-Guided Machine Learning Applications with reVISION (PR Newswire)

At Embedded World, Xilinx, Inc. (NASDAQ: XLNX) today announced expansion into a wide range of vision-guided machine learning applications with the Xilinx® reVISION™ stack. This announcement complements the recent Reconfigurable Acceleration Stack, significantly broadening the deployment of machine learning applications with Xilinx technology from the edge to the cloud. The new reVISION stack enables a much broader set of software and systems engineers, with little or no hardware design expertise, to develop intelligent vision-guided systems more easily and faster. These engineers can now realize significant advantages when combining machine learning, computer vision, sensor fusion, and connectivity.

reVISION enables the fastest path to the most responsive vision systems, with up to 6x better images/second/watt in machine learning inference, 40x better frames/second/watt in computer vision processing, and one-fifth the latency of competing embedded GPUs and typical SoCs. Developers with limited hardware expertise can use a C/C++/OpenCL development flow with industry-standard frameworks and libraries like Caffe and OpenCV to develop embedded vision applications on a single Zynq® SoC or MPSoC.

Leveraging the unique advantages of reconfigurability and any-to-any connectivity, developers can use the stack to rapidly develop and deploy upgrades. Reconfigurability is critical to ‘future proof’ intelligent vision-based systems as neural networks, algorithms, sensor technologies and interface standards continue to evolve at an accelerated pace.

A new imaging method for food inspection (Control Engineering Europe)

Hyperspectral imaging is a novel imaging approach that allows us to see things that traditional machine vision cannot show – namely the chemical composition of organic materials. This leads to a host of applications within the food industry.

Applications include: the detection of fat and bones on chicken; distinguishing between sugar/salt and citric acid; identifying bruising in oranges; apple sorting; coffee bean inspection; turkey and pork differentiation; raisin sorting; pistachio shell identification; pet food inspection (differentiating stones/clay from dog food pellets); and many more.
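The computational core of these applications is per-pixel classification of spectra: every pixel carries a full spectrum across many bands, and a classifier maps each spectrum to a material class. A simplified sketch, with synthetic spectra standing in for real sensor data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_bands = 224  # hyperspectral pixels carry many bands, not just RGB

# Synthetic training spectra for two visually identical materials.
sugar = rng.normal(0.60, 0.05, size=(100, n_bands))
citric_acid = rng.normal(0.40, 0.05, size=(100, n_bands))
X = np.vstack([sugar, citric_acid])
y = np.array([0] * 100 + [1] * 100)  # 0 = sugar, 1 = citric acid

clf = SVC().fit(X, y)

# Classify every pixel of a 64 x 64 image "cube" into a material map.
cube = rng.normal(0.50, 0.10, size=(64, 64, n_bands))
material_map = clf.predict(cube.reshape(-1, n_bands)).reshape(64, 64)
print(material_map.shape)  # (64, 64) per-pixel material labels
```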

This Machine Can Replace An Entire Photo Studio (psfk)

To work, the machine relies on artificial intelligence. The high-end Canon EOS 1DX MkII, the strobes, and the moving camera mount all work together to create the perfect photos according to preset standards. Controls and replay are through a mounted iPad Pro.

First, the necessary specifications for the photos are entered. Then all the systems are adjusted to fulfill what is needed. The result is a camera roll that applies the same photo rules again and again. With machine vision, factors like the model’s height relative to photo height, light levels, and color correction are all perfected. There is little else the operator needs to do.

Nvidia And Bosch Teaming Up To Make Computer Brains For Automated Cars (Forbes)

Nvidia and Bosch are teaming up to make an AI-enabled computer that can be mass-produced to serve as the brains for driverless vehicles.

The Silicon Valley maker of graphics processors and its new German partner, which ranks among the world’s largest auto parts makers, will develop a computer that utilizes Nvidia’s deep learning software and Drive PX processor, Bosch CEO Volkmar Denner announced at Bosch’s ConnectedWorld conference in Berlin.

While Nvidia’s core business remains processors that power ever more complex gaming graphics, its chips turn out to be highly effective at running artificial intelligence software in autonomous vehicles. The particular branch of AI that Nvidia pursues, so-called deep learning, is intended to allow vehicles equipped with its technology to continually get smarter in terms of recognizing road conditions and patterns as they are fed data from external sensors and a vision system.

Stanford scientists create three-dimensional bladder reconstruction (Stanford News)

This could change with a new computer vision technique developed by Stanford researchers that creates three-dimensional bladder reconstructions out of the endoscope’s otherwise fleeting images. With this fusion of medicine and engineering, doctors could develop organ maps, better prepare for operations and detect early cancer recurrences.

One of the technique’s advantages is that doctors don’t have to buy new hardware or modify their techniques significantly. Through the use of advanced computer vision algorithms, the team reconstructed the shape and internal appearance of a bladder using the video footage from a routine cystoscopy, which would ordinarily have been discarded or not recorded in the first place.

To test the accuracy of their bladder reconstruction, the team first created a model based on endoscopy images taken in a 3D-printed bladder, known as a tissue phantom. Because the details of the tissue phantom are known, the researchers could directly compare their rendering to the real thing. According to Bowden, tissue phantoms provide a standard for biological modeling analysis. The team found that its three-dimensional rendering matched the tissue phantom with few errors.
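The Stanford pipeline itself is more involved, but one of its building blocks, matching features between consecutive endoscopy frames so that camera motion and 3D structure can be recovered, can be sketched with OpenCV (the frame filenames are hypothetical):

```python
import cv2

frame1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
frame2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe keypoints in each frame.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)

# Match descriptors between frames; best matches first.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Matched 2D points feed pose estimation (cv2.findEssentialMat) and
# triangulation, building up the 3D surface frame by frame.
print(f"{len(matches)} feature matches between consecutive frames")
```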

Microscan Presents New Capabilities of MicroHAWK Barcode Readers and Machine Vision Solutions (PR Web)

Microscan experts will host live demonstrations of MicroHAWK imagers – the world’s smallest barcode readers. MicroHAWK, in combination with the browser-based WebLink user interface, allows users to adjust reader settings remotely and to monitor results in real time from any web-enabled device without installing any software. With MicroHAWK and WebLink, production line changeovers can be accommodated easily with just a few clicks from any web browser. Visitors to the exhibition will learn how MicroHAWK readers are able to read any barcode on any part, how they can readjust for new barcodes on the fly, and how their auto-enhance image capture capabilities can read barcodes on PCBs of different colors. MicroHAWK barcode readers perform right out of the box with as little as one minute of setup time for new jobs. The software that powers MicroHAWK is stored on the readers themselves, offering universal job portability so that switching decode parameters, changing devices, and saving and recalling jobs is simpler than ever.
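As a desktop-scale stand-in for what a barcode imager does internally (MicroHAWK’s firmware is proprietary), locating and decoding codes in a frame can be sketched with OpenCV plus the pyzbar wrapper for the ZBar library; the input filename is hypothetical:

```python
import cv2
from pyzbar.pyzbar import decode

frame = cv2.imread("pcb_photo.jpg")  # hypothetical input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Crude stand-in for "auto-enhance": boost contrast before decoding.
enhanced = cv2.equalizeHist(gray)

for code in decode(enhanced):
    print(code.type, code.data.decode("ascii"), code.rect)
```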

Uru uses computer vision to revolutionize video advertising (Cornell)

About the author

Allen Taylor

An award-winning journalist and former newspaper editor, I currently work as a freelance writer/editor through Taylored Content. In addition to editing VisionAR, I edit the daily news digest at Lending-Times. I also serve the FinTech and AR/AI industries with authoritative content in the form of white papers, case studies, blog posts, and other content designed to position innovators as experts in their niches. Also, a published poet and fiction writer.
