Daily News Digest

Featured News

Future of medical procedures with AR, AutoDraw replaces scribbles with stock images using machine learning, ‘Similar Items’ uses computer vision & ML to see products better


News Summary

Augmented Reality

University of Maryland Demonstrates the Future of Medical Procedures with Augmented Reality (Next Reality)

Amitabh Varshney, a computer science professor and director of the Augmentarium facility at the University of Maryland, along with four other physicians and researchers, demonstrated the use of augmented reality in medical procedures on March 27, showing how it could assist intubation with the help of ultrasound.

The ultrasound images are overlaid on the patient via the AR headset, helping the physician insert the tube without ever glancing away.

Gene Munster And His Crazy Crystal Ball (Benzinga)

Your local ATM could recognize your face within five years. Your taxes could be prepared by a robo-accountant. Your lawyer might be reduced to an algorithm fully versed in legal precedent.

Your ear canal is a better form of identification than a fingerprint, let alone a driver's license, and might become the prevailing means of preventing cyber fraud.

Augmented reality wearables might soon identify a person you should recognize but whose name you can't recall.

He said the phone will remain the prime engine of augmented reality for the next five years, after which a wearable device that projects images and information back to the eye, like an all-purpose screen, will supplant it.

Artificial Intelligence/Machine Learning

Artificial intelligence and drones ‘future of policing’ (BBC)

Mr Farrar pointed to the usefulness of drones in the case of a woman murdered 20 years ago, whose body was found at Wentwood Reservoir, near Newport.

“We couldn’t have done that by foot and conventional means,” said Mr Farrar, speaking as Gwent Police celebrated its 50th anniversary.

Boffins unveil tool that uses artificial intelligence to ensure you always look beautiful in selfies (The Sun)

It can change the perspective of a typical selfie, making it look as if the image had been taken from a more flattering distance.

The app can also blur the background, and it can mimic the style of one photo and automatically apply that style to another.

This could mean, for instance, turning a colour image into a black-and-white one.

Pipe Leak Detection System Employs Machine Learning to Limit Errors (Xconomy)

The Southwest Research Institute (SWRI) is developing a machine learning system that quickly processes image data from various cameras to detect leaks in pipes carrying environmentally dangerous materials, from methane to oil.

SWRI's device would use thermal cameras or cameras equipped with infrared sensors to watch for the leaks: cameras that can see what the naked eye can't. The organization may target the thermal cameras because, at about $10,000, they are about a fifth of the cost of the infrared cameras typically used to detect methane, Araujo says.

The images recorded by an SWRI camera are processed by algorithms that SWRI is developing, which feed into a neural network being trained to determine whether or not there's a leak.
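
SWRI has not published its architecture, but the classification step described here can be sketched in a few lines. Below is a minimal, hypothetical PyTorch example of a network that labels a single preprocessed camera frame as "leak" or "no leak"; the layer sizes, input resolution, and single thermal channel are all assumptions, not details from the article.

```python
# Hypothetical sketch of a leak/no-leak frame classifier; not SWRI's model.
import torch
import torch.nn as nn

class LeakClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1 channel: thermal frame
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 2),  # assumes 64x64 input frames
        )

    def forward(self, x):
        return self.head(self.features(x))

model = LeakClassifier()
frame = torch.randn(1, 1, 64, 64)   # stand-in for one preprocessed camera frame
print(model(frame).softmax(dim=1))  # P(no leak), P(leak)
```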

Advanced Analytics Go Beyond Weather Forecasting (The Maritime Executive)

For shipping, StormGeo has developed algorithms for efficient and safe weather routing of ships, which reduce overall fuel consumption by approximately 600,000 tons per year. That is approximately 1.9 million tons of CO2 per year, equivalent to the emissions of 430,000 cars. StormGeo routes approximately 6,000 ships per month and has onboard systems installed on more than 5,500 vessels.

AI on the go: WellCare builds artificial intelligence into its mobile app (FierceHealthcare)

To improve patient care out in the field, WellCare is turning to a combination of artificial intelligence and mobile technology.

The company has rolled out an early version of its patient-focused system—called Care Plan—in three states so far and plans to deploy it to its full membership by the end of the year. The system works by analyzing data that members enter into the MyWellCare mobile app, along with WellCare’s own data, to suggest treatment plans and interventions for potential health issues it identifies.

VW Teams With Mobvoi to Improve In-Car Artificial Intelligence (The Drive)

Mobvoi already has a foothold in the automotive sector. The company has developed a rearview mirror that integrates voice controls for navigation, media, and instant messaging, according to Engadget.

How Artificial Intelligence is Helping the Visually Impaired (Huffington Post)

Recognizing the life-changing implications AI could have for the visually impaired, Israel-based OrCam has made it its mission to use AI technology and software to improve the lives of the world's 285 million blind and visually impaired persons. As the brainchild of Amnon Shashua and Ziv Aviram, cofounders of pioneering collision avoidance technology company Mobileye, OrCam's flagship product, MyEye, uses advanced AI technology to help the visually impaired recognize faces, read words and text, identify currency by its denomination, and even distinguish between products and brands.

OrCam MyEye is a small wearable device that can read printed text from any surface, including books, newspapers, restaurant menus, food labels, and street signs, as well as digital text on smartphone, computer, or television screens. The device also recognizes the faces of individuals and identifies products and the denominations of currency notes.

The facial recognition functionality identifies faces of individuals who were previously entered in the device’s memory. The name of the individual is announced to the OrCam user when that person is in OrCam MyEye’s field of vision.

With OrCam, individuals who have difficulty reading may now read their own mail, any books of their choosing, or their morning newspaper whenever and wherever they want. The facial recognition feature allows blind and visually impaired individuals to be more active and comfortable in social settings, enabling them to know when a family member, friend, or colleague is nearby, as the device identifies them once they are in sight.

Artificial intelligence watches Bob Ross paint and the results are disturbing (Bozeman Daily Chronicle)

A piece of digital art by Alexander Reben, an artist and roboticist interested in human-machine relationships: he ran an episode of Bob Ross' classic painting show through a machine learning system that tried to find images that are not there.

Google’s web-based AutoDraw uses machine learning to replace scribbles w/ stock images (9to5 Google)

Google’s latest A.I. Experiment is not only a fun and clever tech demo, but it also makes for a useful drawing tool. AutoDraw leverages machine learning to replace your scribbles with illustrations from talented artists in order to quickly draw something.

The web app also features a regular free-handed drawing mode, as well as the ability to add text and other generic shapes. Users can choose between 24 colors with a convenient fill tool. Other settings include changing the canvas size/orientation and downloading the drawing as a PNG file.

Four Common Mistakes in Machine Learning Projects (Datanami)

Here are the top four typical mistakes made by businesses during machine learning projects.

A/B testing to verify the efficacy of a machine learning model is perhaps the only real way to prove its true business value, but too many organizations conduct these tests with more variables than just the model. That’s a fundamental error.
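
As a hedged illustration of that point, the sketch below shows the shape of a clean test: users are assigned to an arm by a stable hash, and the model is the only thing that differs between arms. The models, split logic, and return values are all invented for illustration.

```python
# Sketch of a clean A/B test: the model is the ONLY variable between arms.
import hashlib

def assign_arm(user_id: str) -> str:
    # Stable 50/50 split: the user, not the request, determines the arm.
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 2
    return "A" if bucket == 0 else "B"

model_a = lambda user_id: 0.12  # hypothetical incumbent model
model_b = lambda user_id: 0.15  # hypothetical candidate model

def serve(user_id: str) -> float:
    # Same UI, same traffic, same logging in both arms, so any difference
    # in the business metric is attributable to the model alone.
    model = model_a if assign_arm(user_id) == "A" else model_b
    return model(user_id)
```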

During any machine learning project, determining metrics is one of the most important factors in its success. After all, you won't get what you don't ask for. The metric you set is the metric that gets optimized, and all other factors will be ignored. Choose the wrong metrics to measure success, and the project is already off track, as optimization may have led to a change for the worse.
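
A small illustrative example of "you get what you measure": on imbalanced data, a model that never flags the rare class can look excellent on accuracy while being useless on the metric the business actually cares about. The data and numbers below are invented.

```python
# Accuracy rewards a model that never flags the rare class; recall exposes it.
from sklearn.metrics import accuracy_score, recall_score

y_true = [0] * 99 + [1]   # 1% positive class (e.g., fraud)
y_pred = [0] * 100        # a "model" that always predicts "not fraud"

print(accuracy_score(y_true, y_pred))  # 0.99 -- looks great
print(recall_score(y_true, y_pred))    # 0.0  -- catches no fraud at all
```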

Let's say all interested parties have agreed a task is worth solving, and you even have the luxury of knowing how to apply machine learning to the problem. This is exactly the time to cross-check that the stated objectives are actually the ones you want to achieve. Even seemingly small shifts in how you frame the path to a solution can eventually lead to a situation where machine learning solves the task no better than the less advanced methods that came before it.

The metrics, algorithms, and testing may indicate success, but if the model is implemented in the wrong way, it may be blamed unfairly. Therefore, you must ensure that all efforts toward enhancing processes through a machine learning project make sense; otherwise you risk spending time making everything technically perfect, to no avail.

UK watchdog reveals use of machine learning (IPE)

The UK’s Pensions Regulator (TPR) has built a machine-learning tool to help it focus on pension schemes most at risk of breaching its guidelines.

Jackson said: “There are groups where most pension schemes are expected to make a return on time, and other groups where a large proportion of schemes are expected to be late or never make a return. This is great as we can tailor our communication strategy differently for each group: light touch when we expect schemes to make returns on time, firmer where we expect a scheme to be more problematic.”
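
TPR has not published its model, but the grouping Jackson describes maps naturally onto bucketing schemes by a predicted probability of a late return. A minimal hypothetical sketch, with invented thresholds and scheme data:

```python
# Hypothetical sketch of the grouping described above: bucket schemes by a
# model's predicted probability of a late return, then tailor outreach.
def communication_strategy(p_late: float) -> str:
    if p_late < 0.2:
        return "light touch"        # expected to make a return on time
    elif p_late < 0.6:
        return "standard reminder"
    else:
        return "firmer follow-up"   # expected to be late or never return

for scheme, p_late in [("scheme-001", 0.05), ("scheme-002", 0.45), ("scheme-003", 0.90)]:
    print(scheme, communication_strategy(p_late))
```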

The work can also help the regulator keep on top of other data that might not have been updated, Jackson added, such as contact details of the scheme and key people.

20+ Machine Learning as a Service Platforms (Butler Analytics)

The platforms listed below vary in sophistication.

Automatic Business Modeler (ABM) is a tool for automatic prediction of customer behavior.

algorithms.io provides a cloud-hosted service to collect data, generate classification models, and score new data.

Amazon Machine Learning provides visualization tools and wizards that guide users through the process of creating machine learning (ML) models without having to learn complex ML algorithms and technology. Once models are ready, Amazon Machine Learning makes it easy to get predictions for an application using simple APIs.

Civis Analytics provides a machine learning platform that also includes large demographic databases and consulting services.

Dataiku is profiled as a collaborative data science platform. It comes with all the tools most data scientists would need, along with a visual interface.

Domino can be implemented on-site or in the cloud. It uses all the commonly used tools and provides a working environment with features such as version control, collaboration and deployment tools.

Google Prediction API can integrate with App Engine, and the RESTful API is available through libraries for many popular languages, such as Python, JavaScript and .NET. The Prediction API provides pattern-matching and machine learning capabilities.

Microsoft Machine Learning Studio features a library of time-saving sample experiments, R and Python packages and best-in-class algorithms from Microsoft businesses like Xbox and Bing. Azure ML also supports R and Python custom code, which can be dropped directly into your workspace.

MLJAR.com is a human-first machine learning platform that makes searching for and tuning machine learning algorithms painless. Users upload a dataset and select input and target attributes, and MLJAR finds the best-matching ML algorithm.

Yottamine Predictive Service allows for building models or making predictions in two simple steps.

Computer Vision/Machine Vision

Rainforest CSI: How We’re Catching Illegal Loggers with DNA, Machine Vision and Chemistry (World Resources Institute)

These efforts are evolving to include computer assistance. The “Xylotron”, developed by the U.S. Forest Service, is forging a new frontier in wood anatomy by employing algorithms akin to human facial recognition software on images of microscopic wood slides. The power in this method comes from the system’s ability to compare thousands of images to identify species based on subtle diagnostic characteristics that would be hard to identify with the human eye.

New computer vision challenge wants to teach robots to see in 3D (New Scientist)

Computer vision is ready for its next big test: seeing in 3D.

So the ImageNet team say it’s time for a fresh challenge in 2018. Although the details of this competition have yet to be decided, it will tackle a problem computer vision has yet to master: making systems that can classify objects in the real world, not just in 2D images, and describe them using natural language.

Building a large database of images complete with 3D information would allow robots to be trained to recognise objects around them and map out the best route to get somewhere. This database would largely comprise images of scenes inside homes and other buildings.

The existing ImageNet database consists of images collected from across the internet and then labelled by hand, but these lack the depth information needed to understand a 3D scene. The database for the new competition could consist of digital models that simulate real-world environments or 360-degree photos that include depth information, says Berg. But first someone must make these images. As this is difficult and costly, the data set is likely to be a lot smaller than the one for the original challenge.

Vision System Performs Flawless Blister-Pack Inspection (Assembly Magazine)

The system lets operators program various product-related tolerances (such as color and shape variations) for the same blister. It also identifies and localizes error clusters to eliminate error causes.

Depending on the application, two, three, four or six cameras are installed in a cascade arrangement in a small space under a closed stainless steel hood. A water-cooling system offsets heat generated by the cameras.

Each camera features three 0.33-inch scan CCD sensors that supply 30 images per second at a resolution of 1,024 by 768 pixels (pixel size is 4.65 microns). During operation, the lens breaks entering light into its red, green and blue components and guides them to each respective sensor. This action results in precise color separation with limited variation of brightness or color.

Computer Vision Systems are Surprisingly Easy to Dupe (Inverse)

Computer systems designed to recognize images aren't that smart. Researchers have been able to create a set of glasses that makes an A.I. system think Reese Witherspoon is Russell Crowe, while others have been able to fool systems into seeing everyday household objects in psychedelic-looking patterns.

That dependence on specific pixels has led to some bizarre outcomes: researchers at the University of Wyoming, for instance, fooled a computer into thinking abstract patterns were common images.
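
Attacks like these typically nudge pixels in exactly the direction that most increases the model's error. Below is a minimal sketch of the fast gradient sign method, one common technique for crafting such images; it is not necessarily the method used in the studies above, and the tiny linear model is a stand-in for a real classifier.

```python
# Minimal FGSM sketch: perturb each pixel in the direction that raises the loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier

def fgsm(image, label, epsilon=0.05):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A tiny step along the sign of the gradient is imperceptible to humans
    # but can flip the model's prediction.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

clean = torch.rand(1, 1, 28, 28)              # stand-in input image
adversarial = fgsm(clean, torch.tensor([3]))  # the target label 3 is arbitrary
```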


Renesas unveils ADAS autonomy platform (New Electronics)

For the implementation of demanding algorithms, the platform provides system manufacturers the option to select the most suitable IP cores – including hardware accelerators.

The R-Car V3M is the latest SoC in the autonomy platform, optimised primarily for use in smart camera applications, as well as surround view systems and lidars.

For smart camera applications, the R-Car V3M focuses on enabling new car assessment program features. The integrated ISP enhances the raw image quality from the camera sensor and makes the image ready for computer vision. The system features a single DDR3L memory channel.

NLP Logix on How Data Can Improve MOW Track Inspection (Progressive Railroading)

Applying data science to specific aspects of the rail industry can provide an opportunity to optimize maintenance-of-way operations. When it comes to detecting rail flaws or identifying worn-out components, track inspection is increasingly being automated. Traditional track geometry and rail-profile data is being augmented with video images and machine vision systems that can learn from and analyze data to produce predictive maintenance models.

Google’s new ‘Similar items’ feature in Image Search uses machine learning to see products (9to5 Google)

Have you ever been looking at images on Google Images, noticed something interesting (like an object within a specific photo), and wished you could dive deeper and find out what exactly that object is?

Well, it looks like the folks over at Mountain View did, and as a result they're rolling out a new feature, "Similar items", within Google Image Search on both the mobile web and Android's Google app…

It works pretty much as you would expect, currently encompassing just fashion and lifestyle (think sunglasses, bags or shoes — but more will roll out in the coming months). If you are looking at, say, results for “designer handbags”, and your eye is caught by a particular photo, Google’s machine vision can now pick out the items in that photo and offer up easy shopping links.
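
Google has not detailed the implementation, but visual product matching of this kind is commonly built on embedding similarity: each detected item is mapped to a feature vector, and catalog products with the nearest vectors are offered as matches. A hedged sketch of that general idea, with invented vectors and product IDs:

```python
# Hedged sketch of embedding-based product matching; not Google's actual system.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

catalog = {
    "handbag-123": np.array([0.90, 0.10, 0.30]),
    "sunglasses-7": np.array([0.10, 0.80, 0.20]),
}
query = np.array([0.85, 0.15, 0.25])  # embedding of the item detected in a photo

best = max(catalog, key=lambda sku: cosine(catalog[sku], query))
print(best)  # handbag-123
```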


About the author

Allen Taylor

An award-winning journalist and former newspaper editor, I currently work as a freelance writer/editor through Taylored Content. In addition to editing VisionAR, I edit the daily news digest at Lending-Times. I also serve the FinTech and AR/AI industries with authoritative content in the form of white papers, case studies, blog posts, and other content designed to position innovators as experts in their niches. I am also a published poet and fiction writer.
