Daily News Digest: Featured News

AR on the battlefield, machine learning forecasts Congress, synthetic humans teach computers how real people act



News Summary

Augmented Reality

How Caterpillar Inc. is Using Augmented Reality (AutomationWorld)

At Caterpillar, AR guides professionals through machinery troubleshooting and repair, instantly dispensing data that was traditionally only available via equipment manuals.

Augmented reality, 3D printing and a shot at the moon (livemint)

The demonstration was set up so that the audience saw on a large screen what Satyanarayana saw through his AR headset: he could interact with the hologram of a virtual machine or any other object he chose to place, watch videos, and even interact with the “avatar” of a colleague who, wearing his own headset, connected with Satyanarayana from the hall next door.

 

Augmented Reality Can Make Good-for-You Food Look More Attractive, Making It Easier to Eat Well (NextReality)

Using an AR head-mounted display to view the food in front of you, Okajima details the way in which one can modify food texture and color in real time while still keeping the original food item intact. He believes using a head-mounted display, or possibly even projection mapping, could be the “future of the food industry.”

All in all, the future of dieting may be a bunch of us wearing AR headsets to view and eat what looks like a piece of gluttonous cake, when in reality it’s a much healthier alternative.

Weapons Engineering: Augmented Reality on the Battlefield (Engineering.com)

ODG is working with the Office of Naval Research (ONR) and TechSolutions to give U.S. Marine signals intelligence (SIGINT) specialists an augmented environment for the battlefield by designing ODG R series augmented-reality glasses to create digital overlays of real-time information. The partnership revolves around designing and modifying the X-6 system.

The X-6 system has 1280 x 720 stereoscopic optics to provide a virtual display and built-in 802.11 and 802.16 radios for transmitting and receiving data. A 1.5 GHz CPU is embedded for processing, along with a rechargeable lithium-ion battery pack. The glasses weigh only 4.5 ounces, which compares favorably with many commercial headsets.

Recent HoloLens Activity From the ONR

This capability allows small-unit leaders to practice different battlefield situations and review the results of their simulations and games, much as professional athletes watch footage of practices or games to improve their skills.

Voxware to Debut Augmented Reality Solution at ProMat 2017 (Yahoo! Finance)

Voxware, a leading provider of cloud-based voice solutions, announced today the introduction of its Augmented Reality (AR) offering. This latest addition to the Voxware line brings voice and scanning together with vision and image capture to create the most comprehensive personnel optimization solution in the category. The first two workflows addressed by the AR solution use smartglass technology to improve ‘Packing & Shipping’ and ‘Returns & Receiving’ in the distribution center. Voxware will showcase the product at ProMat 2017 in booth S3667 beginning today.

At ProMat 2015, Voxware introduced Intellestra, a first-of-its-kind predictive modeling and supply chain analytics platform that enables companies to manage the entire supply chain, from picking through shipping and distribution of products, in real time.

In the ‘Packing & Shipping’ AR workflow, as items are selected in the warehouse and prepared for shipment, a packer will be able to capture a photo with his or her smartglasses to ensure the correct item is selected, is packed properly and is leaving the warehouse in pristine condition. This streamlined process, all available for a remote supervisor to monitor in real-time, allows personnel currently deployed as order checkers to be re-assigned to higher-value functions in the warehouse.

Artificial Intelligence/Machine Learning

Toyota to use artificial intelligence in search for new batteries (Reuters)

Toyota Motor Corp will use artificial intelligence in new research to speed up the discovery of advanced battery materials and fuel-cell catalysts to power electric and other emission-free vehicles, the company said on Thursday.

Brian Storey, the lead TRI researcher for the new program, said artificial intelligence will be used to identify possible new materials for batteries and fuel cells and to run computer tests that narrow down the field for simulation tests by researchers.

Top 4 Machine Learning Use Cases for Healthcare Providers (Health IT Analytics)

IBM Watson is rolling out a clinical imaging review service to help identify aortic stenosis, while Microsoft is targeting imaging biomarker phenotyping to supplement its cancer research efforts.

Academic institutions are also getting in on the ground floor of advanced pattern recognition. At Indiana University-Purdue University Indianapolis, researchers are turning machine learning algorithms loose on pathology slides to predict relapse rates for acute myelogenous leukemia.

Using natural language processing (NLP), machine learning algorithms can turn images of text into editable documents, extract semantic meaning from those documents, or process search queries written in plain text to return accurate results.

NLP can also be put to work on collections of free-text, such as unstructured clinical notes in the EHR, academic journal articles, patient satisfaction surveys, or other narrative data.

At Beacon Health Options, a behavioral health management company, machine learning can clarify a fuzzy diagnosis process and help forestall mounting complications in complex patients.

On the clinical side, researchers at the Icahn School of Medicine at Mount Sinai (ISMMS) are using algorithms to distinguish between two heart conditions with very similar presentations.

Machine learning could help reduce the rising threat of ransomware, which is a piece of malware that prevents organizations from accessing certain files or components of infrastructure, as well as more traditional security threats.

Machine learning: Trained with MRI (SpectroscopyNow)

The participants were given a diffusion tensor imaging (DTI) MRI scan. By measuring water diffusion over time, the team could obtain a measure of the integrity of white matter pathways within the cerebral cortex. Comparing fractional anisotropy measurements between the two groups revealed statistically significant differences. When the researchers reduced the voxels involved to the subset most relevant for classification and applied the machine learning approach, they were able to classify depressed or vulnerable individuals against the healthy controls. The experiments showed that predictive information is distributed across brain networks rather than being highly localized.
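The pipeline described here, reducing the data to the most informative voxels and then classifying subjects, can be sketched roughly as follows. This is an illustrative sketch using scikit-learn and synthetic stand-in data; the study's actual features, model, and parameters are not specified in the article.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Stand-in data: 40 subjects x 5000 voxel fractional-anisotropy values;
# labels 0 = healthy control, 1 = depressed/vulnerable (all hypothetical)
X = rng.normal(size=(40, 5000))
y = rng.integers(0, 2, size=40)

# Keep only the voxels most relevant for classification, then classify
clf = make_pipeline(SelectKBest(f_classif, k=100), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

With random labels the accuracy hovers around chance; the point is the shape of the pipeline, not the numbers.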

Vanderbilt affiliates’ PredictGov uses machine learning to forecast Congress (Huffington Post)

Users can look up any pending bill on PredictGov or find predictions through its partner, legislation tracker GovTrack, which now includes a “prognosis” line in its overview of each bill.

“Beyond predicting the likely outcome, we provide understanding of the context of the bill,” said Nay, who studied at the Vanderbilt School of Engineering.

Google turns to artificial intelligence to deal with YouTube boycott (Financial Review)

Google is implementing artificial intelligence and machine learning across YouTube and its Google Display Network in an attempt to reduce the risk of advertisers appearing next to undesirable content, including extremist content and hate speech.

Google is also scaling up its team for rapid response so that if something is missed by its AI the advertisement can be removed from an offending video within a few hours, as well as new advertiser controls to exclude certain subjects.

The technology company is adding three new exclusions to its “sensitive subjects umbrella” – sexually suggestive, profanity, and sensational and shocking.

UK military lab launches £40,000 machine learning prize (Wired UK)

The challenge is aimed at startups, recent graduates and others working in computer science and has an overall prize pot of £40,000 – with £20,000 allocated for the winners. Borrett says DSTL has decided to create the challenge based on its need to have technologies and partners that are able to “solve our exploitation of information to inform decision making”.

The second task, which is arguably harder to crack, involves classifying vehicles in satellite images. A series of 100m x 100m satellite images have been made available and the challenge is to identify, with an automated system, the types of vehicles (e.g. saloons), how many there are and their colours.

Monetate launches its machine learning-powered personalization engine (VentureBeat)

And today, marketing personalization company Monetate has announced its Intelligent Personalization Engine, a new platform that helps B2C brands provide individualized experiences to every consumer. Powered by machine learning, Monetate’s new technology creates — in real time — personalized interactions that the company claims will improve customer experience and lift conversions.

Xero sees biggest growth after introducing machine learning (Business Cloud)

Its system can now code invoices for small businesses, categorise expenses and recommend accounting practices to a potential client.

“Rather than just keying in data, they’re interpreting the output – and with the power of machine learning, they’ll provide higher level advisory services that help clients feel in control of their finances which is a key human function that cannot be replaced.”

Stanford researchers create deep learning algorithm that could boost drug development (Stanford News)

To make molecular information more digestible, the researchers first represented each molecule in terms of the connections between atoms (what a mathematician would call a graph). This step highlighted intrinsic properties of the chemical in a form that an algorithm could process.

With these graphical representations, the group trained an algorithm on two different datasets – one with information about the toxicity of different chemicals and another that detailed side effects of approved medicines. From the first dataset, they trained the algorithm on six chemicals and had it make predictions about the toxicity of the other three. Using the second dataset, they trained it to associate drugs with side effects in 21 tasks, testing it on six more.

In both cases, the algorithm was better able to predict toxicity or side effects than would have been possible by chance.
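As a toy illustration of the graph idea (not the researchers' actual encoding), a molecule can be represented as atoms (nodes) joined by bonds (edges), from which simple per-atom features can be derived:

```python
# Toy graph representation of ethanol (CH3-CH2-OH): atoms as nodes, bonds as edges.
# Illustrative sketch only; the hydrogen placement and feature choice are made up here.
atoms = ["C", "C", "O", "H", "H", "H", "H", "H", "H"]
bonds = [(0, 1), (1, 2), (0, 3), (0, 4), (0, 5), (1, 6), (1, 7), (2, 8)]

# Adjacency list: for each atom, the atoms it is bonded to
adjacency = {i: [] for i in range(len(atoms))}
for a, b in bonds:
    adjacency[a].append(b)
    adjacency[b].append(a)

# A simple per-atom feature an algorithm could consume: element and bond count
features = [(atoms[i], len(adjacency[i])) for i in range(len(atoms))]
print(features[0])  # ('C', 4): the first carbon has four bonds
```

Real systems feed richer graph features into neural networks, but the principle is the same: the connectivity, not a flat list of properties, carries the chemistry.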

Computer Vision/Machine Vision

Mode.ai’s visual search finds clothes that match your style (VentureBeat)

Mode.ai is making bots that use your photos to identify an item of clothing, then show you where to buy it (or a visually similar item of clothing) online. The bot can also recognize photos of home decor products and share trending topics from online retailers as varied as Amazon and Louis Vuitton.

The bot also gives fans a chance to find a similar style when a celebrity is wearing something unique or limited edition.


USB3 Vision computer can do assisted surgery (LinuxGizmos.com)

Active Silicon’s USB3 Vision Processing Unit (USB3 VPU) is designed for a variety of high-end industrial and medical computer vision applications, including its primary application of computer assisted surgery. The USB3 VPU has four inputs for USB3 Vision cameras and four 3G-SDI outputs configured to output two channels of 3G-SDI video, each with a duplicate output.

The USB3 Vision standard is based on the GigE Vision standard for GbE-connected cameras, but instead uses USB 3.0-connected cameras. The cameras have a faster 350MB/s bandwidth, compared to 125MB/s for GigE cameras, although their cables only reach up to 5-10 meters compared to 100 meters for GigE.

AirSpace’s New Drone-Predator Can Lock on and Shoot Down Enemies (Inverse)

AirSpace’s drone security system uses its own machine vision system to spot, chase down, and net-gun enemy drones. This net can then be used to carry the target drone away, or back to a separate base to be studied — but just as likely, these little homing drones will end up being suicidal by their very nature.

And as machine vision algorithms improve, drones could be programmed to complete a kamikaze mission even if their connection to an operator is disrupted.

The drone even incorporates some mid-air forensics: an array of sensors broadly similar to those used to detect bombs and other dangerous substances at airports could help it decide which of its abilities to use on a given drone, once that drone has been captured or even before.

WCX17: 3M enabling machine vision for self-driving vehicles (SAE International)

3M is leveraging its 78-year history in lane markings and retroreflective traffic sign material to develop products for the automated driving environment.

3M scientists are developing lane marking solutions that are more durable and easier for vehicle sensors to read than current solutions. They’re also working on advanced traffic signs.


Synthetic humans help computers understand how real people act (New Scientist)

The team created their synthetic humans using the 3D rendering software Blender, basing their work on existing human figure templates and motion data collected from real people to keep the results realistic.

The team then generated animations by randomly selecting a body shape and clothing, and setting the figure in different poses. The background, lighting and viewpoint were also randomly selected. In total, they generated more than 65,000 clips and 6.5 million frames.
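The randomized generation step described above amounts to repeatedly sampling scene parameters. A minimal sketch, with entirely hypothetical parameter pools (the study's actual asset lists are not reproduced here):

```python
import random

random.seed(0)

# Hypothetical pools of assets to draw from
BODY_SHAPES = ["slim", "average", "heavy"]
CLOTHING = ["t_shirt", "jacket", "dress"]
POSES = ["walking", "sitting", "running", "waving"]
BACKGROUNDS = ["street", "office", "park"]

def sample_clip_params():
    """Randomly draw one configuration for a synthetic-human clip."""
    return {
        "body_shape": random.choice(BODY_SHAPES),
        "clothing": random.choice(CLOTHING),
        "pose": random.choice(POSES),
        "background": random.choice(BACKGROUNDS),
        "light_elevation_deg": random.uniform(0, 90),
        "camera_azimuth_deg": random.uniform(0, 360),
    }

# Generating a large dataset is then just repeated sampling
clips = [sample_clip_params() for _ in range(5)]
```

Scaling the loop from 5 to tens of thousands of draws is how a pipeline like this reaches dataset sizes of 65,000 clips.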

With all this information, computer systems could learn to recognise patterns in how pixels change from one frame to the next, indicating how people are likely to move. This could help a driverless car tell if a person is walking close by or about to step into the road.

As the animations are in 3D, they could also teach systems to recognise depth – which could help a robot learn how to smoothly hand someone an object without accidentally punching them in the stomach.

The wearable that can tell you when someone is checking you out (Daily Mail)

Called Ripple, the device uses cameras and computer vision to sense the look of others around the user and determine who is interested based on their stare.

‘When it finds someone, it gives you sensorial feedback, reflecting the excitement you feel when meeting someone special.’
‘If the attraction is mutual then its tentacles will move in reaction to their gaze, amplifying the language of seduction between two people’.

About the author

Allen Taylor

An award-winning journalist and former newspaper editor, I currently work as a freelance writer/editor through Taylored Content. In addition to editing VisionAR, I edit the daily news digest at Lending-Times. I also serve the FinTech and AR/AI industries with authoritative content in the form of white papers, case studies, blog posts, and other content designed to position innovators as experts in their niches. I am also a published poet and fiction writer.
