Daily News Digest: Featured News

AR app shows body’s internal organs, IBM Watson to put AI to work at stopping brain bleeds, computer vision program reads lips.

AR and computer vision

News Summary

Augmented Reality

Alibaba Expands Into Augmented Reality, IoT Cars (Investopedia)

WayRay makes an AR navigation dashboard dubbed the Navion that provides directions and information in front of the driver.

The startup is also partnering with Banma Technologies, a joint venture between Alibaba and SAIC, to develop an AR navigation system that includes entertainment for a vehicle due out in 2018. According to TechCrunch, WayRay boasts it will be the “world’s first vehicle in-production with a holographic AR head-up display.”

In July, Alibaba announced the new internet-connected car, rolled out for the China market, as the company looks to expand into the burgeoning market for connected devices, otherwise known as the Internet of Things.

The OS Car RX5 sport utility vehicle enables drivers to perform certain functions from behind the wheel, such as booking and paying for parking spaces, paying for gas, and even ordering coffee via Alibaba’s mobile payment system, Alipay. Owners of the car are given an “Internet ID” that enables the operating system to recognize the user and make recommendations about music, in-car temperature, or nearby restaurants based on previous trips.

You Can Ban a Person, But What About Their Hologram? (SingularityHub)

Due to outstanding warrants for his arrest, Cozart can’t return to Chicago and so cannot perform in the area. His innovative workaround was to perform from California as a hologram beamed into the Indiana music festival. Even that was too much for police, however, and the performance was immediately stopped.

Demonstrators hoping to voice their dissent against the legislation staged the world’s first hologram protest as a way to circumvent the rules. To pull off the stunt, the protesters hired a production company to film marchers walking along a street at another location, then projected that footage as a hologram onto a translucent fabric screen constructed outside the buildings.

Focus is key to blending virtual objects with the real world (engadget)

Now, the startup is taking on a bigger challenge with Light Field, its “mixed reality platform” that can visualize objects “at multiple focal planes”. That means that it can offer variably focused virtual objects in the real world.

Educational app shows body’s internal organs using augmented reality (NTD.tv)

The presenter, Barton, is using ‘Virtuali-Tee’, a new augmented reality educational app that seemingly provides a window to the body’s internal organs.

The augmented reality app works using a mobile device’s camera, tracking the unique squared pattern on Barton’s blue T-shirt.

This allows the technology to digitally construct a seemingly lifelike window to the body, showing organs such as the heart, lungs and stomach.
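
The placement step described above can be sketched in a few lines. This is purely illustrative (the Virtuali-Tee implementation is not public): given the four image coordinates of a detected T-shirt marker, it computes where and how large the virtual "organ window" overlay should be drawn.

```python
# Hypothetical sketch of marker-based AR overlay placement. A real app
# would get the marker corners from a computer-vision tracker; here they
# are supplied directly.

def overlay_placement(corners):
    """corners: four (x, y) points of the detected marker, any order.
    Returns (center_x, center_y, width, height) of the overlay box."""
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    center_x = sum(xs) / 4.0
    center_y = sum(ys) / 4.0
    # Draw the anatomy overlay larger than the marker itself so the
    # virtual "window" covers the wearer's torso (scale is a guess).
    scale = 1.5
    return (center_x, center_y, width * scale, height * scale)

print(overlay_placement([(100, 200), (300, 200), (300, 450), (100, 450)]))
# -> (200.0, 325.0, 300.0, 375.0)
```

As the wearer moves, the tracker refreshes the corner coordinates each frame and the overlay follows.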

Pottery Barn introduces augmented reality app (San Francisco Chronicle)

The San Francisco retailer worked with Google to create 3D Room View, an app tailored for Pottery Barn. The app allows a shopper to pick an online catalog photo of a couch, for example, then use a smartphone to view the image overlaid on a spot in a living room to see if it matches the floor, blinds and wall colors.

Ram Adds New Vehicles to the Ram Augmented Reality Upfit Configurator (The News Wheel)

The Ram Augmented Reality Upfit Configurator is an advanced program that creates computer-generated images of Ram commercial vehicles for prospective customers to explore. Those who use the Upfit Configurator can virtually travel around the vehicle’s exterior and even explore its cabin.

Artificial Intelligence/Machine Learning

How Twitter could use artificial intelligence to cut online harassment (Fox News)

Earlier this month, Ed Ho, Twitter’s VP of Engineering, announced that Twitter would start “limiting the functionality” of online abusers when it sees repeated offenses. One technique is to restrict the user so that only direct followers see their posts.

What’s missing is real-time analysis: someone could easily send you a death threat today out of the blue, and Twitter would only block that account if the social network noticed a pattern.

Experts say artificial intelligence could be the answer, however. Using machine learning to analyze massive numbers of tweets, AI could reduce or even eliminate Twitter abuse.
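
The real-time idea can be made concrete with a toy filter. Twitter's actual models are not public, and a production system would use a learned classifier rather than a keyword list; the term list and threshold below are entirely hypothetical.

```python
# Illustrative only: score each tweet the moment it arrives, rather than
# waiting for a pattern of reports to accumulate.

ABUSIVE_TERMS = {"kill", "die", "threat"}  # hypothetical term list

def abuse_score(tweet):
    """Fraction of words in the tweet that match the abusive-term list."""
    words = tweet.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in ABUSIVE_TERMS)
    return hits / max(len(words), 1)

def should_limit(tweet, threshold=0.2):
    """Flag a tweet for review in real time."""
    return abuse_score(tweet) >= threshold

print(should_limit("I will kill you"))       # -> True
print(should_limit("nice weather today"))    # -> False
```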

Bank of England trials artificial intelligence and blockchain in bid to stay ahead of the pack (The Telegraph)

The central bank is testing an artificial intelligence system with Canadian startup MindBridge AI to allow it to spot abnormalities in financial transactions and “explore the benefit of machine learning technology for analysing the quality of regulatory data input.”

IBM Watson partners with MedyMatch to put artificial intelligence to work on stopping brain bleeds (Healthcare IT News)

IBM Watson Health and Israel-based MedyMatch Technology are joining their AI forces in hospital emergency rooms to help doctors detect intracranial bleeding resulting from head trauma and stroke.

The companies said they will develop interoperability between MedyMatch’s application and IBM Watson Health Imaging’s offerings, and IBM will distribute the MedyMatch brain bleed detection application through its sales channels.

The MedyMatch algorithm uses deep learning, machine vision, patient data and clinical insights to automatically highlight for a physician areas that might indicate the potential presence of cerebral bleeds.

Machine Learning Shows Potential For Improving Quantum Computers (Tom’s Hardware)

Quantum computing and machine learning should be able to feed off each other quite well, because quantum computing should be much faster at solving optimization problems with thousands of variables, while machine learning is all about statistically optimizing for a result from large amounts of data. Therefore, it’s likely that once we achieve quantum supremacy, quantum computer-powered artificial intelligence could be the “smartest” there is (at least for certain classes of problems).

DARPA’s latest idea could put today’s Turing-era computers at risk (PC World)

The goal of a new DARPA project is to create computers that think like biological entities and are continually learning.

Such computers would start learning slowly, much like children. The learning model would become more flexible as a system matures and gains experience. If it works, the computer will be able to extrapolate more answers depending on the situation, much like humans.

Sex robot to practice your chat up lines on – new AI doll only works if you SEDUCE her (Express)

Samantha has been equipped with the latest developments in AI, and if touched on places such as her hands or her hips, responds with statements such as “I love this” and “nice and gentle”.

However, much like a real person, the robot can show insecurities as well as a fear of rejection. As she gets more in the mood, she can heighten the experience by requesting songs; in one example, the robot asked for Ed Sheeran’s music.

Machine Learning, NLP Help with Physician Skill Benchmarking (Health IT Analytics)

When researchers from several top British universities applied machine learning tools to free-text questionnaires filled out by providers’ peers, they discovered that the algorithms agreed with human assessments of the same documents up to 98 percent of the time.

The algorithms were able to identify and categorize terms that related to the subject’s interpersonal skills, professionalism, and respect among his or her colleagues with a high degree of accuracy, indicating an opportunity for healthcare organizations to engage in more qualitative assessment activities in the future.

By identifying key terms within the text and categorizing them according to pre-defined criteria, machine learning algorithms can start to automate the process of using free text for quality benchmarking and personal assessments.
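
The categorization step described above can be sketched as follows. The categories mirror those named in the article, but the term lists and matching logic are illustrative, not the study's actual lexicon or model.

```python
# Toy version of mapping key terms in free-text peer feedback onto
# pre-defined assessment criteria.

CRITERIA = {
    "interpersonal skills": {"listens", "approachable", "communicates"},
    "professionalism": {"punctual", "thorough", "ethical"},
    "respect": {"respected", "valued", "trusted"},
}

def categorize(feedback):
    """Return the set of criteria the feedback text touches on."""
    words = {w.strip(".,;") for w in feedback.lower().split()}
    return {c for c, terms in CRITERIA.items() if words & terms}

print(categorize("She is approachable, thorough and widely respected."))
# -> all three categories
```

A real system would use learned term weights rather than fixed lists, which is how the algorithms in the study could approach the reported 98 percent agreement with human assessors.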

Provenir Unveils Amazon Machine Learning Adapter (Yahoo! Finance)

Provenir, provider of risk analytics and decisioning solutions, today unveiled the new Provenir Adapter for Amazon Machine Learning for financial institutions seeking to add machine learning to their toolkit without the time and resources typically associated with development. The latest in Provenir’s suite of adapters automatically feeds the predictive score returned by the Amazon Machine Learning model into the risk decisioning process. The Provenir Platform then automates that process, instantly executing a pass, fail or refer result from a risk score.
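
The pass/fail/refer step can be sketched as a simple threshold rule. The thresholds below are illustrative; in Provenir's product the score would come from the Amazon Machine Learning model, not a hard-coded value.

```python
# Hedged sketch of automated risk decisioning on a predictive score.

def decision(risk_score, pass_below=0.3, fail_above=0.7):
    """Map a predictive risk score in [0, 1] to a decisioning outcome."""
    if risk_score < pass_below:
        return "pass"
    if risk_score > fail_above:
        return "fail"
    return "refer"  # borderline cases go to manual review

print(decision(0.1))  # -> pass
print(decision(0.5))  # -> refer
print(decision(0.9))  # -> fail
```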

Video Search Becomes A Reality!! With Google’s Machine Learning API That Recognizes Objects in Videos (Trendin Tech)

This new Video Intelligence API allows developers to build applications that have the ability to extract objects from a video. Up until now, this has only been done through still images rather than videos, but moving forward this new API will allow so much more. Developers will be able to construct creative applications that let the user search for information relating to what it is they’re watching. This brings a whole new meaning to the words “watching a program” when you can find out more in-depth information about any aspect of the program you want.

As well as being able to extract metadata from the video, the API also lets the user tag scene changes.
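
Scene-change tagging can be illustrated with a toy detector. A real API works on pixel data, but the core idea (flag the frames where the picture changes sharply) can be shown with per-frame average brightness values; the threshold here is arbitrary.

```python
# Toy scene-change detection over a list of per-frame brightness values.

def scene_changes(brightness, threshold=50):
    """Return frame indices where a new scene appears to begin."""
    return [i for i in range(1, len(brightness))
            if abs(brightness[i] - brightness[i - 1]) > threshold]

frames = [120, 122, 119, 30, 28, 31, 200, 198]
print(scene_changes(frames))  # -> [3, 6]
```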

Y Combinator has a new AI track, and wants startups building ‘robot factory’ tech to apply (TechCrunch)

Y Combinator Partner Daniel Gross today announced that the accelerator-turned-venture-fund will offer its first “vertical” track exclusively for AI startups.

The “accelerator VC” SOSV runs programs for very early-stage biotech, hardware, food, mobile, and smart-city startups; Acceleprise and AngelPad back B2B startups only; Yield Lab, Thrive, and Terra are agriculture-tech accelerators; while Starburst and LightSpeed Innovations work with aerospace and aviation startups.

NEC using artificial intelligence to prevent bus accidents in Singapore (ZDNet)

The Japanese giant is taking historical data from a bus driver’s work records, telematics data produced by each bus, and observations made by on-board data scientists to determine whether a driver is likely to cause an accident in the next three months, and intervene before SMRT has to deal with the costly and potentially life-threatening aftermath.

In addition to preventing bus accidents, NEC is also working on creating a smart airport in Singapore; managing traffic in the city-state to prepare for the 7 million people expected to reside there by 2027; improving public transportation; and putting smart signage in place to show motorists up-to-date information about congestion.

Computer Vision/Machine Vision

Body language analysis software spots criminals in the crowd (Russia Beyond the Headlines)

The data analysis laboratory at the National Research Nuclear University MEPhI (Moscow Engineering Physics Institute) has developed software capable of detecting non-standard and potentially dangerous human behavior. The program has a variety of potential applications, such as identifying criminals in crowds.

All the algorithm needs is to be “shown” a moving person against a static background, and it then automatically analyzes the video and identifies the coordinates of the person’s head, legs, elbow and knee joints, and gathers statistics on how they move. The coordinates are refreshed at a rate of 30 times per second. The software then builds a 3D model that moves synchronously with the person on the video footage.
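
The statistics-gathering step can be sketched in a few lines. The MEPhI software is not public, so this is purely illustrative: given one joint's (x, y) coordinates sampled 30 times per second, compute its average speed, the kind of per-joint movement statistic such a system might accumulate.

```python
import math

FPS = 30  # coordinate refresh rate stated in the article

def average_speed(track):
    """track: list of (x, y) positions, one per frame.
    Returns mean speed in position units per second."""
    if len(track) < 2:
        return 0.0
    dist = sum(math.dist(track[i], track[i + 1])
               for i in range(len(track) - 1))
    return dist / (len(track) - 1) * FPS

print(average_speed([(0, 0), (3, 4), (6, 8)]))  # -> 150.0
```

Statistics like these, collected for each tracked joint, are what would feed a classifier that separates ordinary movement from "non-standard" behavior.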

The software has other potential uses: for example, it can measure time spent by a shopper in front of a particular display window to assess their interest in certain items for subsequent targeted advertising.

New computer software program excels at lip reading (PHYS.org)

The AI system uses computer vision and machine learning methods to learn how to lip-read from a dataset made up of more than 5,000 hours of TV footage, gathered from six different programmes including Newsnight, BBC Breakfast and Question Time. The videos contained more than 118,000 sentences in total, and a vocabulary of 17,500 words.

They found that the software system was more accurate than the professional: the human lip-reader correctly read 12 per cent of words, while the WAS software recognised 50 per cent of the words in the dataset without error. The machine’s mistakes were small, including things like missing an “s” at the end of a word, or single-letter misspellings.

Advanced computer imaging tech tapped to reconstruct bladder (Xinhuanet News)

Researchers have combined images from endoscopes (cameras attached to long, flexible instruments) with advanced computer imaging technology to create a three-dimensional computer reconstruction of a patient’s bladder.

To test the accuracy of their reconstruction, the team created a model based on endoscopy images taken in a 3D-printed bladder, known as a tissue phantom. Because the details of the tissue phantom are known, the researchers compared their rendering to the real thing and found they matched with few errors.
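
One common way to quantify "matched with few errors" is the root-mean-square distance between corresponding points on the reconstruction and on the known phantom geometry. The paper's actual metric may differ; this sketch just shows the idea.

```python
import math

def rms_error(reconstructed, ground_truth):
    """Both arguments: equal-length lists of (x, y, z) points, with
    reconstructed[i] corresponding to ground_truth[i]."""
    sq = [sum((a - b) ** 2 for a, b in zip(p, q))
          for p, q in zip(reconstructed, ground_truth)]
    return math.sqrt(sum(sq) / len(sq))

recon = [(0.0, 0.0, 0.1), (1.0, 1.0, 1.0)]
truth = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
print(rms_error(recon, truth))
```

A 3D-printed phantom makes this comparison possible precisely because its true geometry is known in advance.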

Watch a new self-driving car using a similar camera-based approach as Tesla (Electrek)

While Tesla’s new Autopilot program aimed at achieving fully self-driving capability is making extensive use of computer vision through its 8 cameras, it also uses a forward-facing radar, 360-degree ultrasonic coverage, and high-precision GPS data.

The lack of lidar sensors is what differentiates Tesla from most other self-driving programs, but Xiao’s AutoX takes it one step further, using no radar, sonar, or GPS – only cameras.

For their current prototype, Xiao told Business Insider that they simply bought a few $50 cameras from Best Buy and strapped them to a 2017 Lincoln MKZ. That’s their first prototype hardware suite to develop their computer vision and self-driving software.

Author: Allen Taylor

About the author

An award-winning journalist and former newspaper editor, I currently work as a freelance writer/editor through Taylored Content. In addition to editing VisionAR, I edit the daily news digest at Lending-Times. I also serve the FinTech and AR/AI industries with authoritative content in the form of white papers, case studies, blog posts, and other content designed to position innovators as experts in their niches. I am also a published poet and fiction writer.
