- Today’s top AR news: One-click voice and video in AR with Unity.
- Today’s top AI/machine learning news: Self-taught AI better at predicting heart attacks than doctors.
- Today’s top computer vision/machine vision news: MS Flow adds CV API support.
- Voximplant makes voice, video comms in AR apps possible.
- Large industrial companies to help shape AR’s future.
- Hospital makes $119M bet on AR/VR center.
- Facebook camera brings AR to social network.
- PTC demos AR in popular CREO CAD software.
- HoloLens delivers first AR Easter egg hunt.
- This app allows customers to try on shoes in AR.
- Self-taught AI better at predicting heart attacks than doctors.
- AI will improve traveler experience with TravelBird.
- Sisense Pulse uses ML to trigger data anomaly alerts.
- ExtraHop combines AI, analytics, low-cost storage for intelligent workflows at scale.
- ServiceChannel brings first ML solution to facilities management.
- 5 machine learning projects you can’t overlook.
- AI to help in diagnosing cancer.
- Thread uses AI to help men buy clothes.
- Augmented Reality
- Caterpillar, Lockheed Martin, P&G Lead Effort to Shape Future of Augmented Reality (Industry Week)
- Hospital makes $119 million bet on virtual, augmented reality center (Healthcare IT News)
- Facebook camera will let users experience augmented reality (Digital Trends)
- PTC Demonstrates Augmented Reality Capabilities in Popular Creo CAD Software at DEVELOP3D LIVE (MCADCafe)
- Microsoft HoloLens delivers first ever augmented reality Easter Egg hunt (Mashable)
- The Hologram You Can Live In (Ozy)
- Voximplant makes it easy to put voice and video communication in AR/VR apps (VentureBeat)
- Artificial Intelligence/Machine Learning
- How Artificial Intelligence will Improve Traveler Experience with TravelBird (PR Newswire)
- Sisense Pulse uses machine learning to trigger data anomaly alerts (TechCrunch)
- ExtraHop Combines Analytics, Machine Learning, and Low-Cost Storage to Deliver Intelligent Workflows at Scale (Yahoo! Finance)
- ServiceChannel Brings First-in-Industry Machine Learning Solution to Facilities Management (Yahoo! Finance)
- 5 Machine Learning Projects You Can No Longer Overlook, April (KDNuggets)
- Self-taught artificial intelligence beats doctors at predicting heart attacks (Science Mag)
- Development of artificial intelligence to help in diagnosing cancer (The Mainichi)
- How London startup Thread uses artificial intelligence and machine learning to help men buy clothes (Yahoo! News)
- Computer Vision/Machine Vision
- Microsoft Flow adds support for Computer Vision APIs (MS Power User)
- Soft Robotics debuts new vision system (The Packer)
- Line-Scan vs. Area-Scan: What is right for machine vision applications? (Novus Light)
- Baidu is acquiring xPerception, a US startup focused on computer vision (TechCrunch)
Augmented Reality
Caterpillar, Lockheed Martin, P&G Lead Effort to Shape Future of Augmented Reality (Industry Week)
The augmented reality (AR) hardware and software functional requirements guidelines will help AR technology companies develop products for industrial users.
According to the groups, these AR functional requirements documents will lead to technology that improves performance and efficiency for manufacturers in a number of areas, including employee training and safety; factory floor and field service operations; machine assembly, inspection, and repair; and manufacturing space and product design.
Lockheed Martin, Caterpillar and Procter & Gamble initiated the guidelines development process as part of a project through the Digital Manufacturing and Design Innovation Institute (DMDII), a UI LABS collaboration.
The documents address features that include:
- Hardware: Battery Life; Connectivity; Field of View; On-board Storage; On-board Operating System; Environmental; Inputs/Outputs; and Safety.
- Software: Authoring; AR Content; Creating 3D Content; Deployment of AR Content; and Internet of Things.
Hospital makes $119 million bet on virtual, augmented reality center (Healthcare IT News)
The 192,000-square-foot facility will house UNMC’s Interprofessional Experiential Center for Enduring Learning, or iEXCEL program, which aims to help physicians and other healthcare professionals do clinical training exercises and develop surgical skills via advanced simulation technologies, virtual immersive reality, augmented reality and holographic technologies, officials say.
The center’s advancements will include the creation of 3-D/virtual and augmented reality content for clinical and surgical training modules; leading-edge technology such as the iEXCEL Helix, an extended 280-degree curved screen creating a 2-D/3-D immersive environment; laser-based “3-D iSpace,” a five-sided virtual immersive reality environment; and a 130-seat holographic auditorium.
Facebook camera will let users experience augmented reality (Digital Trends)
Snapchat-like features are just the start for Facebook. Get ready for augmented reality – starting with an upcoming update to the Camera app that will be the first way to experience AR on the social site, Facebook tells us.
PTC Demonstrates Augmented Reality Capabilities in Popular Creo CAD Software at DEVELOP3D LIVE (MCADCafe)
PTC (NASDAQ: PTC) today announced its demonstration of augmented reality (AR) capabilities, leveraging its recently released Creo® 4.0 software, at DEVELOP3D LIVE, the UK’s leading conference and exhibition celebrating design, engineering and manufacturing technology and how it brings world-leading products to market faster. With AR, users can superimpose digital information onto real-world environments.
Users of Creo 4.0 can benefit from the integration of Creo with the ThingWorx® Studio software. The integration enables users to quickly and easily author multiple AR experiences from within Creo and share them with others using a unique identifier called a ThingMark. Using the free ThingWorx View app, colleagues can scan the ThingMark to see and interact with the design.
Microsoft HoloLens delivers first ever augmented reality Easter Egg hunt (Mashable)
The game was unveiled this weekend in Los Angeles at the VRLA conference, where Microsoft and a team of AR developers allowed me to enter a surrealist forest construct in which holographic eggs could be found using the HoloLens headset.
You’re in the market for a new pair of shoes, and you have three options: Try on a pair in the store, order a pair online (only to potentially send them back) or aim your phone at your feet. The shoes miraculously appear, and you can swipe to change the style before adding them to your cart.
Trillenium is building an app for Asos that will allow customers to try on shoes by aiming their smartphones at their feet. Using artificial intelligence, the company has also brought the cost of scanning a product down from $2,500 to $10.
Voximplant makes it easy to put voice and video communication in AR/VR apps (VentureBeat)
Developers using the Unity game engine will be able to integrate voice and video communications with one click. Voximplant’s Unity SDK, the first of its kind, enables developers to enhance existing Unity virtual reality and augmented reality apps and games by easily adding real-time voice and video communication between users.
Developers can now install Voximplant’s Unity SDK with one click and utilize it just as they would any other Unity package. With this solution, Voximplant is introducing the first voice and video real-time communication CPaaS (Communications-Platform-as-a-Service) SDK created for Unity.
Artificial Intelligence/Machine Learning
How Artificial Intelligence will Improve Traveler Experience with TravelBird (PR Newswire)
DigitalGenius is providing TravelBird booking associates with an AI assistant to cut down on repetitive daily tasks, creating a layer of machine intelligence to enhance the quality and productivity of customer service experiences. The DigitalGenius Human+AI™ Platform is trained on historical customer service logs and provides AI-powered macro suggestions, automation of ticket tagging, auto-triaging, and automation of responses. This combination of human and machine intelligence helps customer service teams support increasing volumes faster, while unlocking more time for complex cases and meaningful conversations with customers.
Sisense Pulse uses machine learning to trigger data anomaly alerts (TechCrunch)
Sisense introduced a new tool today called Pulse, which uses machine learning to trigger an alert when it detects results outside of normal parameters for a particular metric.
A user can set a Pulse alert to monitor a metric or KPI such as sales activity or win rate. The machine learning component watches the chosen metric and learns over time what’s normal. When it detects an anomaly, it sends an alert to the user. What’s more, it can determine how the metric has changed over time, so it doesn’t continue to trigger alerts for the new normal.
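The adaptive behavior described above — learn what's normal for a metric, alert on anomalies, then absorb a sustained shift as the new normal — can be sketched with a rolling window. This is an illustrative toy, not Sisense's actual implementation; the class name and thresholds are invented:

```python
from collections import deque
from statistics import mean, stdev

class PulseStyleAlert:
    """Toy sketch of a Pulse-style anomaly alert (not Sisense's code):
    learn 'normal' from a rolling window, flag values far outside it,
    and let a sustained shift become the new normal."""

    def __init__(self, window=30, threshold=3.0):
        self.history = deque(maxlen=window)  # rolling baseline of the metric
        self.threshold = threshold           # alert at N standard deviations

    def observe(self, value):
        alert = False
        if len(self.history) == self.history.maxlen:  # baseline learned
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                alert = True
        # The value always joins the window, so a persistent change
        # eventually stops triggering alerts (the "new normal").
        self.history.append(value)
        return alert
```

Because the anomalous value is also absorbed into the window, repeated readings at the new level shrink the deviation until alerts stop, matching the "doesn't continue to trigger alerts for the new normal" behavior.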
ExtraHop Combines Analytics, Machine Learning, and Low-Cost Storage to Deliver Intelligent Workflows at Scale (Yahoo! Finance)
ExtraHop, the leader in real-time IT analytics, today announced several major platform enhancements as part of version 6.2. These upgrades, supported by analytics and machine learning, provide IT teams situational insight and forensic capabilities required to make informed, data-driven decisions. The ExtraHop Platform’s analytics-first workflow now scales to deliver 40Gbps line-rate continuous packet capture, while flexible licensing models for storage eliminate the data tax for extended lookback, saving up to 50 percent over traditional Network Performance Monitoring and Diagnostics (NPMD) tools.
Machine learning proactively surfaces anomalies, which can be easily and rapidly investigated from high-level performance metrics to individual transactions to packets in a matter of clicks rather than hours.
ServiceChannel Brings First-in-Industry Machine Learning Solution to Facilities Management (Yahoo! Finance)
ServiceChannel, the leading SaaS service automation platform for facilities managers and contractors, today introduced advanced machine learning capabilities to its flagship solution, enabling faster, more automated data-driven decision making capabilities for facilities managers. ServiceChannel’s Decision Engine revolutionizes how facilities managers interact with their data through powerful prescriptive analytics and historical data in the service workflow, resulting in faster and smarter decision making.
5 Machine Learning Projects You Can No Longer Overlook, April (KDNuggets)
Examples of plots included in the library are:
- Elbow plots
- Feature importance graphs
- PCA projection plots
- ROC curves
- Silhouette plots
The library has two APIs: the Factory API, which integrates tightly with Scikit-learn and can control calls to its API, and the more conventional Functions API. Either will suffice, depending on your needs.
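As an illustration of the data behind one of the listed plots, the points of an ROC curve can be computed from binary labels and classifier scores with nothing but the standard library. This is a generic sketch, not this library's API:

```python
def roc_points(labels, scores):
    """Compute (FPR, TPR) points for an ROC curve from binary labels
    and classifier scores -- the data an ROC plot draws. Ties in
    scores are handled naively, which is fine for a sketch."""
    pairs = sorted(zip(scores, labels), reverse=True)  # highest scores first
    pos = sum(labels)           # number of positive examples
    neg = len(labels) - pos     # number of negative examples
    tp = fp = 0
    points = [(0.0, 0.0)]       # curve starts at the origin
    for _score, label in pairs:
        if label == 1:
            tp += 1             # true positive at this threshold
        else:
            fp += 1             # false positive at this threshold
        points.append((fp / neg, tp / pos))
    return points
```

A perfect classifier's curve passes through (0, 1); a random one hugs the diagonal.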
Though all methods of feature selection share the common goal of identifying redundant and irrelevant features, there are numerous algorithms for approaching these related problems — this is an active area of research. In that regard, scikit-feature is for both practical feature selection and feature selection algorithm research.
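The twin goals above — dropping irrelevant features and dropping redundant ones — can be illustrated with a minimal filter-style sketch. This is a toy method using Pearson correlation, not one of scikit-feature's actual algorithms; the function names and thresholds are invented:

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def select_features(X_cols, y, relevance=0.3, redundancy=0.9):
    """Greedy filter: keep features correlated with the target (relevant),
    skipping any that nearly duplicate an already-kept feature (redundant).
    X_cols maps feature name -> list of values."""
    ranked = sorted(X_cols, key=lambda name: -abs(pearson(X_cols[name], y)))
    kept = []
    for name in ranked:
        if abs(pearson(X_cols[name], y)) < relevance:
            continue  # irrelevant: barely related to the target
        if any(abs(pearson(X_cols[name], X_cols[k])) > redundancy for k in kept):
            continue  # redundant: near-duplicate of a kept feature
        kept.append(name)
    return kept
```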
Smile covers every aspect of machine learning, including classification, regression, clustering, association rule mining, feature selection, manifold learning, multidimensional scaling, genetic algorithms, missing value imputation, efficient nearest neighbor search, etc.
Smile now seems to be the go-to general-purpose machine learning library for those working in the Java and Scala worlds — a JVM Scikit-learn, if you will.
Versatile, and aiming for completeness, Gensim implements “[e]fficient multicore implementations of popular algorithms, such as online Latent Semantic Analysis (LSA/LSI/SVD), Latent Dirichlet Allocation (LDA), Random Projections (RP), Hierarchical Dirichlet Process (HDP) or word2vec deep learning.”
Self-taught artificial intelligence beats doctors at predicting heart attacks (Science Mag)
In the new study, Weng and his colleagues compared use of the ACC/AHA guidelines with four machine-learning algorithms: random forest, logistic regression, gradient boosting, and neural networks. All four techniques analyze large amounts of data to build predictive tools without explicit human instruction. In this case, the data came from the electronic medical records of 378,256 patients in the United Kingdom. The goal was to find patterns in the records that were associated with cardiovascular events.
First, the artificial intelligence (AI) algorithms had to train themselves. They used about 78% of the data—some 295,267 records—to search for patterns and build their own internal “guidelines.” They then tested themselves on the remaining records. Using record data available in 2005, they predicted which patients would have their first cardiovascular event over the next 10 years, and checked the guesses against the 2015 records. Unlike the ACC/AHA guidelines, the machine-learning methods were allowed to take into account 22 more data points, including ethnicity, arthritis, and kidney disease.
All four AI methods performed significantly better than the ACC/AHA guidelines. Using a statistic called AUC (in which a score of 1.0 signifies 100% accuracy), the ACC/AHA guidelines hit 0.728. The four new methods ranged from 0.745 to 0.764, Weng’s team reports this month in PLOS ONE. The best one—neural networks—correctly predicted 7.6% more events than the ACC/AHA method, and it raised 1.6% fewer false alarms. In the test sample of about 83,000 records, that amounts to 355 additional patients whose lives could have been saved.
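The AUC statistic cited above has a simple rank-based interpretation: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. An illustrative stdlib sketch (not the study's code):

```python
def auc(labels, scores):
    """Area under the ROC curve, computed as a rank statistic:
    the fraction of (positive, negative) pairs where the positive
    example gets the higher score (ties count half).
    1.0 = perfect ranking, 0.5 = chance."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

On this scale, the jump from the guidelines' 0.728 to the best model's 0.764 means the model orders risky and non-risky patients correctly a few percent more often.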
Development of artificial intelligence to help in diagnosing cancer (The Mainichi)
The system being developed by the society compiles images of tissue samples from patients at the participating institutions into a database, where AI then decides whether each case is cancer, learning from its past successes and failures, a method called deep learning.
So far, when testing the ability of AI to distinguish between stomach cancer and a similar-looking benign condition, AI correctly diagnosed the cancer 70 percent of the time.
How London startup Thread uses artificial intelligence and machine learning to help men buy clothes (Yahoo! News)
“We have a pre-screen step where stylists go through and remove everything from our partners which they don’t want to personally endorse,” O’Neill explained to Business Insider.
Thread’s stylists then look up what you want to buy, and they suggest individual items as well as full outfits. But after that initial human involvement, Thread uses its algorithms, known as “Thimble” internally, to do some of the heavy lifting.
Once a stylist has decided on an olive green T-shirt, for example, the algorithm looks to find the best olive green T-shirt for the customer. O’Neill said that would be “a really hard problem for a human to do” because of the volume of clothes to sort through.
“So that’s a really good place for machine learning where you pull in lots of data from all the different partners you have. We have about 200,000 items from our partners. It’s the combination of curation plus AI which has worked really well for us.”
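The matching step described above — a stylist picks "olive green T-shirt," and an algorithm finds the best instance among roughly 200,000 partner items — can be pictured as similarity ranking over feature vectors. Thread's internal "Thimble" system is proprietary, so this is purely a hypothetical sketch with made-up feature encodings:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def best_match(target, catalog):
    """Rank catalog items (name -> feature vector) by similarity to the
    stylist's target spec -- e.g. encoded colour, fit, and price -- and
    return the closest one. Toy illustration, not Thread's algorithm."""
    return max(catalog, key=lambda name: cosine(catalog[name], target))
```

The curation-plus-AI split in the quote maps naturally onto this: humans choose the target, the machine searches the catalog.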
Computer Vision/Machine Vision
Microsoft Flow adds support for Computer Vision APIs (MS Power User)
Using the Computer Vision APIs, users can create flows that extract rich information from images to categorize and process visual data, as in the operations below.
- Describe Image – generates a description of an image in human readable language with complete sentences
- Tag Image – generates a list of words, or tags, that are relevant to the content of the supplied image
- Generate Thumbnail – get a thumbnail image with the user-specified width and height
- Optical Character Recognition (OCR) – detects text in an image and extracts the recognized characters into text you can use in other steps
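Calling one of these operations typically means POSTing an image URL with a subscription key. The sketch below only builds the request without sending it; the endpoint host, API version, and operation paths are assumptions based on older Computer Vision API docs and should be verified against current documentation:

```python
import json
import urllib.request

def build_vision_request(operation, image_url, key,
                         endpoint="https://westus.api.cognitive.microsoft.com"):
    """Build (but do not send) a Computer Vision API request for an
    operation such as 'describe', 'tag', or 'ocr'. Endpoint, version,
    and paths are assumptions -- check the current API reference."""
    url = f"{endpoint}/vision/v2.0/{operation}"
    body = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Ocp-Apim-Subscription-Key": key,   # your subscription key
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

In a Flow, these calls are wired up graphically; the request above is just what a connector does under the hood.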
Soft Robotics debuts new vision system (The Packer)
SuperPick — short for supervisory picking — aims to provide the depth perception and recognition of 3-D using 2-D hardware and human oversight.
“If you have two (cherry tomatoes in baggies) that are sitting right on top of each other, the human eye can very easily see which tomato is in which baggie and how you would pick it up, but a computer vision system can’t, and so we’ve actually introduced a system that has a human in the loop,” said Dan Harburg, director of business development for Soft Robotics. “A human worker that can be remote is actually interacting directly with a screen to help the robot in the circumstances where it can’t figure out what it’s looking at.”
Line-Scan vs. Area-Scan: What is right for machine vision applications? (Novus Light)
Because line-scan cameras use a single row of pixels, they can build continuous images not limited to a specific vertical resolution, allowing for much higher resolutions than area-scan cameras in both 2D and 3D. Chromasens has been at the forefront of line-scan technology for more than a decade. It recently introduced its 3DPIXA HR 2 µm 3D line-scan camera, which covers a 16mm field of view and achieves 0.5µm height resolution, twice the resolution of previous versions. As a result of this precision, manufacturers of semiconductors and other electronics can improve productivity by preventing defects, reducing waste, and increasing yield while achieving compliance with traceability requirements.
In contrast, area-scan cameras are more general purpose. Used in the majority of machine vision systems, they contain a large matrix of pixels that captures a 2D image of a given scene in one exposure cycle, with horizontal and vertical elements, for example, 640 x 480 pixels. While they may offer easier setup and alignment, they are not always effective when an object under inspection is moving or cannot be contained in a practical-size field of view. Area-scan cameras are best suited to applications where the object is stationary, even if only momentarily. Uninterrupted capture of continuous materials by an area-scan camera is achieved only by capturing overlapping images; software must painstakingly crop each individual image, eliminate distortion, and assemble the images in the correct sequence.
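The contrast above can be modeled in a few lines: a line-scan camera stacks single pixel rows into an image of unbounded vertical size, while an area-scan camera covering moving material must crop and concatenate overlapping frames. The function names and frame layout are invented for illustration:

```python
def assemble_line_scan(capture_line, num_lines):
    """Simulate line-scan acquisition: one row of pixels is read per
    exposure while the material moves past, and rows are stacked into
    an image with no fixed vertical resolution."""
    return [capture_line(i) for i in range(num_lines)]

def area_scan_stitch(frames, overlap):
    """Simulate area-scan capture of continuous material: overlapping
    2D frames (lists of pixel rows) must be cropped and concatenated
    in the correct sequence."""
    stitched = list(frames[0])
    for frame in frames[1:]:
        stitched += frame[overlap:]  # drop rows already captured
    return stitched
```

The line-scan path needs no cropping or re-alignment, which is the advantage the article describes for continuous, moving material.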
Baidu is acquiring xPerception, a US startup focused on computer vision (TechCrunch)
Details are sparse, but we know the startup has its own module for object recognition and depth perception that can be deployed on robots and drones.