Daily News Digest Featured News

Lampix turns any room into a smart interactive surface, hedge funds train computers to think like you, ‘universal adversarial perturbation’ alters images so AI can’t recognize them


News Summary

Augmented Reality

This Lamp Can Turn Your Room Into An Interactive Smart Surface (GineersNow)

On the surface, Lampix looks like any other ordinary table lamp. But don’t be fooled: this lamp is much more than that. It can turn your entire room into an augmented reality screen with its built-in Raspberry Pi, 8-megapixel camera, and 400-lumen projector.

The lamp can connect to smart devices such as smartphones, laptops, and PCs via WiFi, letting you send anything you want to project onto any flat surface you please. The lamp’s built-in camera tracks the movements of your hand or other objects to determine their position, allowing you to interact with the projection.

And finally, on to its most interesting feature, Lampix allows its users to work and interact with physical documents on pieces of paper as if they were digital documents.

Augmented Reality in the classroom: Move over, Pokemon Go, it’s time for science class (Straits Times)

At Metta School in Simei, nine pupils attend an unusual science lesson with a twist.

They use augmented reality to learn about animal adaptations – features such as a camel’s hump and its long eyelashes that help it survive – and are drawn in by the animation and sound effects.

BMW Is Developing Augmented Reality Powered Vehicles & The Future Looks Absolutely Insane (MENSXP)

The video shows that BMW is working not only on self-driving cars but also on augmented reality helmets that will work in synergy with its innovations. There will be cars that paint your name on the ground in bright lights, projected for everyone near your BMW vehicle to see.

Virtual posters may one day take over Hong Kong skyscrapers with augmented reality technology (SCMP)

British company Lightvert is currently developing a type of large-scale outdoor advertising technology which does not require a billboard or an LCD television set. The advertisement will not be attached to a building at all, but will be projected through lasers in the air, appearing in such a way that a “virtual” poster is created momentarily. A narrow strip of reflective material will be fixed to the side of the building and a high-speed light scanner will project light off a reflector and towards the viewer.


NASA ISS Astronauts Train in Augmented Reality (Geek.com)

A new mixed-reality simulator lets International Space Station crew members hone their skills on the ground before rocketing into low Earth orbit.

By strapping on a consumer VR headset (in this case, it appears NASA is predominantly using the HTC Vive) and tapping into Unreal Engine’s integrated networking support, trainees can work together from different locations around the globe.

First Magnitude debuts new augmented-reality beer can (alligator)

The cans are equipped with augmented reality features — an industry first, McClung said. Participants can download the free Libraries of Life app, point it at the beer can, see a 3-D rendering of the butterfly and learn more about its biology.

The brewery said it hopes to start a series of butterfly-themed beers, with every can offering augmented reality.

Facebook launches in-app camera effects to take Snapchat’s augmented reality fun to masses (news.com.au)

Along with adding the new in-app camera with zany effects, the updates to the Facebook app include other Snapchat-like features including the ability to share photos and videos directly with specific people, with the content disappearing after it is viewed, and a “Stories” section where content disappears after a day.

Artificial Intelligence/Machine Learning

Can artificial intelligence make you a better tweeter? (recode)

I used artificial intelligence to help me decide what to tweet. More specifically, I used a service called Post Intelligence, which recommended links and photos to post, suggested the time of day I should post to get the best engagement, and even estimated the popularity of my tweets before I sent them based on the language I used in the tweet.

The company analyzed my Twitter account to determine the topics I tweet about most and calculated which of those topics also perform well with my followers. Then the AI went out and found tweets about those topics that were performing well on Twitter and suggested I share them, too.


The AI-powered tweets received an average of 7.2 favorites and 2.2 retweets apiece. My “original” tweets received 5.0 faves and 1.5 retweets, on average. On the surface, the AI appeared to be a noticeable help.

I also added 70 new followers in the 9-day stretch; I had averaged just 118 new followers per month in the six months prior.

Hedge Funds Are Training Their Computers to Think Like You (Bloomberg)

Now, after many false dawns, an artificial intelligence technology called deep learning that loosely mimics the neurons in our brains is holding out promise for firms. WorldQuant is using it for small-scale trading, said a person with knowledge of the firm. Man AHL may soon begin betting with it too. Winton and Two Sigma are also getting into the brain game.

Consider a quant searching for factors that might push a stock over a benchmark index. Today an analyst has to manually select factors like price-earnings ratios to test. A quant using deep learning gives the neural network a price target and then feeds the model with raw company and market data. The artificial neurons are math functions that crunch data. As it moves through layers, the neurons self-adjust — or learn — to get closer to the goal: finding factors that predict when the stock will hit the target.
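As a toy illustration of that loop, the sketch below trains a tiny feedforward network in pure NumPy. Everything here is invented for illustration: the synthetic inputs stand in for raw company and market data, and the label stands in for "stock hit the price target." The point is only the mechanics the article describes: artificial neurons as math functions that self-adjust toward a goal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for raw company/market data: 500 observations,
# 8 raw inputs (hypothetical factors such as price ratios or volume changes).
X = rng.normal(size=(500, 8))
# Hypothetical label: 1 if the stock later hit its price target, else 0.
# Here the "true" signal is a simple combination of two inputs.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# One hidden layer of "artificial neurons" (math functions that crunch data).
W1 = rng.normal(scale=0.5, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    # Forward pass: data moves through the layers toward a prediction.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    # Backward pass: the neurons self-adjust ("learn") to shrink the error.
    g_out = (p - y)[:, None] / len(y)
    g_h = (g_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ g_out
    b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * X.T @ g_h
    b1 -= lr * g_h.sum(axis=0)

accuracy = float(((p > 0.5) == y).mean())
print(f"training accuracy: {accuracy:.2f}")
```

A real quant pipeline would of course use far more data, deeper networks, and out-of-sample validation; this only shows the "feed raw data, let the neurons adjust" shape of the approach.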

Academic researchers have found that deep learning can generate profits from wagers on corporate events like takeovers. In a 2015 study of the technology’s ability to predict moves of 15 stocks in the S&P 500 Index, more than 10 million events were pulled from financial news as part of the training of the computer system. In simulating trading of the stocks, the model made a cumulative profit, according to researchers led by Xiao Ding of the Harbin Institute of Technology’s Research Center for Social Computing and Information Retrieval in China.


A New Direction for Artificial Intelligence? (Technology Review)

In a blog post describing the work, Sutskever and colleagues describe using “evolutionary strategies” to have machines figure out for themselves how to solve a complex task. The researchers say the approach is distantly related to a decades-old approach that involves optimizing algorithms using a process of simulated evolution. It essentially lets a machine work out, using experimentation and optimization, the best algorithm for solving a complex problem, and it could have applications in robotics, automated driving, and other areas.

Unlike reinforcement learning, evolutionary strategies allow machines to learn while using much less computation. Reinforcement learning typically requires a technique known as backpropagation, which optimizes a neural network as errors are minimized. Evolutionary strategies involve a much simpler optimization technique.
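A minimal sketch of the evolutionary-strategies idea on a toy objective (all names and the reward function here are invented): perturb the parameters randomly, score each perturbation by its reward, and shift the parameters toward the perturbations that scored well. Note that no backpropagation appears anywhere.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: find parameters that maximize a reward function.
# ES only evaluates rewards; it never computes gradients by backpropagation.
target = np.array([3.0, -1.0, 2.0])

def reward(theta):
    return -np.sum((theta - target) ** 2)  # peaks at theta == target

theta = np.zeros(3)
sigma, lr, pop = 0.1, 0.05, 50  # noise scale, step size, population size

for _ in range(300):
    noise = rng.normal(size=(pop, 3))            # population of perturbations
    rewards = np.array([reward(theta + sigma * n) for n in noise])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    # Move parameters toward the perturbations that scored well.
    theta += lr / (pop * sigma) * noise.T @ rewards

print(theta)  # close to target
```

Because each population member is evaluated independently, the loop parallelizes trivially across machines, which is part of the appeal the article alludes to.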

IBM Employs Machine Learning to Combat Phishing Attacks (IT Business Edge)

IBM today unveiled a cognitive engine that employs machine learning algorithms to identify fake Web pages linked to phishing attacks 250 times faster than a security professional can do on their own. Once identified, that information is then fed into the IBM Trusteer fraud prevention suite of software that can be used to block end users from ever visiting those pages.

A full 70 percent of credentials are stolen in the first hour of a phishing attack; within four hours, that number rises to 80 percent. Trying to identify all those Web pages in the first few hours of an attack is not possible without some help in the form of machine learning algorithms, says Kessem.

Machine-Learning Algorithm Watches Dance Dance Revolution, Then Creates Dances of Its Own (MIT Technology Review)

Their system—called Dance Dance Convolution—takes as an input the raw audio files of pop songs and produces dance routines as an output. The result is a machine that can choreograph music.

The task of automating the creation of dance charts is by no means simple. Donahue and co divide it into two parts. The first is deciding when to place steps, and the second is deciding which steps to select. They then train a machine-learning algorithm to learn each task.

The first task boils down to identifying a set of timestamps in a song at which to place steps. This is similar to a well-studied problem in music research called onset detection. This involves identifying important moments in a song such as melody notes or drum strikes.
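A crude version of onset detection can be sketched in a few lines (synthetic audio standing in for a real song; a production system would work on spectral features rather than raw frame energy): compute the energy of short frames, keep only rises in energy, and threshold.

```python
import numpy as np

sr = 8000  # sample rate (Hz)
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
# Synthetic "song": silence with short decaying bursts (drum-strike
# stand-ins) at 0.25 s, 1.0 s, and 1.6 s.
audio = np.zeros_like(t)
for onset_time in (0.25, 1.0, 1.6):
    i = int(onset_time * sr)
    audio[i:i + 400] = (np.sin(2 * np.pi * 440 * t[i:i + 400])
                        * np.exp(-np.arange(400) / 80))

# Energy-based onset detection: frame energies, positive differences,
# then a fixed threshold relative to the largest rise.
frame = 256
energies = np.array([np.sum(audio[i:i + frame] ** 2)
                     for i in range(0, len(audio) - frame, frame)])
flux = np.maximum(np.diff(energies), 0)           # energy rises only
peaks = np.where(flux > 0.5 * flux.max())[0] + 1  # frames where energy jumps
onsets = peaks * frame / sr                       # timestamps in seconds
print(onsets)
```

The detected timestamps land within one frame of the true burst times; choosing which step to place at each timestamp is the second, separate learning task the researchers describe.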


Welcome To The Era Of Intelligent Cloud Powered By Machine Learning (Forbes)

The cloud in its latest avatar is emerging as a data-centric, intelligent platform ready to deal with the next generation of applications and workloads.

The next generation cloud powered by Machine Learning will offer services for building applications based on cognitive computing, predictive analytics, intelligent Internet of Things, interactive personal assistants and bots. The APIs exposed by these services will democratize Machine Learning and Artificial Intelligence by empowering developers. By consuming a simple set of APIs, developers will be able to build highly sophisticated applications driven by intuitive user experiences.

Researchers use machine learning to help diagnose depression (UPI)

A new study from the University of Texas suggests machine learning with a supercomputer may help identify people susceptible to developing depression.

Schnyer is using the Stampede supercomputer at the Texas Advanced Computing Center, or TACC, to train a machine learning algorithm that can identify similarities among hundreds of patients using magnetic resonance imaging, or MRI, genomics data and other factors to predict patients at risk for depression and anxiety.

Schnyer and his colleagues used machine learning to classify people with major depressive disorder with nearly 75 percent accuracy.

Participants underwent diffusion tensor imaging MRI scans, which tag water molecules to determine the level to which those molecules are microscopically diffused in the brain over time. This diffusion measured in multiple spatial directions generates vectors for each voxel, 3D cubes that represent either structure or neural activity throughout the brain. The measurements are then translated into metrics that indicate the integrity of white matter pathways within the cerebral cortex.
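The white matter integrity metrics mentioned above are scalar summaries of the per-voxel diffusion tensor; the most common is fractional anisotropy. A minimal sketch with a made-up tensor (assuming the tensor has already been fit from the directional diffusion measurements):

```python
import numpy as np

# Hypothetical diffusion tensor for one voxel (3x3, symmetric): its
# eigenvalues describe how strongly water diffuses along each axis.
D = np.array([[1.7, 0.1, 0.0],
              [0.1, 0.3, 0.0],
              [0.0, 0.0, 0.3]]) * 1e-3   # units: mm^2/s

lam = np.linalg.eigvalsh(D)
mean = lam.mean()

# Fractional anisotropy (FA): 0 means diffusion is equal in all
# directions; values near 1 mean strongly directional diffusion,
# as expected along intact white matter pathways.
fa = np.sqrt(1.5 * np.sum((lam - mean) ** 2) / np.sum(lam ** 2))
print(f"FA = {fa:.2f}")
```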

Cerber learns to dodge machine learning (SC Magazine)

Trend Micro researchers have spotted Cerber using a new trick to evade machine learning, making it harder to detect.

Researchers said the malware was repackaged into a self-extracting file, which can cause problems for threat detection methods that analyze a file without any execution or emulation, because all self-extracting files, and simple straightforward files, may look similar in structure regardless of their content.

This fintech startup uses machine learning to give international students credit cards (The Next Web)

SelfScore is an innovative fintech startup with the goal of changing that. Where the traditional incumbents fail, it uses machine learning to determine the creditworthiness of these new (and temporary) arrivals, in order to offer them a credit card they can afford.

Artificial intelligence-based algorithm screens for diabetic retinopathy (Healio)

Deep learning methods were used to process 75,137 color fundus images of patients with diabetes obtained from the EyePACS public data set. The artificial intelligence model was trained and tested to detect and differentiate the features of healthy eye fundi from eye fundi with DR.

A tree-based classification model labeled the eyes with no DR as 0 and the eyes with DR of any severity as 1; the eyes labeled as 1 were referred for further care. The performance of the algorithm was evaluated against two other public databases: Messidor-2, containing 1,748 fundus images from four French institutions, and e-ophtha of the French Research Agency, containing 463 images.
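The labeling step can be sketched with a one-node "tree" (a decision stump) on an invented feature; real tree-based models split on many learned features, but the 0/1 referral logic is the same.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical feature extracted from fundus images: a single "lesion
# score" per eye (higher looks more DR-like). Labels: 0 = no DR,
# 1 = DR of any severity; eyes labeled 1 are referred for further care.
scores = np.concatenate([rng.normal(0.2, 0.10, 100),   # healthy eyes
                         rng.normal(0.7, 0.15, 100)])  # eyes with DR
labels = np.concatenate([np.zeros(100, int), np.ones(100, int)])

# Decision stump: pick the threshold that best separates 0 from 1.
candidates = np.unique(scores)
accs = [float(((scores > th).astype(int) == labels).mean())
        for th in candidates]
best = candidates[int(np.argmax(accs))]

pred = (scores > best).astype(int)   # 1 = refer this eye
print(f"threshold={best:.2f}, accuracy={max(accs):.2f}")
```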

RightHand Robotics Unveils New Picking Solution for Logistics, E-Commerce, and Material Handling Industries (Yahoo! Finance)

With the explosive growth in e-commerce and a shrinking workforce, pressures have never been higher on warehouses to fulfill orders faster and more efficiently. To address these challenges, RightHand Robotics today introduced RightPick, a combined hardware and software solution that handles the key task of picking individual items, or “piece-picking.”

The core competency of the RightPick solution is picking “pieces,” individual items. As e-commerce continues to grow, the trend away from bulk or pallet-load handling toward single SKUs and piecemeal items expands along with it.

Unlike traditional factory robots, RightPick handles thousands of different items using a machine learning backend coupled with a sensorized robot hand that works in concert with all industry-leading robotic arms.

Computer Vision/Machine Vision

Matroid can watch videos and detect anything within them (TechCrunch)

Matroid, a computer vision startup launching out of stealth today, enables anyone to take advantage of the information inherently embedded in video. You can build your own detector within the company’s intuitive, non-technical, web platform to detect people and most other objects.

Instead of whipping out TensorFlow or Google Cloud’s new Video Intelligence API, users simply upload a custom training set or choose from a curated library of hundreds of millions of images to establish their detector. Matroid can handle images and video clips during the training process. It uses multiple neural networks to process different types of inputs. When you add a video, you’ll be prompted to place bounding boxes over the important objects in the scene that will be used for training.

Himax Advances Eye Safety for Compact Computer Vision Devices with UltraSenseIR™ 1/6.5-inch HD Image Sensor (Yahoo! Finance)

Himax Imaging, Inc., a subsidiary of Himax Technologies, Inc. (HIMX) (“Himax” or “Company”), a leading supplier and fabless manufacturer of display drivers and other semiconductor products, today announced the launch of the UltraSenseIR™ HM1062 HD sensor that delivers 44% quantum efficiency in the Near Infrared (“NIR”) spectrum, enabling a wide range of eye-safe, computer vision applications for compact devices such as front facing cameras for smartphones, notebooks, wearable devices, drones and other embedded devices.

The HM1062 operates up to 60 frames per second in 720p HD resolution, and up to 120 frames per second in binning or sub-sampling mode over industry compliant MIPI CSI2 interface. The sensor supports multi-camera synchronization and can be programmed using a standard two wire serial interface. The HM1062 is currently sampling and scheduled for mass production by the second quarter of 2017.

The “universal adversarial perturbation” undetectably alters images so AI can’t recognize them (BoingBoing)

In a newly revised paper in Computer Vision and Pattern Recognition, a group of French and Swiss computer science researchers demonstrate “a very small perturbation vector that causes natural images to be misclassified with high probability”; that is, a minor image transformation can beat machine learning systems nearly every time.
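The basic mechanism can be sketched per-image with a toy linear "classifier" (this is the simpler, per-image flavor of the attack, not the paper's universal construction): nudge every pixel by a tiny amount in the direction that most shifts the model's score, and the predicted class flips even though no single pixel changes noticeably.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for an image classifier: a linear model over 4096 "pixels".
w = rng.normal(size=4096)
image = rng.normal(size=4096)

def classify(x):
    return int(np.sign(w @ x))  # class +1 or -1

original = classify(image)

# Adversarial step: move each pixel slightly against the current decision.
# eps is chosen just large enough to cross the decision boundary.
eps = 1.5 * abs(w @ image) / np.sum(np.abs(w))
adversarial = image - original * eps * np.sign(w)

print(classify(image), classify(adversarial))  # the two classes differ
print(np.max(np.abs(adversarial - image)))     # tiny per-pixel change
```

Against deep networks the same idea uses the network's gradient instead of a fixed weight vector, and the paper's contribution is finding a single such vector that fools the model on most natural images at once.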

Lightform Turns Any Projector Into A 3D Scanning, Augmented Reality Device (Yahoo! Finance)

Lightform is a computer and 3D scanning device that when connected to any video projector, lets you quickly scan complex scenes and turn any object into a screen. It’s augmented reality without the headset.

The core technology is known as projection mapping and is a form of augmented reality. The Lightform team combines years of projection mapping experience ranging from large scale entertainment experiences to PhD research experiments.

Lightform uses advanced computer vision to eliminate complexity in the projection mapping process.



About the author

Allen Taylor

An award-winning journalist and former newspaper editor, I currently work as a freelance writer/editor through Taylored Content. In addition to editing VisionAR, I edit the daily news digest at Lending-Times. I also serve the FinTech and AR/AI industries with authoritative content in the form of white papers, case studies, blog posts, and other content designed to position innovators as experts in their niches. I am also a published poet and fiction writer.
