Saturday, 17 August 2019

Electric Boats Could Be Floating Batteries for Island Microgrids

Researchers in Australia have developed a control algorithm that allows electric boats equipped with solar panels to sell power to a microgrid


In developed countries, lights roar to life with the flick of a switch and televisions hum quietly with the touch of a button, provided you still have one of those. But on most of Indonesia’s remote islands, accessing electricity is neither simple nor convenient. 

For example, prior to 2018, diesel generators provided residents of East Kalimantan’s Berau district with electricity for just four hours a day. That June, a government-backed organization installed new hybrid microgrids, enabling residents to have electricity all day long, PV magazine reported. These hybrid microgrids were composed of photovoltaic solar panels (PVs) to collect energy and lithium-ion batteries to store it. 

But there may be another way to power remote islands, especially in the aftermath of natural disasters: boats. Yes, boats.

Researchers at the University of New South Wales in Sydney, Australia, created an algorithm that can theoretically turn electric boats into small renewable power plants. They tested the algorithm with a microgrid in their lab, using four 6-volt gel batteries connected in series to form a 24-volt pack as a stand-in for a boat. 

In their experiment, they found that the algorithm could manage power flows reliably enough to allow electric boats to provide peak load support to a grid directly after a trip.  

To implement this approach, they’d need an electric boat with its own PV system, which would charge the boat’s batteries while it is out on the water. Then, when the boat is docked, it could act as a small power plant, providing electricity to homes on the island. 

With the algorithm in place, boat owners could decide when to sell electricity—and how much they wanted to sell. They might, for example, set their system to automatically sell 10 per cent of its stored energy, and only if the batteries are at least halfway charged. 
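
The article doesn’t publish the control algorithm itself, but the owner-facing rule it describes (sell a fixed fraction of stored energy, and only above a minimum state of charge) is easy to express in code. The sketch below is a hypothetical Python illustration of that policy; the battery capacity, thresholds, and function names are assumptions, not details of the UNSW system.

```python
# Hypothetical sketch of an owner-configurable "sell to the microgrid" rule.
# This is not the UNSW control algorithm, just the policy described above:
# offer a fixed fraction of stored energy, but only above a minimum charge level.

BATTERY_CAPACITY_KWH = 20.0   # assumed capacity of the boat's battery pack
MIN_STATE_OF_CHARGE = 0.5     # owner setting: only sell if at least half charged
SELL_FRACTION = 0.10          # owner setting: sell 10 per cent of stored energy

def energy_to_offer(stored_kwh: float) -> float:
    """Return how many kWh the docked boat should offer to the microgrid."""
    state_of_charge = stored_kwh / BATTERY_CAPACITY_KWH
    if state_of_charge < MIN_STATE_OF_CHARGE:
        return 0.0                      # keep the charge for the next trip
    return stored_kwh * SELL_FRACTION   # a small slice for peak-load support

if __name__ == "__main__":
    for stored in (6.0, 12.0, 18.0):
        print(f"{stored:4.1f} kWh stored -> offer {energy_to_offer(stored):.1f} kWh")
```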

Boats are uniquely positioned to provide this kind of service, the researchers point out. Electric cars don’t generally have their own PV systems, so instead of adding power to the grid the way a boat could, they draw power from it. 

The proposed technology works pretty similarly to the microgrids that are gradually rolling out in Indonesia—those microgrids also contain PVs to collect energy and lithium-ion batteries to store it. But there’s one key difference: portability. 

If Indonesia were hit with a natural disaster, those microgrids could be destroyed, and even Indonesia’s widely electrified islands could be affected. With the new approach, the boats the Indonesian government sends out with food and supplies could also provide power. 

The concept is still in its infancy, but the University of New South Wales team expects to get its algorithm out of the lab and into the ocean by testing it with an actual electric boat in the near future.

Monday, 12 August 2019

Specialized AI Chips Hold Both Promise and Peril for Developers


When it comes to the compute-intensive field of AI, hardware vendors are reviving the performance gains we enjoyed at the height of Moore’s Law. The gains come from a new generation of specialized chips for AI applications like deep learning. But the fragmented microchip marketplace that’s emerging will lead to some hard choices for developers. 

The new era of chip specialization for AI began when graphics processing units (GPUs), which were originally developed for gaming, were deployed for applications like deep learning. The same architecture that made GPUs render realistic images also enabled them to crunch data much more efficiently than central processing units (CPUs). A big step forward happened in 2007 when Nvidia released CUDA, a toolkit for making GPUs programmable in a general-purpose way.

AI researchers need every advantage they can get when dealing with the unprecedented computational requirements of deep learning. GPU processing power has advanced rapidly, and chips originally designed to render images have become the workhorses powering world-changing AI research and development. Many of the linear algebra routines that are necessary to make Fortnite run at 120 frames per second are now powering the neural networks at the heart of cutting-edge applications of computer vision, automated speech recognition, and natural language processing.  
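
The shared linear algebra is easy to see in practice. The snippet below is a minimal sketch, assuming PyTorch is installed and a CUDA-capable GPU is present: the same matrix multiplication that underlies both graphics and neural-network layers runs on whichever device the tensors are placed on.

```python
# Minimal illustration (assumes PyTorch): the same matrix multiply runs on CPU
# or GPU; the framework dispatches it to CUDA kernels when a GPU is available.
import torch

a = torch.randn(1024, 1024)
b = torch.randn(1024, 1024)

cpu_result = a @ b                      # general matrix multiply on the CPU

if torch.cuda.is_available():
    gpu_result = a.cuda() @ b.cuda()    # identical operation, run on the GPU
    print(torch.allclose(cpu_result, gpu_result.cpu(), atol=1e-3))
```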

Now, the trend toward microchip specialization is turning into an arms race. Gartner projects that specialized chip sales for AI will double to around US $8 billion in 2019 and reach more than $34 billion by 2023. Nvidia’s internal projections place the market for data centre GPUs (which are almost solely used to power deep learning) at $50 billion in the same time frame. In the next five years, we’ll see massive investments in custom silicon come to fruition from Amazon, ARM, Apple, IBM, Intel, Google, Microsoft, Nvidia, and Qualcomm. There is also a slew of startups in the mix. CrunchBase estimates that AI chip companies, including Cerebras, Graphcore, Groq, Mythic AI, SambaNova Systems, and Wave Computing, have collectively raised more than $1 billion. 

To be clear, specialized AI chips are both important and welcome, as they’re catalysts for transforming cutting-edge AI research into real-world applications. However, the flood of new AI chips, each one faster and more specialized than the last, will also seem like a throwback to the rise of enterprise software. We can expect cut-throat sales deals and software specialization aimed at locking developers into working with just one vendor. 

Imagine if, 15 years ago, the cloud services AWS, Azure, Box, Dropbox, and GCP all came to market within 12 to 18 months. Their mission would have been to lock in as many businesses as possible—because once you’re on one platform, it’s hard to switch to another. This type of end-user gold rush is about to happen in AI, with tens of billions of dollars, and priceless research, at stake. 

Chipmakers won’t be short on promises, and the benefits will be real. But it’s important for AI developers to understand that new chips that require new architectures could make their products slower to market—even with faster performance. In most cases, AI models are not going to be portable between different chip makers. Developers are well aware of the vendor lock-in risk posed by adopting higher-level cloud APIs, but in the past, the actual compute substrate has been standardized and homogeneous. This situation is going to change dramatically in the world of AI development.

It's quite likely that more than half of the chip industry’s revenue will soon be driven by AI and deep learning applications. Just as software begets more software, AI begets more AI. We’ve seen it many times: Companies initially focus on one problem, but ultimately solve many. For example, major automakers are striving to bring autonomous cars to the road, and their cutting-edge work in deep learning and computer vision is already having a cascading effect; the research is leading to such offshoot projects as Ford’s delivery robots.

As specialized AI chips come to market, the current chip giants and major cloud companies will probably strike exclusive deals or acquire top-performing startups. This trend will fragment the AI market rather than unify it. All that AI developers can do now is understand what’s about to happen and plan how they’ll weigh the benefits of a faster chip against the costs of building on new architectures.

Evan Sparks is CEO of Determined AI. He holds a PhD in computer science from the University of California, Berkeley, where his research focused on distributed systems for data analysis and machine learning.

Sunday, 4 August 2019

Drag-and-drop data analytics

The system lets nonspecialists use machine-learning models to make predictions for medical research, sales, and more.

For years, researchers from MIT and Brown University have been developing an interactive system that lets users drag-and-drop and manipulate data on any touchscreen, including smartphones and interactive whiteboards. Now, they’ve included a tool that instantly and automatically generates machine-learning models to run prediction tasks on that data.

In the Iron Man movies, Tony Stark uses a holographic computer to project 3-D data into thin air, manipulate them with his hands, and find fixes to his superhero troubles. In the same vein, researchers from MIT and Brown University have now developed a system for interactive data analytics that runs on touchscreens and lets everyone — not just billionaire tech geniuses — tackle real-world issues.

For years, the researchers have been developing an interactive data-science system called Northstar, which runs in the cloud but has an interface that supports any touchscreen device, including smartphones and large interactive whiteboards. Users feed the system datasets, and manipulate, combine, and extract features on a user-friendly interface, using their fingers or a digital pen, to uncover trends and patterns.

In a paper being presented at the ACM SIGMOD conference, the researchers detail a new component of Northstar, called VDS for “virtual data scientist,” that instantly generates machine-learning models to run prediction tasks on users’ datasets. Doctors, for instance, can use the system to help predict which patients are more likely to have certain diseases, while business owners might want to forecast sales. When using an interactive whiteboard, several people can also collaborate in real time.

The aim is to democratize data science by making it easy to do complex analytics, quickly and accurately.

“Even a coffee shop owner who doesn’t know data science should be able to predict their sales over the next few weeks to figure out how much coffee to buy,” says co-author and long-time Northstar project lead Tim Kraska, an associate professor of electrical engineering and computer science at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and founding co-director of the new Data Systems and AI Lab (DSAIL). “In companies that have data scientists, there’s a lot of back and forth between data scientists and nonexperts, so we can also bring them into one room to do analytics together.”

VDS is based on an increasingly popular technique in artificial intelligence called automated machine learning (AutoML), which lets people with limited data-science know-how train AI models to make predictions based on their datasets. Currently, the tool leads the DARPA D3M Automatic Machine Learning competition, which every six months decides on the best-performing AutoML tool. 

Joining Kraska on the paper are: first author Zeyuan Shang, a graduate student, and Emanuel Zgraggen, a postdoc and a main contributor to Northstar, both of EECS, CSAIL, and DSAIL; Benedetto Buratti, Yeounoh Chung, Philipp Eichmann, and Eli Upfal, all of Brown; and Carsten Binnig, who recently moved from Brown to the Technical University of Darmstadt in Germany.

An “unbounded canvas” for analytics

The new work builds on years of collaboration on Northstar between researchers at MIT and Brown. Over four years, the researchers have published numerous papers detailing components of Northstar, including the interactive interface, operations on multiple platforms, accelerating results, and studies on user behavior.

Northstar starts as a blank, white interface. Users upload datasets into the system, which appear in a “datasets” box on the left. Any data labels will automatically populate a separate “attributes” box below. There’s also an “operators” box that contains various algorithms, as well as the new AutoML tool. All data are stored and analyzed in the cloud.


The researchers like to demonstrate the system on a public dataset that contains information on intensive care unit patients. Consider medical researchers who want to examine co-occurrences of certain diseases in certain age groups. They drag and drop into the middle of the interface a pattern-checking algorithm, which at first appears as a blank box. As input, they move into the box disease features labeled, say, “blood,” “infectious,” and “metabolic.” Percentages of those diseases in the dataset appear in the box. Then, they drag the “age” feature into the interface, which displays a bar chart of the patients’ age distribution. Drawing a line between the two boxes links them together. When they circle an age range on the chart, the algorithm immediately computes the co-occurrence of the three diseases within that range.
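
Under the hood, the query being assembled by drag-and-drop amounts to a filter plus an aggregation. The pandas sketch below is a hypothetical reconstruction of that query on an invented table (the column names and numbers are made up), not Northstar’s implementation.

```python
# Hypothetical reconstruction of the drag-and-drop query: co-occurrence of three
# disease categories within a selected age range. Data and column names invented.
import pandas as pd

patients = pd.DataFrame({
    "age":        [34, 67, 45, 72, 29, 81, 55],
    "blood":      [1, 1, 0, 1, 0, 1, 0],
    "infectious": [0, 1, 0, 1, 0, 1, 1],
    "metabolic":  [1, 1, 0, 0, 0, 1, 0],
})

# "Circling" an age range on the bar chart corresponds to a filter like this one.
selected = patients[(patients["age"] >= 60) & (patients["age"] <= 85)]

# Co-occurrence: the fraction of selected patients with all three disease labels.
co_occurrence = selected[["blood", "infectious", "metabolic"]].all(axis=1).mean()
print(f"Co-occurrence in age range 60-85: {co_occurrence:.0%}")
```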

“It’s like a big, unbounded canvas where you can lay everything out however you want,” says Zgraggen, who is the key inventor of Northstar’s interactive interface. “Then, you can link things together to create more complex questions about your data.”


Approximating AutoML

With VDS, users can now also run predictive analytics on that data by getting models custom-fit to their tasks, such as data prediction, image classification, or analyzing complex graph structures.

Using the above example, say the medical researchers want to predict which patients may have blood disease based on all features in the dataset. They drag and drop “AutoML” from the list of algorithms. It’ll first produce a blank box with a “target” tab, under which they drop the “blood” feature. The system will automatically find the best-performing machine-learning pipelines, presented as tabs with constantly updated accuracy percentages. Users can stop the process at any time, refine the search, and examine each model’s error rates, structure, computations, and other characteristics.
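
VDS itself isn’t distributed as a standalone library, but the workflow it automates (pick a target column, search over candidate pipelines, report accuracies) can be roughly approximated with off-the-shelf tools. The sketch below uses scikit-learn as a stand-in; the dataset and the list of candidate pipelines are assumptions for illustration.

```python
# Rough stand-in for the AutoML step (not VDS itself): evaluate a few candidate
# pipelines for a chosen prediction target and report cross-validated accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # stand-in for a "blood disease" target

candidates = {
    "scaled logistic regression": make_pipeline(StandardScaler(),
                                                LogisticRegression(max_iter=1000)),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in candidates.items():
    accuracy = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {accuracy:.1%} cross-validated accuracy")
```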

According to the researchers, VDS is the fastest interactive AutoML tool to date, thanks, in part, to their custom “estimation engine.” The engine sits between the interface and the cloud storage, and it automatically creates several representative samples of a dataset that can be processed progressively to produce high-quality results in seconds.

“Together with my co-authors, I spent two years designing VDS to mimic how a data scientist thinks,” Shang says, meaning it instantly identifies which models and preprocessing steps it should or shouldn’t run on certain tasks, based on various encoded rules. It first chooses from a large list of possible machine-learning pipelines and runs simulations on the sample set. In doing so, it remembers results and refines its selection. After delivering fast approximated results, the system refines them in the back end, but the final numbers are usually very close to the first approximation.
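
The estimation engine itself isn’t public, but the progressive idea (score candidates on growing samples so an approximate answer arrives quickly and is refined as more data is processed) can be sketched. The code below is a simplified illustration under that assumption, not the engine described in the paper.

```python
# Simplified sketch of progressive evaluation (not the actual estimation engine):
# train on growing random samples so an early, approximate accuracy is available
# quickly and is refined as more of the training data is processed.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for fraction in (0.1, 0.25, 0.5, 1.0):
    n = max(20, int(fraction * len(X_train)))
    idx = rng.choice(len(X_train), size=n, replace=False)
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train[idx], y_train[idx])
    print(f"{fraction:>4.0%} of training data -> "
          f"estimated accuracy {model.score(X_test, y_test):.1%}")
```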

“For using a predictor, you don’t want to wait four hours to get your first results back. You want to already see what’s going on and, if you detect a mistake, you can immediately correct it. That’s normally not possible in any other system,” Kraska says. The researchers’ previous user studies, in fact, show that the moment you delay giving users results, they start to lose engagement with the system.

The researchers evaluated the tool on 300 real-world datasets. Compared with other state-of-the-art AutoML systems, VDS’s approximations were just as accurate but were generated within seconds, rather than the minutes to hours other tools require.

Next, the researchers are looking to add a feature that alerts users to potential data bias or errors. For instance, to protect patient privacy, researchers will sometimes code patients in medical datasets as aged 0 (if the age is unknown) or 200 (if the patient is over 95 years old). But novices may not recognize such conventions, which could completely throw off their analytics.
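
A warning like the one the researchers describe can start with something as simple as flagging values outside a plausible range. The snippet below is a hypothetical example of such a check, with invented data and column names.

```python
# Hypothetical sanity check for sentinel ages (for example, 0 meaning "unknown"
# or 200 meaning "over 95") that could otherwise silently skew an analysis.
import pandas as pd

patients = pd.DataFrame({"age": [34, 0, 67, 200, 45, 200, 81]})

suspicious = patients[(patients["age"] <= 0) | (patients["age"] > 120)]
if not suspicious.empty:
    values = sorted(set(suspicious["age"].tolist()))
    print(f"Warning: {len(suspicious)} of {len(patients)} age values look like "
          f"placeholders or outliers: {values}")
```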

“If you’re a new user, you may get results and think they’re great,” Kraska says. “But we can warn people that there may, in fact, be some outliers in the dataset that indicate a problem.”

Saturday, 3 August 2019

For better deep neural network vision, just add feedback (loops)

The DiCarlo lab finds that a recurrent architecture helps both artificial intelligence and our brains to better identify objects.

Source: MIT News

Your ability to recognize objects is remarkable. If you see a cup under unusual lighting or from unexpected directions, there’s a good chance that your brain will still compute that it is a cup. Such precise object recognition is one holy grail for artificial intelligence developers, such as those improving self-driving car navigation.

While modelling primate object recognition in the visual cortex has revolutionized artificial visual recognition systems, current deep learning systems are simplified, and fail to recognize some objects that are child’s play for primates such as humans.

In findings published in Nature Neuroscience, McGovern Institute investigator James DiCarlo and colleagues have found evidence that feedback improves recognition of hard-to-recognize objects in the primate brain, and that adding feedback circuitry also improves the performance of artificial neural network systems used for vision applications.

Deep convolutional neural networks (DCNNs) are currently the most successful models for accurately recognizing objects on a fast timescale (less than 100 milliseconds) and have a general architecture inspired by the primate ventral visual stream, the cortical regions that progressively build an accessible and refined representation of viewed objects. Most DCNNs are simple in comparison to the primate ventral stream, however.
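
As a point of reference for how “simple” a feedforward model is, a purely one-directional convolutional stack can be written in a few lines. The sketch below (assuming PyTorch) is a generic toy network, not one of the DCNNs examined in the study.

```python
# Toy feedforward convolutional stack (assumes PyTorch): a generic illustration
# of a one-directional pipeline, not a model from the study.
import torch
import torch.nn as nn

feedforward_net = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 56 * 56, 10),        # assumes 224x224 inputs and 10 classes
)

image_batch = torch.randn(1, 3, 224, 224)
logits = feedforward_net(image_batch)   # a single pass, station to station
print(logits.shape)                     # torch.Size([1, 10])
```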

“For a long period of time, we were far from a model-based understanding. Thus our field got started on this quest by modelling visual recognition as a feedforward process,” explains senior author DiCarlo, who is also the head of MIT’s Department of Brain and Cognitive Sciences and research co-leader in the Center for Brains, Minds, and Machines (CBMM). “However, we know there are recurrent anatomical connections in brain regions linked to object recognition.”

Think of feedforward DCNNs, and the portion of the visual system that first attempts to capture objects, as a subway line that runs forward through a series of stations. The extra, recurrent brain networks are instead like the streets above, interconnected and not unidirectional. Because it only takes about 200 ms for the brain to recognize an object quite accurately, it was unclear whether these recurrent interconnections in the brain had any role at all in core object recognition. Perhaps those recurrent connections are only in place to keep the visual system in tune over long periods of time; the return gutters of the streets, for example, help slowly clear them of water and trash but are not strictly needed to quickly move people from one end of town to the other. DiCarlo, along with lead author and CBMM postdoc Kohitij Kar, set out to test whether a subtle role of recurrent operations in rapid visual object recognition was being overlooked.

Challenging recognition

The authors first needed to identify objects that are trivially decoded by the primate brain but are challenging for artificial systems. Rather than trying to guess why deep learning was having problems recognizing an object (is it due to the clutter in the image? a misleading shadow?), the authors took an unbiased approach that turned out to be critical.

Kar explains further that “we realized that AI models actually don’t have problems with every image where an object is occluded or in clutter. Humans trying to guess why AI models were challenged turned out to be holding us back.”

Instead, the authors presented the deep learning system, as well as monkeys and humans, with images, homing in on “challenge images” in which the primates could easily recognize the objects but a feedforward DCNN ran into problems. When they, and others, added appropriate recurrent processing to these DCNNs, object recognition in challenge images suddenly became a breeze.
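
The recurrent models used in this line of work are more elaborate, but the core idea (feed a layer’s output back into its own computation over a few time steps) can be sketched briefly. The toy block below, assuming PyTorch, illustrates that idea only; it is not the architecture from the paper.

```python
# Toy sketch of adding recurrence to a convolutional layer (assumes PyTorch):
# the layer's output is fed back and combined with the input over several steps.
import torch
import torch.nn as nn

class RecurrentConvBlock(nn.Module):
    def __init__(self, channels: int, steps: int = 4):
        super().__init__()
        self.feedforward = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.feedback = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.steps = steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        state = torch.relu(self.feedforward(x))
        for _ in range(self.steps - 1):
            # Later passes see the input plus feedback from the previous state.
            state = torch.relu(self.feedforward(x) + self.feedback(state))
        return state

block = RecurrentConvBlock(channels=32)
features = block(torch.randn(1, 32, 56, 56))
print(features.shape)   # torch.Size([1, 32, 56, 56])
```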

Processing times

Kar used neural recording methods with very high spatial and temporal precision to determine whether these images were really so trivial for primates. Remarkably, they found that although challenge images had initially appeared to be child’s play to the human brain, they actually involve extra neural processing time (about an additional 30 ms), suggesting that recurrent loops operate in our brain, too.

“What the computer vision community has recently achieved by stacking more and more layers onto artificial neural networks, evolution has achieved through a brain architecture with recurrent connections,” says Kar.

Diane Beck, professor of psychology and co-chair of the Intelligent Systems Theme at the Beckman Institute and not an author on the study, explains further. “Since entirely feedforward deep convolutional nets are now remarkably good at predicting primate brain activity, it raised questions about the role of feedback connections in the primate brain. This study shows that, yes, feedback connections are very likely playing a role in object recognition after all.”

What does this mean for a self-driving car? It shows that deep learning architectures involved in object recognition need recurrent components if they are to match the primate brain, and also indicates how to operationalize this procedure for the next generation of intelligent machines.

“Recurrent models offer predictions of neural activity and behaviour over time,” says Kar. “We may now be able to model more involved tasks. Perhaps one day, the systems will not only recognize an object, such as a person, but also perform cognitive tasks that the human brain so easily manages, such as understanding the emotions of other people.”

This work was supported by the Office of Naval Research and the Center for Brains, Minds, and Machines through the National Science Foundation.

