Sunday, 25 November 2018

Analog Electronics | Behzad Razavi


Get an insight into the world of Analog electronics. These video classes have been developed by Behzad Razavi, who is a pioneer in the field of electronics. His lectures are considered an encyclopedia by those who love electronics. He is also the author of many academic publications. Happy learning.







Analog Electronics TS Series



Get an insight into the world of Analog electronics. Here Engineering Insights introduces you to various learning materials prepared by different individuals, institutions, and organizations. In the age of big data, we are ready to help you by using advanced machine learning techniques to rate the materials on the different criteria a learner always looks for. Since we are still at a budding stage, kindly bear with our mistakes. Happy learning.



Analog Electronics by Behzad Razavi
Ratings: Beginner 8 | Optimal 7 | Detailed 6 | Overall 8
Topics: Basics, Bipolar, MOS

Analog electronics are electronic systems with a continuously variable signal, in contrast to digital electronics where signals usually take only two levels.
Learn Now

Digital Electronics TS Series



Get an insight into the world of Digital electronics. Here Engineering Insights introduces you to various learning materials prepared by different individuals, institutions, and organizations. In the age of big data, we are ready to help you by using advanced machine learning techniques to rate the materials on the different criteria a learner always looks for. Since we are still at a budding stage, kindly bear with our mistakes. Happy learning.



Digital Electronics by Tutorials Point
Ratings: Beginner 8 | Optimal 7 | Crash 6 | Overall 8
Topics: Number Systems, Conversions

Digital electronics are electronics that operate on digital signals. In contrast, analog circuits manipulate analog signals whose performance is more subject to manufacturing tolerance, signal attenuation and noise.
Learn Now

Digital Electronics | TutorialSpot

Get an insight into the world of Digital electronics. These video classes have been developed by Tutorials Point based on the latest GATE syllabus and will be useful for Electronics Engineering students as well as for GATE, IES, and other PSU exam preparation. Happy learning.








Wednesday, 21 November 2018

Electronics TutorialSpot


Get an insight into the world of electronics. Here Engineering Insights introduces you to various learning materials prepared by different individuals, institutions, and organizations. In the age of big data, we are ready to help you by using advanced machine learning techniques to rate the materials on the different criteria a learner always looks for. Since we are still at a budding stage, kindly bear with our mistakes. Happy learning.


Analog Electronics


Analog electronics are electronic systems with a continuously variable signal, in contrast to digital electronics where signals usually take only two levels.
Learn Now

Basic Electronics

Digital electronics are electronics that operate on digital signals. In contrast, analog circuits manipulate analog signals whose performance is more subject to manufacturing tolerance, signal attenuation and noise.
Learn Now

Digital System Design

The goal is to provide some basic information about electronic circuits. We make the assumption that you have no prior knowledge of electronics, electricity, or circuits, and start from the basics.
Learn Now

Digital Electronics

Digital electronics are electronics that operate on digital signals. In contrast, analog circuits manipulate analog signals whose performance is more subject to manufacturing tolerance, signal attenuation and noise. Learn Now

One of the Fathers of AI Is Worried About Its Future






Yoshua Bengio wants to stop talk of an AI arms race and make the technology more accessible to the developing world.

Yoshua Bengio is a grandmaster of modern artificial intelligence.

Alongside Geoff Hinton and Yann LeCun, Bengio is famous for championing a technique known as deep learning that in recent years has gone from an academic curiosity to one of the most powerful technologies on the planet.

Deep learning involves feeding data to large, crudely-simulated neural networks, and it has proven incredibly powerful and effective for all sorts of practical tasks, from voice recognition and image classification to controlling self-driving cars and automating business decisions.
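To make the phrase “feeding data to large, crudely-simulated neural networks” a bit more concrete, here is a minimal sketch of a tiny two-layer network trained by gradient descent on made-up data. The dataset, layer sizes, and learning rate are purely illustrative assumptions, not drawn from any system mentioned in the article.

    import numpy as np

    # Toy "dataset": 200 two-dimensional points, label 1 if the point lies inside a circle.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 2))
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float).reshape(-1, 1)

    # A tiny two-layer network: 2 inputs -> 16 hidden units -> 1 output.
    W1 = rng.normal(0, 0.5, size=(2, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for step in range(2000):
        # Forward pass: feed the data through the network.
        h = np.tanh(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)

        # Backward pass: gradients of the cross-entropy loss.
        grad_out = (p - y) / len(X)
        gW2 = h.T @ grad_out
        gb2 = grad_out.sum(axis=0)
        grad_h = grad_out @ W2.T * (1 - h ** 2)
        gW1 = X.T @ grad_h
        gb1 = grad_h.sum(axis=0)

        # Gradient-descent update of the weights.
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    accuracy = ((p > 0.5) == y).mean()
    print("training accuracy: %.2f" % accuracy)

The same loop, scaled up to millions of parameters and far larger datasets, is what powers the voice-recognition and image-classification systems mentioned above.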

Bengio has resisted the lure of any big tech company. While Hinton and LeCun joined Google and Facebook respectively, he remains a full-time professor at the University of Montreal. (He did, however, cofound Element AI in 2016, a company that built a very successful business helping big companies explore the commercial applications of AI research.)

Bengio met with MIT Technology Review’s senior editor for AI, Will Knight, at an MIT event recently.

What do you make of the idea that there’s an AI race between different countries?

I don’t like it. I don’t think it’s the right way to do it.

We could collectively participate in a race, but as a scientist and somebody who wants to think about the common good, I think we’re better off thinking about how to both build smarter machines and make sure AI is used for the wellbeing of as many people as possible.

Are there ways to foster more collaboration between countries?

We could make it easier for people from developing countries to come here. It is a big problem right now. In Europe or the US or Canada it is very difficult for an African researcher to get a visa. It’s a lottery, and very often they will use any excuse to refuse access. This is totally unfair. It is already hard for them to do research with little resources, but if in addition they can’t have access to the community, I think that’s really unfair. As a way to counter some of that, we are going to have the ICLR conference [a major AI conference] in 2020 in Africa.

Inclusivity has to be more than a word we say to look good. The potential for AI to be useful in the developing world is even greater. They need to improve technology even more than we do, and they have different needs.

Are you worried about just a few AI companies, in the West and perhaps China, dominating the field of AI?

Yes, it’s another reason why we need to have more democracy in AI research. It’s that AI research by itself will tend to lead to concentrations of power, money, and researchers. The best students want to go to the best companies. They have much more money, they have much more data. And this is not healthy. Even in a democracy, it’s dangerous to have too much power concentrated in a few hands.

There has been a lot of controversy over military uses of AI. Where do you stand on that?

I stand very firmly against.

Even non-lethal uses of AI?

Well, I don’t want to prevent that. I think we need to make it immoral to have killer robots. We need to change the culture, and that includes changing laws and treaties. That can go a long way.

Of course, you’ll never completely prevent it, and people say, “some rogue country will develop these things.” My answer is that one, we want to make them feel guilty for doing it, and two, there’s nothing to stop us from building defensive technology. There’s a big difference between defensive weapons that will kill off drones, and offensive weapons that are targeting humans. Both can use AI.

Shouldn’t AI experts work with the military to ensure this happens?

If they had the right moral values, fine. But I don’t completely trust military organizations because they tend to put duty before morality. I wish it was different.

What are you most excited about in terms of new AI research?

I think we need to consider the hard challenges of AI and not be satisfied with short-term, incremental advances. I’m not saying I want to forget deep learning. On the contrary, I want to build on it. But we need to be able to extend it to do things like reasoning, learning causality, and exploring the world in order to learn and acquire information.

If we really want to approach human-level AI, it’s another ballgame. We need long-term investments and I think academia is the best place to carry that torch.

You mention causality — in other words, grasping not just patterns in data but why something happens. Why is that important, and why is it so hard?

If you have a good causal model of the world you are dealing with, you can generalize even in unfamiliar situations. That’s crucial. We humans are able to project ourselves into situations that are very different from our day-to-day experience. Machines are not, because they don’t have these causal models.

We can hand-craft them, but that’s not enough. We need machines that can discover causal models. To some extent it’s never going to be perfect. We don’t have a perfect causal model of reality; that’s why we make a lot of mistakes. But we are much better at doing this than other animals.

Right now, we don’t really have good algorithms for this, but I think if enough people work at it and consider it important, we will make advances.

Source: MIT Technology Review

Tuesday, 20 November 2018

Cybersecurity Companies Will Soon Have Millions of Jobs They Can’t Fill. Here’s the Tactic They’re Using to Close the Talent Gap

Cybersecurity is the latest of the non-traditional industries turning to apprenticeships to recruit talent

While many employers now are grappling with the tight job market and historically low unemployment rate, one industry in particular is facing a severe hiring shortage. Cybersecurity companies have struggled to find enough good, qualified hires for decades, and the situation is only getting worse: By 2020, the industry will have more than 1.5 million unfilled positions, according to Harvard Business Review.

“We have a critical shortage of skilled cyber employees because everything we do is now connected to the Internet,” says Leigh Armistead, president of Hampton, Virginia-based cybersecurity company Peregrine Technical Solutions.

As a result, some cybersecurity companies are trying to groom the next crop of employees by borrowing a training tactic more common to blue-collar industries like construction and manufacturing: apprenticeships.

Washington has been pushing the apprenticeship model in recent years as a way to close the skills gap in a number of industries. Both the Obama and Trump administrations have made the programs a “pillar of the workforce training strategy,” says Tamar Jacoby, president of the nonprofit ImmigrationWorks USA and the think tank Opportunity America. The Obama administration directed $265 million toward expanding apprenticeships, while the Trump administration has so far authorized $150 million in funding to encourage more industry-recognized programs. The funding has helped boost the number of registered apprentices to half a million in 2017, up 42 percent from 2013, including in non-traditional fields like insurance, nursing, and finance.

Cybersecurity companies have embraced apprenticeships in part because the training is especially effective in industries where best practices quickly become outdated. The trainees learn, among other tasks, how to manage and defend a network from various security threats, which are constantly changing. Combining both classroom knowledge and practical skills from on-the-job learning is what Jacoby calls the “gold standard” of training.

As such, it’s no small investment: a single certified apprentice can run a company $25,000 to $30,000 a year, which includes the college courses and a wage.

But companies say the investment is worth it because of the employee loyalty it fosters. “They’ve been part of your culture for two to three years,” says Armistead. “The idea is they’re going to stay.”

That’s what Keith Kregg, vice president at Innovative Systems in Raleigh, North Carolina, is betting. He says competitors have been “cherry picking” the company’s employees as soon as six months after they’re hired. As a way to preempt the job hopping, six years ago the company decided to develop an apprenticeship program to help grow talent in-house. The first cohort with five apprentices launched in 2015; this year all five will receive full-time offers with salaries upwards of $80,000. (Entry-level salaries without the training range from $60,000 to $65,000.)

In Mount Pleasant, South Carolina, Girish Seshagiri, vice president of ISHPI, which provides cyber services for the government, has found a way to make apprenticeships more affordable at scale.

In 2013, Seshagiri partnered with two other small companies — a credit union and manufacturer — and community colleges in Illinois to develop a joint apprenticeship program with a curriculum modeled after Carnegie Mellon’s. (ISHPI’s software team is based in Peoria, Illinois.) The group launched the first cohort two years later with seven apprentices. The model “will be applicable for the smaller employers, so they can come together,” he says. “None of us is big enough to have [our own] class of 15.”

The model is cost-effective: ISHPI’s apprentices earn $12.50 per hour plus receive mentorship and lodging. The companies collectively decide on college curriculum, wages, competency standards, and how to interface with government agencies.

In general, apprenticeships are “a big undertaking for the employer,” says Jacoby. In addition to the costs and logistics, white-collar employers and potential apprentices often also have to overcome the perception that apprenticeships lead to blue-collar occupations, she says.

Still, if the talent gap continues to widen, these programs are going to become more appealing. “Many employers are seriously worrying where the next workers are going to come from,” she says. “If your job has some skill attached to it, you’re thinking about who’s going to train them — so it’s possible that more white-collar companies will go there.”

Source: Inc.

Explaining the plummeting cost of solar power

Researchers uncover the factors that have caused photovoltaic module costs to drop by 99 percent.


Photos show a solar installation from 1988 (left) and a present-day version. Though the basic underlying technology is the same, a variety of factors have contributed to a hundredfold decline in costs. Now, researchers have identified the relative importance of these different factors.

The dramatic drop in the cost of solar photovoltaic (PV) modules, which has fallen by 99 percent over the last four decades, is often touted as a major success story for renewable energy technology. But one question has never been fully addressed: What exactly accounts for that stunning drop?

A new analysis by MIT researchers has pinpointed what caused the savings, including the policies and technology changes that mattered most. For example, they found that government policy to help grow markets around the world played a critical role in reducing this technology’s costs. At the device level, the dominant factor was an increase in “conversion efficiency,” or the amount of power generated from a given amount of sunlight.

The insights can help to inform future policies and evaluate whether similar improvements can be achieved in other technologies. The findings are being reported today in the journal Energy Policy, in a paper by MIT Associate Professor Jessika Trancik, postdoc Goksin Kavlak, and research scientist James McNerney.

The team looked at the technology-level (“low-level”) factors that have affected cost by changing the modules and manufacturing process. Solar cell technology has improved greatly; for example, the cells have become much more efficient at converting sunlight to electricity. Factors like this, Trancik explains, fall in a category of low-level mechanisms that deal with the physical products themselves.

The team also estimated the cost impacts of “high-level” mechanisms, including learning by doing, research and development, and economies of scale. Examples include the way improved production processes have cut the number of defective cells produced and thus improved yields, and the fact that much larger factories have led to significant economies of scale.

The study, which covered the years 1980 to 2012 (during which module costs fell by 97 percent), found that there were six low-level factors that accounted for more than 10 percent each of the overall drop in costs, and four of those factors accounted for at least 15 percent each. The results point to “the importance of having many different ‘knobs’ to turn, to achieve a steady decline in cost,” Trancik says. The more different opportunities there are to reduce costs, the less likely it is that they will be exhausted quickly.

The relative importance of the factors has changed over time, the study shows. In earlier years, research and development was the dominant cost-reducing high-level mechanism, through improvements to the devices themselves and to manufacturing methods. For about the last decade, however, the largest single high-level factor in the continuing cost decline has been economies of scale, as solar-cell and module manufacturing plants have become ever larger.

“This raises the question of which factors can help continue the cost decline,” Trancik says. “What are the limits to the size of the plants?”

In terms of government policy, Trancik says, policies that stimulated market growth accounted for about 60 percent of the overall cost decline, so “that played an important part in reducing costs.” Policies stimulating market growth globally included measures such as renewable portfolio standards, feed-in tariffs, and a variety of subsidies. Government-funded research and development in various nations accounted for the other 40 percent — although public R&D played a larger part in the earlier years, she says.

This is important information, she adds, because “for a long time there has been a debate about whether these policies work — were they really driving technological improvement? Now, we can not only answer that question, we can say by how much.”

This finding, which is based on modeling device-level mechanisms rather than purely correlational analysis, provides strong evidence of a “virtuous cycle” that can be created between technology innovation and policies to reduce emissions, Trancik says. As emissions policies are implemented, low-carbon technology markets grow, technologies improve, and the costs of future emissions reductions can decline. “This analysis helps us understand why this happens, and how strong the feedbacks can be.”

Trancik and her co-workers plan to apply similar methodology to analyzing other technologies, such as nuclear power, as well as the other parts of solar installations — the so-called balance of systems, including the mounting structures and power controllers needed for the solar modules — which were not included in this study. “The method we developed can be used as a tool to assess costs of different technologies, both retrospectively and prospectively,” Kavlak says.

“This opens up a different way of modeling technological change, from the device level all the way up to policy measures, and everything in between,” Trancik says. “We’re opening up the black box of technological innovation.”

“Going forward, we can improve our intuition about what factors in general make technologies improve quickly. The application of this tool to solar PV is just the beginning of what we can do,” McNerney says.

While the study focused on past performance, the factors it identified suggest that “it does look like there are opportunities for further cost improvements with this technology.” The findings also suggest that researchers should continue working on alternatives to crystalline silicon, which is the dominant form of solar photovoltaic technology today, since other varieties being actively explored have potentially higher efficiencies or lower materials costs.

The study also highlights the importance of continuing the progress in improving the efficiency of the manufacturing systems, whose role in driving down costs has been important. “There are likely more gains to be had in this direction,” Trancik says.

Gregory Nemet, a professor of public affairs at the University of Wisconsin at Madison, who was not involved in the study, says, “This work is important in that it identifies that the growth in demand for solar PV in the past 15 years was the most important driver of the astounding cost reductions over that period. Policies in Japan, Germany, Spain, California, and China drove the growth of the market and created opportunities for automation, scale, and learning by doing.”

Nemet adds, “Their model is simple and general, which could make it useful for designing policies for other technologies that will be needed to address climate change and other energy-related problems.”

The research was supported by the U.S. Department of Energy.

Source: MIT News

Friday, 16 November 2018

Why Noviosense's In-Eye Glucose Monitor Might Work Better Than Google's

Noviosense takes a different approach to glucose sensing in tears, with promising results.


For decades researchers have clamoured to build a wearable, noninvasive glucose monitor. Such a device could help the millions of people living with diabetes track their glucose levels more closely, without the pain of pricking their skin with needles.

Scientists have tried tracking glucose in sweat, saliva, breath and urine. They’ve tried to measure it in blood from outside the skin using spectroscopy. And lately, several groups, including Google, have proposed measuring glucose in tears using smart contact lenses.

But so far, no one has succeeded. The quest for a noninvasive, wearable glucose monitor has mostly left in its wake a trail of dead companies and jaded researchers.

Now, there’s a glimmer of hope. Last month, a start-up company in The Netherlands called Noviosense, quietly published data on human testing of its tear glucose sensor. The study was small—only six participants—but the results, published October 12 in the journal Biomacromolecules, look promising.

“These are the best results I have seen yet” on tear glucose, says John L. Smith, a former executive for blood glucose meter maker LifeScan, who has devoted the latter part of his career to evaluating noninvasive glucose sensing technologies as a consultant. “But substantial improvement is still needed for it to be good enough for monitoring.” 

Coming from Smith, that is high praise. Smith is a well-known sceptic of wearable, needle-free glucose monitors. In fact, in the 2018 edition of his book The Pursuit of Noninvasive Glucose: Hunting the Deceitful Turkey, Smith all but said the technology will never come to fruition: “this [book] may be the final update this subject needs.” He continued: “[M]any participants and observers are beginning to feel this is an idea whose time never came and which may soon be gone without ever seeing success.” 

It helps that Noviosense has taken a different approach to measure glucose in tears. Instead of putting a sensor in a contact lens, Noviosense’s device is placed by the user under the lower eyelid. There, it accesses a reliable flow of tears that better reflect the true levels of glucose in the blood, says Christopher Wilson, founder and CEO of Noviosense. Plus, the design doesn’t dry the eye or impede vision, like contacts can, Wilson says.

Noviosense’s flexible, spring-shaped coil drops in behind the lower eyelid—something users can do themselves—and naturally stays put. The spring-shaped electrodes are coated in a biopolymer that contains an immobilized enzyme that, when exposed to glucose, starts a chemical reaction. That reaction results in the oxidation of hydrogen peroxide, which can be detected by the electrodes using a chronoamperometric measurement. 
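The article does not describe Noviosense’s calibration algorithm, but the general principle of an enzymatic, chronoamperometric sensor is that the steady-state current at the electrode scales roughly with glucose concentration, so a per-sensor calibration can map current back to concentration. The sketch below is only a toy illustration of that idea under a simple linear assumption; the numbers, units, and function names are invented for the example.

    import numpy as np

    # Hypothetical calibration points: electrode current (nA) measured at
    # known glucose concentrations (mg/dL) during a calibration run.
    cal_glucose = np.array([50.0, 100.0, 200.0, 300.0])   # mg/dL (assumed values)
    cal_current = np.array([8.0, 15.5, 30.0, 44.5])       # nA (assumed values)

    # Fit a simple linear model: current = slope * glucose + offset.
    slope, offset = np.polyfit(cal_glucose, cal_current, deg=1)

    def glucose_from_current(current_na):
        """Invert the linear calibration to estimate glucose (mg/dL)
        from a steady-state chronoamperometric current reading (nA)."""
        return (current_na - offset) / slope

    # Example: a measured steady-state current of 22 nA.
    print("estimated glucose: %.1f mg/dL" % glucose_from_current(22.0))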

In Noviosense’s final design, the device will wirelessly transmit glucose data to a phone when it is held near the eye, or, for continuous measurement, to a pair of eyeglasses, Wilson says. For their first human study, the company used wired sensors, because the company’s wireless components are still in the process of being shrunk to fit the sensor tip, and the wired version is needed to develop the device’s calibration algorithms, Wilson says.

The company compared its device to needle-based glucose monitors that measure the analyte in both blood (the gold standard) and interstitial fluid (a close second in accuracy). The results: 95 per cent of the data points from tear glucose measurements were either the same as blood or fell within an acceptable range of error. The tear data wasn’t as good as blood, but it was about the same as interstitial fluid. 

That beats previous studies by a lot. “There are more than a dozen literature reports that attempt to show a correlation between blood and tear glucose and most of the results range from essentially no correlation to perhaps a 60-70 per cent correlation,” which is just not good enough, says Smith. 

The problem with many of those previous studies lies in the method of tear collection, Smith says. Researchers have tried things like capillary tubes and filter paper. They’ve also tried to mechanically or chemically stimulate tear production to get bigger samples. But these methods mess with natural tear flow, and, thus, the ratio of glucose. 

Rather than stimulating tears or disrupting them in any way, Noviosense’s device acts as a flow cell under the eye lid, and accesses what Wilson calls “basal tears.” These tears are produced at a constant rate, and are “not a reaction to emotion or wind or a foreign body or rubbing.”

Despite the fact that the device is two centimetres long, Wilson says it’s comfortable, that it doesn’t pop out with eye rubbing, and that he and other Noviosense employees have worn it at length.

Perhaps the more obvious place to put a glucose sensor is in a contact lens, since a large percentage of people already wear them. Indeed, many groups have proposed some clever engineering for this. For example, researchers at the Ulsan National Institute of Science and Technology, in South Korea, earlier this year reported a stretchable contact lens that monitors glucose without distorting the wearer’s vision.

Even Google, through its life sciences offshoot Verily, jumped in. In 2014 the company announced it was developing a smart contact lens that would monitor glucose in tears. But Verily, and its partner Alcon, a division of Novartis, have since gone quiet on the matter.

Contact lenses present their own set of challenges. They tend to break up the lipid bilayer of the eye, making it dry, Wilson says. Fluid can also pool up behind contact lenses, rather than allowing a fresh supply to constantly pass through. These factors create errors in tear glucose measurements—or at least that’s Wilson’s hypothesis, based on the literature and his conversations with contact lens experts.

Whatever the reasons, none of the groups developing smart contact lenses has reported reliable glucose results in humans, says Smith, and “many researchers think that the Verily lens has not been successful,” he says.

Neither Verily nor Alcon responded to Spectrum’s requests for updates on their smart contact lens project.

Noviosense has already begun its next clinical trial, which involves 24 additional people, all with Type 1 diabetes. Each person will wear the in-eye device for about half a day, Wilson says. The goal will be to determine, in a larger patient population, whether the device is as accurate as needle-based devices on the market that measure glucose in interstitial fluid.

Glucose measurements taken from the interstitial fluid aren’t as accurate as those from the blood. But they are good enough, and both U.S. and European regulators have approved such devices for sale from companies such as Abbott, Dexcom, and Medtronic.

Their patch-like devices, called CGMs, or continuous glucose monitors, are wearable and adhere to the skin, typically on the abdomen or the back of the arm. The electronic patch injects a small needle just under the skin, where it tests for glucose in interstitial fluid using an electrochemical analysis similar to that in Noviosense’s device. The patch can be worn for up to two weeks, depending on the model.

Noviosense compared the performance of a CGM on the market made by Abbott with its own in-eye device. Using a statistical analysis called median absolute relative difference (MedARD), Noviosense’s device scored 12.5 percent—on par with Abbott’s CGM, according to Noviosense’s report. The company hopes the data from its next 24 subjects will also match the accuracy of Abbott’s device.
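MedARD has a simple definition: take the absolute difference between each sensor reading and its reference value, divide by the reference, and report the median across all paired points (the mean version is usually called MARD). The short sketch below shows the calculation on made-up paired readings; it is not Noviosense’s or Abbott’s data.

    import numpy as np

    def absolute_relative_difference(sensor, reference):
        """Per-point |sensor - reference| / reference, as a fraction."""
        sensor = np.asarray(sensor, dtype=float)
        reference = np.asarray(reference, dtype=float)
        return np.abs(sensor - reference) / reference

    # Hypothetical paired readings (mg/dL): tear-sensor estimate vs. blood reference.
    sensor_vals    = [95, 142, 180, 110, 68, 210]
    reference_vals = [100, 150, 170, 120, 75, 200]

    ard = absolute_relative_difference(sensor_vals, reference_vals)
    print("MedARD: %.1f %%" % (np.median(ard) * 100))  # median absolute relative difference
    print("MARD:   %.1f %%" % (np.mean(ard) * 100))    # mean absolute relative difference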
Content Credits: IEEE Spectrum

Wednesday, 14 November 2018

A novel way to advance a better battery design

Led by “Queen of Batteries” Christina Lampe-Onnerud, Cadenza Innovation is licensing its lithium-ion battery cell architecture to manufacturers around the world.

Cadenza Innovation founder and CEO Christina Lampe-Onnerud at the World Economic Forum’s Annual Meeting of the New Champions.
Cadenza Innovation has developed a new design that improves the performance, cost, and safety of large lithium-ion batteries. Now, with an unusual strategy for disseminating that technology, the company is poised to have an impact in industries including energy grid storage, industrial machines, and electric vehicles.

Rather than produce the batteries itself, Cadenza licenses its technology to manufacturers producing batteries for diverse applications. The company also works with licensees to both optimize their manufacturing processes and sell the new batteries to end users. The strategy ensures that the four-year-old company’s technology is deployed more quickly and widely than would otherwise be possible.

For Cadenza founder Christina Lampe-Onnerud, a former MIT postdoc and a battery industry veteran of more than 20 years, the goal is to help advance the industry just as the global demand for batteries reaches an inflection point.

“The crazy idea at the time [of the company’s founding] was to see if there was a different way to engage with the industry and help it accept a new technology in existing applications like cars or computers,” Lampe-Onnerud says. “Our thought was, if we really want to have an impact, we could inspire the industry to use existing capital deployed to get a better technology into the market globally and be a positive part of the climate change arena.”

With that lofty goal in mind, the Connecticut-based company has secured partnerships with organizations at every level of the battery supply chain, including suppliers of industrial minerals, original equipment manufacturers, and end users. Cadenza has demonstrated its proprietary “supercell” battery architecture in Fiat’s 500e car model and is in the process of completing a demonstration energy storage system to be used by the New York Power Authority, the largest state public utility company in the U.S., when energy demand is at its peak.

The company’s most significant partnership to date, however, was announced in September with Shenzhen BAK Battery Company, one of the world’s largest lithium-ion battery manufacturers. The companies announced BAK would begin mass producing batteries based on Cadenza’s supercell architecture in the first half of 2019.

The supercell architecture

Lampe-Onnerud’s extensive contacts in the lithium-ion battery space and a world-renowned technical team have quickened the pace of Cadenza’s rise, but the underlying driver of the company’s success is simple economics: Its technology has been shown to offer manufacturers increased energy density in battery cells while reducing production costs.

The majority of rechargeable lithium ion batteries are powered by cylindrical sheets of metal known as “jelly rolls.” For use in big batteries, jelly rolls can be made either large, to limit the total cost of the battery assembly, or small, to leverage a more efficient cell design that brings higher energy density. Many electric vehicle (EV) companies use large jelly rolls to avoid the durability and safety concerns that come with tightly packing small jelly rolls into a battery, which can lead to the failure of the entire battery if one jelly roll overheats.

Tesla famously achieves longer vehicle ranges by using small jelly rolls in its batteries, addressing safety issues with cooling tubes and intricate circuitry, and by spacing out each roll. But Cadenza has patented a simpler battery system it calls the “supercell” that allows small jelly rolls to be tightly packed together into one module.

The key to the supercell is a noncombustible ceramic fiber material that each jelly roll sits in like an egg in a carton. The material helps to control temperature throughout the cell and isolate damage caused by an overheated jelly roll. A metal shunt wrapped around each jelly roll and a flame retardant layer of the supercell wall that relieves pressure in the case of a thermal event add to its safety advantages.

The enhanced safety allows Cadenza to package the jelly rolls tightly for greater energy density, and the supercell’s straightforward design, which leverages many parts that are currently manufactured at low costs and high volumes, keeps production costs down. Finally, each supercell module is designed to click together like LEGO blocks, making it possible for manufacturers to easily scale their battery sizes to fit customer needs.

Cadenza’s safety, cost, and performance features were validated during a grant program with the Advanced Research Projects Agency-Energy (ARPA-E), which gave the company nearly $4 million to test the architecture beginning in 2013.

When the supercell architecture was publicly unveiled in 2016, Lampe-Onnerud made headlines by saying it could be used to boost the range of Tesla’s cars by 70 per cent. Now the goal is to get manufacturers to adopt the architecture.

“There will be many winners using this technology,” Lampe-Onnerud says. “We know we can deliver on the [safety, performance, and cost] claims. It’s going to be up to the licensee to decide how they leverage these advantages.”

At MIT, where “data gets to speak”

Lampe-Onnerud and her husband, Per Onnerud, who serves as Cadenza’s chief technology officer, held postdoctoral appointments at MIT after earning their PhDs at Uppsala University in their home country of Sweden. Lampe-Onnerud did lab work in inorganic chemistry in close collaboration with MIT materials science and mathematics professors, while Onnerud did research in the Department of Materials Science and Engineering. The experience left a strong impression on Lampe-Onnerud.

“MIT was a very formative experience,” she says. “You learn how to argue a point so that the data gets to speak. You just enable the data; there’s no spin. MIT has a special place in my heart.”

Lampe-Onnerud has maintained a strong connection with the Institute ever since, participating in alumni groups, giving guest lectures on campus, and serving as a member of the MIT Corporation visiting committee for the chemistry department — all while finding remarkable success in her career.

Lampe-Onnerud founded Boston-Power in 2004, which she grew into an internationally recognized manufacturer of batteries for consumer electronics, vehicles, and industrial applications while serving as the CEO until the company moved operations to China in 2012. In the early stages of the company, more than seven years after Lampe-Onnerud had finished her postdoc work, she discovered the enduring nature of support from the MIT community.

“We started looking for some angel investors, and one of the first groups that responded were the angels affiliated with MIT,” Lampe-Onnerud says. “We support each other because we tend to be attracted to intractable problems. It’s very much in the MIT spirit: We know, if we’re trying to solve big problems, it’s going to be difficult. So we like to collaborate.”

The high-profile experience at Boston Power earned her distinctions including the Technology Pioneer Award from the World Economic Forum, and Swedish Woman of the Year from the Swedish Women’s Educational Association. It also led some to deem her the “Queen of Batteries.”

Immediately after leaving Boston-Power, Lampe-Onnerud and her husband went to work on what would be Cadenza’s supercell architecture in their garage. They wanted to create a solution that would help lower the world’s carbon footprint, but they estimated that, at most, they’d be able to build one gigafactory every 18 months if they were to manufacture the batteries themselves. So they decided to license the technology instead.

The strategy has tradeoffs from a business perspective: Cadenza has needed to raise much less capital than Boston-Power did, but it will allow licensees to generate top-line and bottom-line growth while it receives a percentage of sales. Lampe-Onnerud is clearly happy to leverage her global network and share the upside to maximize Cadenza’s impact.

“My hope is that we are able to bring people together around this technology to do things that are really important, like taking down our carbon footprint, eliminating NOx [nitrogen oxide] emissions, or improving grid efficiency,” Lampe-Onnerud says. “It’s a different way to work together, so when an element of this ecosystem wins, we all win. It has been an inspiring process.”

Saturday, 10 November 2018

Google Doodle Celebrates Elisa Leonida Zamfirescu's 131st Birthday


Saturday's Google Doodle pays tribute to Elisa Leonida Zamfirescu, a pioneering Romanian engineer who would have turned 131 on that day. Zamfirescu, born on 10 November 1887, made history as one of the world's first female engineers. In her 86 years of life, Zamfirescu carved herself a spot in a male-dominated field, led geology labs, and studied Romanian mineral resources.

Here are five things you should know about Zamfirescu as her legacy is being honoured: 

1. She was rejected from her first school of choice due to discrimination against women

Zamfirescu, who grew up with 10 siblings, wanted to study at the School of Bridges and Roads in Bucharest after graduating high school but was rejected due to her gender.

Instead, she went to the Royal Technical University of Charlottenburg, now known as the Technical University of Berlin, where she studied mechanical engineering.

Zamfirescu enrolled in 1909 and graduated in 1912, becoming one of the first female engineers to do so in Europe.

2. She once worked for the Red Cross

Following her graduation, Zamfirescu went to work at Bucharest's Geological Institute, where she was the head of her laboratory.

During World War I, she worked for the Red Cross as a hospital manager around the small town of Mărășești, which was the site of the final major battle between Romania and Germany on the Romanian front in 1917.

3. She was a passionate and innovative worker

As part of her work as the head of her lab, Zamfirescu brought in new methods and new analysis techniques to study minerals and substances such as water, coal, and oil, according to Assistant Lecturer PhDc. Eng Iulia-Victoria Neagoe.

She is remembered as a dedicated engineer who worked long hours from morning to evening.

Zamfirescu kept working past retirement age and didn't fully retire until the age of 75, after a four-decade career, according to Neagoe.

4. There is a street named after her 

The street where Zamfirescu lived in Bucharest was renamed after her in 1993, 20 years after her death.

This isn't the only way the engineer's name still resonates today: an award named after her was created in 1997.

The "Premiul Elisa Leonida-Zamfirescu" honours female contributors to the fields of technology and science.

5. She was an advocate for international disarmament

In addition to her work as a chemical engineer, Zamfirescu took a stance in favour of disarmament, according to Neagoe.

She filed a complaint with the disarmament committee at London's Lancaster House, with a focus on the nuclear threat.

The article was originally published in the Independent. Click here to view.

Thursday, 8 November 2018

How to mass produce cell-sized robots

Technique from MIT could lead to tiny, self-powered devices for environmental, industrial, or medical monitoring.
This photo shows circles on a graphene sheet where the sheet is draped over an array of round posts, creating stresses that will cause these discs to separate from the sheet. The gray bar across the sheet is liquid being used to lift the discs from the surface.
Tiny robots no bigger than a cell could be mass-produced using a new method developed by researchers at MIT. The microscopic devices, which the team calls “syncells” (short for synthetic cells), might eventually be used to monitor conditions inside an oil or gas pipeline or to search out disease while floating through the bloodstream.


The key to making such tiny devices in large quantities lies in a method the team developed for controlling the natural fracturing process of atomically-thin, brittle materials, directing the fracture lines so that they produce minuscule pockets of a predictable size and shape. Embedded inside these pockets are electronic circuits and materials that can collect, record, and output data.

The novel process, called “autoperforation,” is described in a paper published today in the journal Nature Materials, by MIT Professor Michael Strano, postdoc Pingwei Liu, graduate student Albert Liu, and eight others at MIT.

The system uses a two-dimensional form of carbon called graphene, which forms the outer structure of the tiny syncells. One layer of the material is laid down on a surface, then tiny dots of a polymer material, containing the electronics for the devices, are deposited by a sophisticated laboratory version of an inkjet printer. Then, a second layer of graphene is laid on top.


Controlled fracturing

People think of graphene, an ultrathin but extremely strong material, as being “floppy,” but it is actually brittle, Strano explains. But rather than considering that brittleness a problem, the team figured out that it could be used to their advantage.

“We discovered that you can use the brittleness,” says Strano, who is the Carbon P. Dubbs Professor of Chemical Engineering at MIT. “It's counterintuitive. Before this work, if you told me you could fracture a material to control its shape at the nanoscale, I would have been incredulous.”

But the new system does just that. It controls the fracturing process so that rather than generating random shards of material, like the remains of a broken window, it produces pieces of uniform shape and size. “What we discovered is that you can impose a strain field to cause the fracture to be guided, and you can use that for controlled fabrication,” Strano says.

When the top layer of graphene is placed over the array of polymer dots, which form round pillar shapes, the places where the graphene drapes over the round edges of the pillars form lines of high strain in the material. As Albert Liu describes it, “imagine a tablecloth falling slowly down onto the surface of a circular table. One can very easily visualize the developing circular strain toward the table edges, and that’s very much analogous to what happens when a flat sheet of graphene folds around these printed polymer pillars.”

As a result, the fractures are concentrated right along those boundaries, Strano says. “And then something pretty amazing happens: The graphene will completely fracture, but the fracture will be guided around the periphery of the pillar.” The result is a neat, round piece of graphene that looks as if it had been cleanly cut out by a microscopic hole punch.

Because there are two layers of graphene, above and below the polymer pillars, the two resulting disks adhere at their edges to form something like a tiny pita bread pocket, with the polymer sealed inside. “And the advantage here is that this is essentially a single step,” in contrast to many complex clean-room steps needed by other processes to try to make microscopic robotic devices, Strano says.

The researchers have also shown that other two-dimensional materials in addition to graphene, such as molybdenum disulfide and hexagonal boron nitride, work just as well.

Cell-like robots

Ranging in size from that of a human red blood cell, about 10 micrometers across, up to about 10 times that size, these tiny objects “start to look and behave like a living biological cell. In fact, under a microscope, you could probably convince most people that it is a cell,” Strano says.

This work follows up on earlier research by Strano and his students on developing syncells that could gather information about the chemistry or other properties of their surroundings using sensors on their surface, and store the information for later retrieval; for example, a swarm of such particles could be injected into one end of a pipeline and retrieved at the other to gain data about conditions inside it. While the new syncells do not yet have as many capabilities as the earlier ones, those were assembled individually, whereas this work demonstrates a way of easily mass-producing such devices.

Apart from the syncells’ potential uses for industrial or biomedical monitoring, the way the tiny devices are made is itself an innovation with great potential, according to Albert Liu. “This general procedure of using controlled fracture as a production method can be extended across many length scales,” he says. “[It could potentially be used with] essentially any 2-D materials of choice, in principle allowing future researchers to tailor these atomically thin surfaces into any desired shape or form for applications in other disciplines.”

This is, Albert Liu says, “one of the only ways available right now to produce stand-alone integrated microelectronics on a large scale” that can function as independent, free-floating devices. Depending on the nature of the electronics inside, the devices could be provided with capabilities for movement, detection of various chemicals or other parameters, and memory storage.

There are a wide range of potential new applications for such cell-sized robotic devices, says Strano, who details many such possible uses in a book he co-authored with Shawn Walsh, an expert at Army Research Laboratories, on the subject, called “Robotic Systems and Autonomous Platforms,” which is being published this month by Elsevier Press.

As a demonstration, the team “wrote” the letters M, I, and T into a memory array within a syncell, which stores the information as varying levels of electrical conductivity. This information can then be “read” using an electrical probe, showing that the material can function as a form of electronic memory into which data can be written, read, and erased at will. It can also retain the data without the need for power, allowing information to be collected at a later time. The researchers have demonstrated that the particles are stable over a period of months even when floating around in the water, which is a harsh solvent for electronics, according to Strano.
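The article only says that the syncell memory stores data as distinct levels of electrical conductivity that a probe can later read back; as a loose software analogy of that write/read idea (the two conductance levels, the noise, and the threshold below are invented for illustration, not taken from the paper), one could picture it roughly like this.

    import numpy as np

    # Illustrative conductance levels (arbitrary units) representing bits 0 and 1;
    # the device's real physics and values are not given in the article.
    LEVELS = {0: 1.0, 1: 10.0}
    THRESHOLD = 5.0
    rng = np.random.default_rng(1)

    def write(text):
        """Encode each character as 8 bits, each stored as a noisy conductance level."""
        bits = [(ord(c) >> i) & 1 for c in text for i in range(7, -1, -1)]
        ideal = np.array([LEVELS[b] for b in bits], dtype=float)
        return ideal + rng.normal(0, 0.5, size=ideal.shape)  # write/read noise

    def read(cells):
        """Threshold measured conductances back into bits, then into characters."""
        bits = (cells > THRESHOLD).astype(int)
        chars = []
        for i in range(0, len(bits), 8):
            byte = "".join(str(b) for b in bits[i:i + 8])
            chars.append(chr(int(byte, 2)))
        return "".join(chars)

    stored = write("MIT")
    print(read(stored))  # prints "MIT" as long as the noise stays within the margin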

“I think it opens up a whole new toolkit for micro- and nanofabrication,” he says.

Daniel Goldman, a professor of physics at Georgia Tech, who was not involved with this work, says, “The techniques developed by Professor Strano’s group have the potential to create microscale intelligent devices that can accomplish tasks together that no single particle can accomplish alone.”

In addition to Strano, Pingwei Liu, who is now at Zhejiang University in China, and Albert Liu, a graduate student in the Strano lab, the team included MIT graduate student Jing Fan Yang, postdocs Daichi Kozawa, Juyao Dong, and Volodomyr Koman, Youngwoo Son PhD ’16, research affiliate Min Hao Wong, and Dartmouth College student Max Saccone and visiting scholar Song Wang. The work was supported by the Air Force Office of Scientific Research, and the Army Research Office through MIT’s Institute for Soldier Nanotechnologies.


Content originally published by David L. Chandler: MIT News

Sunday, 4 November 2018

New method peeks inside the 'black box' of artificial intelligence

Artificial intelligence, specifically machine learning, is a part of daily life for computer and smartphone users. From autocorrecting typos to recommending new music, machine learning algorithms can help make life easier. They can also make mistakes.

It can be challenging for computer scientists to figure out what went wrong in such cases. This is because many machine learning algorithms learn from the information and make their predictions inside a virtual "black box," leaving few clues for researchers to follow.

A group of computer scientists at the University of Maryland has developed a promising new approach to interpreting machine learning algorithms. Unlike previous efforts, which typically sought to "break" the algorithms by removing key words from inputs to yield the wrong answer, the UMD group instead reduced the inputs to the bare minimum required to yield the correct answer. On average, the researchers got the correct answer with an input of fewer than three words.

In some cases, the researchers' model algorithms provided the correct answer based on a single word. Frequently, the input word or phrase appeared to have little obvious connection to the answer, revealing important insights into how some algorithms react to specific language. Because many algorithms are programmed to give an answer no matter what—even when prompted by a nonsensical input—the results could help computer scientists build more effective algorithms that can recognize their own limitations.

The researchers will present their work on November 4, 2018 at the 2018 Conference on Empirical Methods in Natural Language Processing.

"Black-box models do seem to work better than simpler models, such as decision trees, but even the people who wrote the initial code can't tell exactly what is happening," said Jordan Boyd-Graber, the senior author of the study and an associate professor of computer science at UMD. "When these models return incorrect or nonsensical answers, it's tough to figure out why. So instead, we tried to find the minimal input that would yield the correct result. The average input was about three words, but we could get it down to a single word in some cases."

In one example, the researchers entered a photo of a sunflower and the text-based question, "What colour is the flower?" as inputs into a model algorithm. These inputs yielded the correct answer of "yellow." After rephrasing the question into several different shorter combinations of words, the researchers found that they could get the same answer with "flower?" as the only text input for the algorithm.

In another, more complex example, the researchers used the prompt, "In 1899, John Jacob Astor IV invested $100,000 for Tesla to further develop and produce a new lighting system. Instead, Tesla used the money to fund his Colorado Springs experiments."

They then asked the algorithm, "What did Tesla spend Astor's money on?" and received the correct answer, "Colorado Springs experiments." Reducing this input to the single word "did" yielded the same correct answer.
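The procedure described here is essentially a greedy loop: keep deleting words as long as the model's answer does not change. The version below uses a toy stand-in for the question-answering model; the function names and the toy answering rule are assumptions for illustration, and the UMD group's actual models and code are not reproduced here.

    def reduce_input(question, answer_fn):
        """Greedily drop words while the model's answer stays unchanged.

        `answer_fn` is a stand-in for any question-answering model: it takes a
        question string and returns the model's answer string.
        """
        original_answer = answer_fn(question)
        words = question.split()
        changed = True
        while changed:
            changed = False
            for i in range(len(words)):
                candidate = words[:i] + words[i + 1:]
                if candidate and answer_fn(" ".join(candidate)) == original_answer:
                    words = candidate  # this word was not needed for the answer
                    changed = True
                    break
        return " ".join(words)

    # Toy stand-in model: answers "yellow" whenever the word "flower?" is present.
    def toy_model(q):
        return "yellow" if "flower?" in q else "unknown"

    print(reduce_input("What colour is the flower?", toy_model))  # -> "flower?"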

The work reveals important insights about the rules that machine learning algorithms apply to problem-solving. Many real-world issues with algorithms result when an input that makes sense to humans results in a nonsensical answer. By showing that the opposite is also possible—that nonsensical inputs can also yield correct, sensible answers—Boyd-Graber and his colleagues demonstrate the need for algorithms that can recognize when they answer a nonsensical question with a high degree of confidence.

"The bottom line is that all this fancy machine learning stuff can actually be pretty stupid," said Boyd-Graber, who also has co-appointments at the University of Maryland Institute for Advanced Computer Studies (UMIACS) as well as UMD's College of Information Studies and Language Science Center. "When computer scientists train these models, we typically only show them real questions or real sentences. We don't show them nonsensical phrases or single words. The models don't know that they should be confused by these examples."

Most algorithms will force themselves to provide an answer, even with insufficient or conflicting data, according to Boyd-Graber. This could be at the heart of some of the incorrect or nonsensical outputs generated by machine learning algorithms—in model algorithms used for research, as well as real-world algorithms that help us by flagging spam email or offering alternate driving directions. Understanding more about these errors could help computer scientists find solutions and build more reliable algorithms.

"We show that models can be trained to know that they should be confused," Boyd-Graber said. "Then they can just come right out and say, 'You've shown me something I can't understand.'"

Provided by University of Maryland
Engineering Insights

Get an Insight into the world of engineering. Get to know about the trending topics in the field of engineering.
