Tuesday, 21 January 2020

Root User in Ubuntu: Important Things You Should Know

When you have just started using Linux, you’ll find many things that are different from Windows. One of those ‘different things’ is the concept of the root user.

Get to know a few important things about the root user in Ubuntu from It's FOSS.

Follow the links on the author's website to learn more.

Thursday, 9 January 2020

Researchers produce first laser ultrasound images of humans

Technique may help remotely image and assess health of infants, burn victims, and accident survivors in hard-to-reach places.

For most people, getting an ultrasound is a relatively easy procedure: As a technician gently presses a probe against a patient’s skin, sound waves generated by the probe travel through the skin, bouncing off muscle, fat, and other soft tissues before reflecting back to the probe, which detects and translates the waves into an image of what lies beneath.

Conventional ultrasound doesn’t expose patients to harmful radiation as X-ray and CT scanners do, and it’s generally noninvasive. But it does require contact with a patient’s body, and as such, may be limiting in situations where clinicians might want to image patients who don’t tolerate the probe well, such as babies, burn victims, or other patients with sensitive skin. Furthermore, ultrasound probe contact induces significant image variability, which is a major challenge in modern ultrasound imaging.

Now, MIT engineers have come up with an alternative to conventional ultrasound that doesn’t require contact with the body to see inside a patient. The new laser ultrasound technique leverages an eye- and skin-safe laser system to remotely image the inside of a person. When trained on a patient’s skin, one laser remotely generates sound waves that bounce through the body. A second laser remotely detects the reflected waves, which researchers then translate into an image similar to conventional ultrasound.

In a paper published today in the Nature journal Light: Science & Applications, the team reports generating the first laser ultrasound images of humans. The researchers scanned the forearms of several volunteers and observed common tissue features such as muscle, fat, and bone, down to about 6 centimetres below the skin. These images, comparable to conventional ultrasound, were produced using remote lasers focused on a volunteer from half a metre away.

“We’re at the beginning of what we could do with laser ultrasound,” says Brian W. Anthony, a principal research scientist in MIT’s Department of Mechanical Engineering and Institute for Medical Engineering and Science (IMES), a senior author on the paper. “Imagine we get to a point where we can do everything ultrasound can do now, but at a distance. This gives you a whole new way of seeing organs inside the body and determining properties of deep tissue, without making contact with the patient.”

Early concepts for non-contact laser ultrasound for medical imaging originated from a Lincoln Laboratory program established by Rob Haupt of the Active Optical Systems Group and Chuck Wynn of the Advanced Capabilities and Technologies Group, who are co-authors on the new paper along with Matthew Johnson. From there, the research grew via collaboration with Anthony and his students, Xiang (Shawn) Zhang, who is now an MIT postdoc and is the paper’s first author, and recent doctoral graduate Jonathan Fincke, who is also a co-author. The project combined the Lincoln Laboratory researchers’ expertise in laser and optical systems with the Anthony group's experience with advanced ultrasound systems and medical image reconstruction.

Yelling into a canyon — with a flashlight

In recent years, researchers have explored laser-based methods of ultrasound excitation in a field known as photoacoustics. Instead of directly sending sound waves into the body, the idea is to send in light, in the form of a pulsed laser tuned to a particular wavelength, that penetrates the skin and is absorbed by blood vessels.

The blood vessels rapidly expand and relax — instantly heated by a laser pulse then rapidly cooled by the body back to their original size — only to be struck again by another light pulse. The resulting mechanical vibrations generate sound waves that travel back up, where they can be detected by transducers placed on the skin and translated into a photoacoustic image.

While photoacoustics uses lasers to remotely probe internal structures, the technique still requires a detector in direct contact with the body in order to pick up the sound waves. What’s more, light can only travel a short distance into the skin before fading away. As a result, other researchers have used photoacoustics to image blood vessels just beneath the skin, but not much deeper.

Since sound waves travel further into the body than light, Zhang, Anthony, and their colleagues looked for a way to convert a laser beam’s light into sound waves at the surface of the skin, in order to image deeper in the body. 

Based on their research, the team selected 1,550-nanometre lasers, a wavelength that is highly absorbed by water (and is eye- and skin-safe with a large safety margin). Because skin is largely composed of water, the team reasoned that it should efficiently absorb this light, and heat up and expand in response. As it oscillates back to its normal state, the skin itself should produce sound waves that propagate through the body.

The researchers tested this idea with a laser setup, using one pulsed laser set at 1,550 nanometres to generate sound waves, and a second continuous laser, tuned to the same wavelength, to remotely detect reflected sound waves. This second laser is a sensitive motion detector that measures vibrations on the skin surface caused by the sound waves bouncing off muscle, fat, and other tissues. Skin surface motion, generated by the reflected sound waves, causes a change in the laser’s frequency, which can be measured. By mechanically scanning the lasers over the body, scientists can acquire data at different locations and generate an image of the region.
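The detection step works along the lines of a laser Doppler vibrometer: when the skin surface moves, the frequency of the reflected beam shifts in proportion to the surface velocity. As a rough, illustrative aid (not code from the study), the standard vibrometry relation, a frequency shift of 2v/λ, gives a feel for the signal sizes involved:

```python
# Illustrative only: standard laser Doppler vibrometry relation, not code from the study.
# A surface moving at velocity v shifts the reflected beam's frequency by 2*v/wavelength.

WAVELENGTH_M = 1550e-9  # detection laser wavelength, 1,550 nm

def doppler_shift_hz(surface_velocity_m_s: float) -> float:
    """Frequency shift of the reflected detection beam for a given skin-surface velocity."""
    return 2.0 * surface_velocity_m_s / WAVELENGTH_M

# Example: a surface vibration velocity of 1 mm/s shifts the beam by roughly 1.3 kHz.
print(f"{doppler_shift_hz(1e-3):.0f} Hz")
```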

“It’s like we’re constantly yelling into the Grand Canyon while walking along the wall and listening at different locations,” Anthony says. “That then gives you enough data to figure out the geometry of all the things inside that the waves bounced against — and the yelling is done with a flashlight.”

In-home imaging

The researchers first used the new setup to image metal objects embedded in a gelatin mould with roughly the water content of skin. They imaged the same gelatin using a commercial ultrasound probe and found both images were encouragingly similar. They moved on to image excised animal tissue — in this case, pig skin — where they found laser ultrasound could distinguish subtler features, such as the boundary between muscle, fat, and bone.

Finally, the team carried out the first laser ultrasound experiments in humans, using a protocol that was approved by the MIT Committee on the Use of Humans as Experimental Subjects. After scanning the forearms of several healthy volunteers, the researchers produced the first fully non-contact laser ultrasound images of a human. The fat, muscle, and tissue boundaries are clearly visible and comparable to images generated using commercial, contact-based ultrasound probes.

The researchers plan to improve their technique, and they are looking for ways to boost the system’s performance to resolve fine features in the tissue. They are also looking to hone the detection laser’s capabilities. Further down the road, they hope to miniaturise the laser setup, so that laser ultrasound might one day be deployed as a portable device.

“I can imagine a scenario where you’re able to do this in the home,” Anthony says. “When I get up in the morning, I can get an image of my thyroid or arteries, and can have in-home physiological imaging inside of my body. You could imagine deploying this in the ambient environment to get an understanding of your internal state.” 

This research was supported in part by the MIT Lincoln Laboratory Biomedical Line Program for the United States Air Force and by the U.S. Army Medical Research and Materiel Command's Military Operational Medicine Research Program.
Content credits: Jennifer Chu, http://news.mit.edu/

Monday, 6 January 2020

Tool predicts how fast code will run on a chip

The machine-learning system should enable developers to improve computing efficiency in a range of applications.
MIT researchers have invented a machine-learning tool that predicts how fast computer chips will execute code from various applications.  
To get code to run as fast as possible, developers and compilers — programs that translate programming language into machine-readable code — typically use performance models that run the code through a simulation of given chip architectures. 
Compilers use that information to automatically optimise code, and developers use it to tackle performance bottlenecks on the microprocessors that will run it. But performance models for machine code are handwritten by a relatively small group of experts and are not adequately validated. As a consequence, the simulated performance measurements often deviate from real-life results. 
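To make the idea of a hand-written performance model concrete, here is a deliberately simplified sketch (not Intel's model, and not the researchers' code): it assigns each instruction a fixed latency and sums them over a basic block, ignoring pipelining, dependencies, and port contention, which is exactly the kind of simplification that lets such models drift from real hardware.

```python
# A toy hand-written cost model (illustrative only): guess a latency per instruction
# and sum over the block. Real models must capture pipelining, micro-op fusion, and
# port contention, which is why hand-tuned tables tend to diverge from measurements.
TOY_LATENCIES = {"mov": 1, "add": 1, "imul": 3, "div": 20, "load": 5}

def estimate_cycles(basic_block: list[str]) -> int:
    """Estimate cycles for a basic block given as a list of instruction mnemonics."""
    return sum(TOY_LATENCIES.get(op, 1) for op in basic_block)

print(estimate_cycles(["load", "imul", "add", "mov"]))  # prints 10
```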
In a series of conference papers, the researchers describe a novel machine-learning pipeline that automates this process, making it easier, faster, and more accurate. In a paper presented at the International Conference on Machine Learning in June, the researchers presented Ithemal, a neural-network model that trains on labelled data in the form of “basic blocks” — fundamental snippets of computing instructions — to automatically predict how long it takes a given chip to execute previously unseen basic blocks. Results suggest Ithemal performs far more accurately than traditional hand-tuned models. 
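For readers who want a feel for the approach, the sketch below captures the general shape of such a model rather than the released Ithemal code: the assembly text of a basic block is tokenised, embedded, and passed through a recurrent network that regresses a single throughput number. All names and sizes here are illustrative assumptions.

```python
# Minimal sketch of an Ithemal-style throughput predictor; illustrative, not the released model.
import torch
import torch.nn as nn

class ThroughputPredictor(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)      # instruction/operand token -> vector
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)                   # single predicted cycle count

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.embed(token_ids)          # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.rnn(embedded)       # summary state for the whole basic block
        return self.head(hidden[-1]).squeeze(-1)  # one number per block

# Toy usage: a basic block encoded as token ids; training would regress against measured cycles.
model = ThroughputPredictor(vocab_size=1000)
fake_block = torch.randint(0, 1000, (1, 12))      # one block of 12 tokens
print(model(fake_block).item())                   # untrained prediction, shown only for shape
```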
Then, at the November IEEE International Symposium on Workload Characterization, the researchers presented a benchmark suite of basic blocks from a variety of domains, including machine learning, compilers, cryptography, and graphics, that can be used to validate performance models. They pooled more than 300,000 of the profiled blocks into an open-source dataset called BHive. During their evaluations, Ithemal predicted how fast Intel chips would run code even better than a performance model built by Intel itself. 
Ultimately, developers and compilers can use the tool to generate code that runs faster and more efficiently on an ever-growing number of diverse and “black box” chip designs. “Modern computer processors are opaque, horrendously complicated, and difficult to understand. It is also incredibly challenging to write computer code that executes as fast as possible for these processors,” says co-author Michael Carbin, an assistant professor in the Department of Electrical Engineering and Computer Science (EECS) and a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “This tool is a big step forward toward fully modelling the performance of these chips for improved efficiency.”
Most recently, in a paper presented at the NeurIPS conference in December, the team proposed a new technique to automatically generate compiler optimisations. Specifically, they automatically generate an algorithm, called Vemal, that converts certain code into vectors, which can be used for parallel computing. Vemal outperforms hand-crafted vectorisation algorithms used in the LLVM compiler — a popular compiler used in industry.
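Vectorisation here means turning scalar, one-element-at-a-time code into operations that process many elements per instruction. The snippet below only illustrates that idea in NumPy; Vemal itself makes vectorisation decisions inside the compiler rather than rewriting Python.

```python
# What vectorisation buys: the same computation written element by element
# and as a single wide operation. Concept illustration only; Vemal works on compiler IR.
import numpy as np

def scalar_sum(a, b):
    out = [0.0] * len(a)
    for i in range(len(a)):          # one element per loop iteration
        out[i] = a[i] + b[i]
    return out

def vectorised_sum(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    return a + b                     # many elements per underlying SIMD instruction

x, y = np.arange(8.0), np.ones(8)
assert scalar_sum(list(x), list(y)) == list(vectorised_sum(x, y))
```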
Learning from data
Designing performance models by hand can be “a black art,” Carbin says. Intel provides extensive documentation of more than 3,000 pages describing its chips’ architectures. But only a small group of experts currently builds the performance models that simulate the execution of code on those architectures. 
“Intel’s documents are neither error-free nor complete, and Intel will omit certain things, because it’s proprietary,” says Charith Mendis, a graduate student in EECS and CSAIL who led the work on Ithemal. “However, when you use data, you don’t need to know the documentation. If there’s something hidden, you can learn it directly from the data.”
To do so, the researchers clocked the average number of cycles a given microprocessor takes to compute basic block instructions — basically, the sequence of boot-up, execute, and shut down — without human intervention. Automating the process enables rapid profiling of hundreds of thousands or millions of blocks. 
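In outline, that profiling step amounts to running a block many times on the target chip and averaging the cycle counts, as in the hedged sketch below; `run_block_once` is a hypothetical stand-in for the actual measurement harness, which on x86 would read the hardware cycle counter around the block.

```python
# Hedged sketch of the profiling loop; run_block_once is a hypothetical helper that would
# execute one basic block inside a harness and return the elapsed hardware cycle count.
import random
import statistics

def profile_block(run_block_once, warmup: int = 100, trials: int = 10_000) -> float:
    for _ in range(warmup):                    # warm caches and branch predictors first
        run_block_once()
    samples = [run_block_once() for _ in range(trials)]
    return statistics.mean(samples)            # average cycles per execution of the block

# Toy usage with a stand-in "measurement" so the sketch runs end to end.
print(profile_block(lambda: 10 + random.random()))
```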
Domain-specific architectures
In training, the Ithemal model analyses millions of automatically profiled basic blocks to learn exactly how different chip architectures will execute computation. Importantly, Ithemal takes raw text as input and does not require manually adding features to the input data. In testing, Ithemal can be fed previously unseen basic blocks and a given chip and will generate a single number indicating how fast the chip will execute that code. 
The researchers found Ithemal cut error rates in accuracy — meaning the difference between predicted and real-world speed — by 50 per cent over traditional hand-crafted models. Further, in their next paper, they showed that Ithemal’s error rate was 10 per cent, while the Intel performance-prediction model’s error rate was 20 per cent on a variety of basic blocks across multiple different domains.
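The error rate quoted here is essentially the gap between predicted and measured throughput, relative to the measurement, averaged over many basic blocks. A small illustrative sketch of such a metric (the papers define their evaluation precisely):

```python
# Mean relative error between predicted and measured cycle counts; illustrative metric only.
def mean_relative_error(predicted: list[float], measured: list[float]) -> float:
    return sum(abs(p - m) / m for p, m in zip(predicted, measured)) / len(measured)

print(mean_relative_error([9.0, 21.0], [10.0, 20.0]))  # prints 0.075, i.e. 7.5 per cent
```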
The tool now makes it easier to quickly learn performance speeds for any new chip architectures, Mendis says. For instance, domain-specific architectures, such as Google’s new Tensor Processing Unit used specifically for neural networks, are now being built but aren’t widely understood. “If you want to train a model on some new architecture, you just collect more data from that architecture, run it through our profiler, use that information to train Ithemal, and now you have a model that predicts performance,” Mendis says.
Next, the researchers are studying methods to make models interpretable. Much of machine learning is a black box, so it’s not really clear why a particular model made its predictions. “Our model is saying it takes a processor, say, 10 cycles to execute a basic block. Now, we’re trying to figure out why,” Carbin says. “That’s a fine level of granularity that would be amazing for these types of tools.”
They also hope to use Ithemal to enhance the performance of Vemal even further and achieve better performance automatically.
This article was originally published by MIT News. Follow this link to read related articles.

Sunday, 5 January 2020

Welcome 2020



Stay tuned for the fascinating changes at www.engineeringinsights.in
