Wednesday, 8 December 2021

This robot will check your vital signs


Spot, the robot-dog, has been used in a hospital trial to triage potential Covid-19 patients. The intriguing thing is that patients seemed to be at ease with it. Image credit: Boston Dynamics


Boston Dynamics' Spot robot looks like a dog (at least like a mechanical dog) and was designed to operate in complex environments, where "complex" means uneven pavement cluttered with "stuff" (including people). This is pretty challenging for a robot: it requires both awareness of what is going on around it and the capability to move around, avoiding obstacles, taking stairs, even jumping over a barrier.


Spot is pretty good at this (watch the clip): it can walk on slippery pavements, go up a flight of stairs, find its way around obstacles and avoid bumping into people.


If you ever stop to consider the inside of a hospital, you can easily perceive how complex it is to move around: carts left in the way, nurses dashing here and there... Just the kind of place where a robot would have a challenging time.


That's exactly the place where Spot might shine! A joint team from MIT, Boston Dynamics and Brigham and Women's Hospital has set out to test both Spot's capabilities in a hospital context and its acceptance by people (patients, medical staff and visitors). And not just to see how it can blend in, but to test how much it can help.


They have equipped Spot with sensors to check patients' vital signs, placed a tablet where a dog's head would be, and use its screen and camera to let medical staff communicate with patients. One application tested was the triage of incoming patients for possible Covid-19 infection, cutting down potentially dangerous exposure for staff.


Spot is equipped with four video cameras and temperature sensors able to measure pulse, breathing rate and blood oxygen saturation from as far as 2 metres away from the patient. This distance is important to avoid contamination (although Spot is also frequently sterilised with ultraviolet light).
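
The article does not describe the team's signal processing, but the general idea behind camera-based breathing measurement is well established: the rise and fall of the chest shows up as a slow periodic oscillation in the video, and counting its cycles over a known duration gives breaths per minute. The sketch below is a hypothetical illustration of that idea only, not the MIT/Boston Dynamics pipeline; the motion signal, thresholds and frame rate are assumptions.

```python
# Hypothetical illustration only (not the hospital team's actual pipeline):
# estimate breathing rate by counting peaks in a slow motion signal derived
# from video frames of a patient's chest region.
import numpy as np
from scipy.signal import find_peaks

def breaths_per_minute(chest_motion, fps=30.0):
    """chest_motion: 1-D array with one value per frame, e.g. the mean
    absolute frame-to-frame pixel difference inside a chest region."""
    x = np.asarray(chest_motion, dtype=float)
    x = x - x.mean()
    # Breathing is slow (roughly 0.1-0.7 Hz), so require peaks to be at
    # least ~1.4 s apart and reasonably prominent to ignore jitter.
    peaks, _ = find_peaks(x, distance=int(1.4 * fps), prominence=0.5)
    duration_min = len(x) / fps / 60.0
    return len(peaks) / duration_min

# Example: a synthetic 15-breaths-per-minute signal (0.25 Hz) plus noise.
t = np.arange(0, 60, 1 / 30.0)
motion = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(len(t))
print(round(breaths_per_minute(motion)))   # ~15
```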


Additionally, the Spot doctor/nurse can move around the hospital making rounds to check vital signs, including the look of patients: AI-based image recognition software can spot visual signs of trouble by looking at a patient's face.


Interestingly, Spot not only proved helpful (that part was easy to predict); it also seems that people accepted its presence, with patients looking forward to its rounds and even attempting some chatting. It might be, as the researchers were ready to admit, that its acceptance in the hospital was also fostered by the difficult times we are going through, when help, any help, is welcome.



Read the original article here

Tuesday, 2 November 2021

How OEMs and Others Can Evaluate Field Service Management Technology


The field service market lies at the intersection of customer service and support software. Providers are responsible for dispatching technicians to remote locations to provide installation, repair or maintenance services for equipment or systems. Field service management (FSM) technology helps providers manage and monitor owned and customer assets to deliver business outcomes and seamless customer experiences. When evaluating FSM technology, assess the following criteria to make an informed decision about your service transformation partner.

Look for consistent growth

As more business and consumer commerce migrates online and the field service industry navigates labor shortages, it is important to review an FSM company's growth. A technology partner that is continually expanding its offerings and market footprint will better serve customers down the line. The seamless integration of management technology into an organisation's customer relationship management (CRM) system and other back-end programs is necessary for optimal workflows, but can create high barriers to entry, making it more cost-effective and productive to integrate the right system the first time.

When evaluating service providers, look at the speed of revenue generation, which offers insight into year-over-year growth. Additionally, gauging the market verticals a partner serves can offer a view into the scope of a provider's portfolio, which can be helpful in determining whether they can serve industry objectives.

Assess the subcontractor ecosystem

The key to productive and effective field service is flexibility. Many providers must offer the ability to cover various regions at off-peak hours and service a myriad of job requests that vary in skill level. Combined with the industry's continually ageing workforce, this makes it important that field service management companies can call on blended workforces and integrate quality contractors into their staff. When choosing an FSM partner, it is essential to work with providers that possess the comprehensive functionality to support the intelligent management of blended workforces, contractor onboarding, schedule optimisation and a network of available services that can be called upon for certain industries and geographies.

Additionally, a good FSM partner will not only coordinate their workforce but inform and enable technicians to provide the best service. Mobile applications and devices offer GPS tracking, telematics, knowledge management integration and work instruction management. Organizations that provide remote expert guidance for technicians and customers in the field through remote video and augmented reality (AR)-based communications systems will keep pace with technology and outlast competitors.

Evaluate the product line

Field service management products operate across multiple channels to provide holistic communication to original equipment manufacturers (OEMs), dispatchers, technicians and customers. Evaluating the digital product offerings will give companies an idea of whether a technology partner can provide end-to-end service and integrate well into established business practices. A quality FSM partner can tailor its products, integration packaging and template configurations to different sizes of customer, different industries and different workforce compositions.

A strong and varied product line will offer websites, supply chain solutions, third-party service-brokering solutions and analytics that handle customer relationship data, leverage IoT integration and offer workforce, vendor and product lifecycle management to supply superior service throughout the customer journey.

The recently published Gartner Magic Quadrant report for Field Service Management shares the latest market and consumer trends affecting the service management landscape and assesses the value of leading field service management companies.

This article was originally published on automation.com

Sunday, 12 September 2021

Toward next-generation brain-computer interface systems

Brain-computer interfaces (BCIs) are emerging assistive devices that may one day help people with brain or spinal injuries to move or communicate. BCI systems depend on implantable sensors that record electrical signals in the brain and use those signals to drive external devices like computers or robotic prosthetics.

Abstract concept illustrating brain-computer interface (stock image). Credit: © Dana.S / stock.adobe.com


Most current BCI systems use one or two sensors to sample up to a few hundred neurons, but neuroscientists are interested in systems that are able to gather data from much larger groups of brain cells.


Now, a team of researchers has taken a key step toward a new concept for a future BCI system -- one that employs a coordinated network of independent, wireless microscale neural sensors, each about the size of a grain of salt, to record and stimulate brain activity. The sensors, dubbed "neurograins," independently record the electrical pulses made by firing neurons and send the signals wirelessly to a central hub, which coordinates and processes the signals.


In a study published on August 12 in Nature Electronics, the research team demonstrated the use of nearly 50 such autonomous neurograins to record neural activity in a rodent.


The results, the researchers say, are a step toward a system that could one day enable the recording of brain signals in unprecedented detail, leading to new insights into how the brain works and new therapies for people with brain or spinal injuries.


"One of the big challenges in the field of brain-computer interfaces is engineering ways of probing as many points in the brain as possible," said Arto Nurmikko, a professor in Brown's School of Engineering and the study's senior author. "Up to now, most BCIs have been monolithic devices -- a bit like little beds of needles. Our team's idea was to break up that monolith into tiny sensors that could be distributed across the cerebral cortex. That's what we've been able to demonstrate here."

The team, which includes experts from Brown, Baylor University, University of California at San Diego and Qualcomm, began the work of developing the system about four years ago. The challenge was two-fold, said Nurmikko, who is affiliated with Brown's Carney Institute for Brain Science. The first part required shrinking the complex electronics involved in detecting, amplifying and transmitting neural signals into the tiny silicon neurograin chips. The team first designed and simulated the electronics on a computer, and went through several fabrication iterations to develop operational chips.


The second challenge was developing the body-external communications hub that receives signals from those tiny chips. The device is a thin patch, about the size of a thumb print, that attaches to the scalp outside the skull. It works like a miniature cellular phone tower, employing a network protocol to coordinate the signals from the neurograins, each of which has its own network address. The patch also supplies power wirelessly to the neurograins, which are designed to operate using a minimal amount of electricity.
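
The article does not spell out the network protocol, so the snippet below is only a rough, hypothetical sketch of the coordination idea it describes: each neurograin has its own address, and the external hub cycles through those addresses, collecting one packet per sensor per cycle and merging everything into a single time-stamped stream. The packet format, rates and class names are invented for illustration.

```python
# Rough, hypothetical sketch of a hub coordinating many addressed sensors.
# The real neurograin link layer, data rates and packet format are not given
# in this summary, so everything below is illustrative only.
from dataclasses import dataclass
import random

@dataclass
class SensorPacket:
    address: int      # each neurograin has its own network address
    sample: float     # one digitized snippet of neural signal

class MockNeurograin:
    """Stand-in for a wireless microscale sensor."""
    def __init__(self, address: int):
        self.address = address

    def read(self) -> SensorPacket:
        # Placeholder for a real wireless read-out.
        return SensorPacket(self.address, random.gauss(0.0, 1.0))

class Hub:
    """External patch: polls each sensor in turn and time-stamps the result."""
    def __init__(self, sensors):
        self.sensors = sensors

    def collect_cycle(self, t: int):
        return [(t, s.read()) for s in self.sensors]

grains = [MockNeurograin(addr) for addr in range(48)]  # 48 used in the study
hub = Hub(grains)
for t in range(3):                                     # three polling cycles
    packets = hub.collect_cycle(t)
    print(f"cycle {t}: {len(packets)} packets")
```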


"This work was a true multidisciplinary challenge," said Jihun Lee, a postdoctoral researcher at Brown and the study's lead author. "We had to bring together expertise in electromagnetics, radio frequency communication, circuit design, fabrication and neuroscience to design and operate the neurograin system."


The goal of this new study was to demonstrate that the system could record neural signals from a living brain -- in this case, the brain of a rodent. The team placed 48 neurograins on the animal's cerebral cortex, the outer layer of the brain, and successfully recorded characteristic neural signals associated with spontaneous brain activity.

The team also tested the devices' ability to stimulate the brain as well as record from it. Stimulation is done with tiny electrical pulses that can activate neural activity. The stimulation is driven by the same hub that coordinates neural recording and could one day restore brain function lost to illness or injury, researchers hope.


The size of the animal's brain limited the team to 48 neurograins for this study, but the data suggest that the current configuration of the system could support up to 770. Ultimately, the team envisions scaling up to many thousands of neurograins, which would provide a currently unattainable picture of brain activity.


“It was a challenging endeavor, as the system demands simultaneous wireless power transfer and networking at the megabit-per-second rate, and this has to be accomplished under extremely tight silicon area and power constraints,” said Vincent Leung, an associate professor in the Department of Electrical and Computer Engineering at Baylor. “Our team pushed the envelope for distributed neural implants.”


There's much more work to be done to make that complete system a reality, but researchers said this study represents a key step in that direction.


"Our hope is that we can ultimately develop a system that provides new scientific insights into the brain and new therapies that can help people affected by devastating injuries," Nurmikko said.


Other co-authors on the research were Ah-Hyoung Lee (Brown), Jiannan Huang (UCSD), Peter Asbeck (UCSD), Patrick P. Mercier (UCSD), Stephen Shellhammer (Qualcomm), Lawrence Larson (Brown) and Farah Laiwalla (Brown). The research was supported by the Defense Advanced Research Projects Agency (N66001-17-C-4013).


Materials provided by Brown University. Note: Content may be edited for style and length.

Brown University. "Toward next-generation brain-computer interface systems." ScienceDaily. ScienceDaily, 12 August 2021. <www.sciencedaily.com/releases/2021/08/210812135910.htm>.

Saturday, 19 June 2021

The Double-Diamond Model of Design

Designers often start by questioning the problem given to them: they expand the scope of the problem, diverging to examine all the fundamental issues that underlie it. Then they converge upon a single problem statement. During the solution phase of their studies, they first expand the space of possible solutions, the divergence phase. Finally, they converge upon a proposed solution (Figure 6.1). This double diverge-converge pattern was first introduced in 2005 by the British Design Council, which called it the double-diamond design process model. 
Figure 6.1. The Double-Diamond Model of Design. Start with an idea, and through the initial design research, expand the thinking to explore the fundamental issues. Only then is it time to converge upon the real, underlying problem. Similarly, use design research tools to explore a wide variety of solutions before converging upon one. (Slightly modified from the work of the British Design Council, 2005.)

The Design Council divided the design process into four stages: “discover” and “define”—for the divergence and convergence phases of finding the right problem, and “develop” and “deliver”—for the divergence and convergence phases of finding the right solution. The double diverge-converge process is quite effective at freeing designers from unnecessary restrictions to the problem and solution spaces. But you can sympathize with a product manager who, having given the designers a problem to solve, finds them questioning the assignment and insisting on travelling all over the world to seek deeper understanding. Even when the designers start focusing upon the problem, they do not seem to make progress, but instead develop a wide variety of ideas and thoughts, many only half-formed, many clearly impractical. All this can be rather unsettling to the product manager who, concerned about meeting the schedule, wants to see immediate convergence. 

To add to the frustration of the product manager, as the designers start to converge upon a solution, they may realize that they have inappropriately formulated the problem, so the entire process must be repeated (although it can go more quickly this time). This repeated divergence and convergence is important in properly determining the right problem to be solved and then the best way to solve it. It looks chaotic and ill-structured, but it actually follows well-established principles and procedures. How does the product manager keep the entire team on schedule despite the apparently random and divergent methods of designers? Encourage their free exploration, but hold them to the schedule (and budget) constraints. There is nothing like a firm deadline to get creative minds to reach convergence. 


Extracted from The Design of Everyday Things by Don Norman

Sunday, 18 April 2021

Smartphone Camera Senses Patients' Pulse, Breathing Rate

AI app could enable doctors to take contactless vitals during telemedicine visits


Telehealth visits increased dramatically when the pandemic began—by over 4000% in the U.S., by one account. But there’s a limit to what doctors can accomplish during these virtual appointments. Namely, they can’t check patients’ vital signs over the phone.

But new technologies in the works could change that by equipping phones with reliable software that can measure a person’s key biometrics. This month at a conference held by the Association for Computing Machinery, researchers presented machine learning systems that can generate a personalized model to measure heart and breathing rates based on a short video taken with a smartphone camera.  

With just an 18-second video clip of a person’s head and shoulders, the algorithm can determine heart rate, or pulse, based on the changes in light intensity reflected off the skin. Breathing rate, or respiration, is gleaned from the rhythmic motion of their head, shoulders and chest. 
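
The researchers' personalized model is not published in this article, but the classical baseline it builds on is easy to sketch: average the brightness of a skin region in each frame, band-pass the resulting signal to the plausible heart-rate range, and read off the dominant frequency. The code below is a minimal, generic illustration of that idea, with an assumed frame rate and filter settings; it is not the system described here.

```python
# Minimal sketch of classical remote photoplethysmography (rPPG):
# recover a pulse rate from the mean brightness of a skin region over time.
# This is a generic baseline, not the personalized model from the study.
import numpy as np
from scipy.signal import butter, filtfilt

def pulse_rate_bpm(skin_brightness, fps=30.0):
    """skin_brightness: 1-D array of the mean green-channel value of a face
    region, one value per frame (e.g. ~18 s * 30 fps = 540 samples)."""
    x = np.asarray(skin_brightness, dtype=float)
    x = x - x.mean()                                  # remove the DC offset
    # Band-pass 0.7-4 Hz (42-240 beats per minute), the plausible pulse band.
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    x = filtfilt(b, a, x)
    # Dominant frequency of the filtered signal -> beats per minute.
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return 60.0 * freqs[np.argmax(spectrum)]

# Example: a synthetic 72-beats-per-minute pulse (1.2 Hz) buried in noise.
t = np.arange(0, 18, 1 / 30.0)
brightness = 0.05 * np.sin(2 * np.pi * 1.2 * t) + 0.02 * np.random.randn(len(t))
print(pulse_rate_bpm(brightness))   # close to 72 (limited by the 18 s window)
```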

Daniel McDuff, a principal researcher at Microsoft Research, and PhD student Xin Liu at the University of Washington developed the system. “Currently there’s no way to do remote vitals collection except for a very small minority of patients who have medical-grade devices at home,” such as a pulse oximeter to detect heart rate and blood oxygen level, or a blood pressure cuff, says McDuff.

Most people don’t own those devices, so for the vast majority of virtual appointments, patients must arrange separate in-person appointments to get these measurements. “That’s doubly inefficient. It takes twice the amount of time as a typical in-person visit, and with less human interaction,” McDuff says.

Video-based software that can collect vitals during a telehealth appointment would greatly streamline virtual health care. Work on this type of technology arose around 2007, when digital cameras became sensitive enough to pick up small pixel-level changes in skin that indicate blood volume. The field saw a fresh wave of interest after telehealth visits increased during the early part of the COVID-19 pandemic.

Several groups globally have been developing non-contact, video-based vitals sensing. A group out of Oxford is developing optical remote monitoring of vitals for patients in hospital intensive care units or undergoing kidney dialysis. Rice University researchers are developing a device that monitors vehicle drivers for heart attacks.

Google in February announced that its Android-based health tracking platform Google Fit will measure heart and respiratory rate using the phone’s camera. The user places a finger over the rear-facing camera on the phone to get heart rate, and a video of the user’s face gathers breathing rate. The software is meant for wellness purposes rather than medical use or doctor visits. 

The challenge facing researchers in this field is developing technologies that work consistently at a high level of accuracy in real-world settings, where faces and lighting vary. The approach developed by McDuff and Liu aims to address that. 

In their approach, heart rate is determined by measuring light reflected from the skin. “Variations in blood volume influence how light is reflected from the skin,” says McDuff. “So the camera is picking up micro-changes in light intensity and that can be used to recover a pulse signal. From that, we can derive heart rate variation and detect things like arrhythmias.”

The algorithm must account for variables such as skin colour, facial hair, lighting, and clothing. Those tend to trip up just about any kind of facial recognition technology, in part because the datasets on which machine learning algorithms are trained aren’t representative of our diverse population. 

McDuff’s model faces an added challenge: “Darker skin types have higher melanin, so the light reflectance intensity is going to be lower because more light is absorbed,” he says. That results in a weaker signal-to-noise ratio, making it harder to detect the pulse signal. “So it's about having a representative training set, and there’s also a fundamental physical challenge we need to solve here,” says McDuff.

To address this challenge, the team developed a system with a personalized machine learning algorithm for each individual. “We proposed a learning algorithm to learn a person’s physiological signals quickly,” says Liu. The system can provide results with just 18 seconds of video, he says.
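
The specific learning algorithm is not detailed in this article, so the following is only a hedged sketch of the general idea behind per-person calibration: take a small generic model and run a few quick gradient steps on a short calibration clip so that it adapts to that individual. The network, feature shapes and training settings are all invented for illustration.

```python
# Hypothetical sketch of per-person calibration: a few quick gradient steps
# on a short (~18 s) clip, in the spirit of "learning a person's physiological
# signals quickly". The model, shapes and settings are illustrative only.
import torch
import torch.nn as nn

class TinyPulseNet(nn.Module):
    """Maps a window of per-frame skin-intensity features to a pulse value."""
    def __init__(self, window=90):            # ~3 s of 30 fps video
        super().__init__()
        self.net = nn.Sequential(nn.Linear(window, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x)

def personalize(model, calib_x, calib_y, steps=50, lr=1e-3):
    """Adapt the model to one person using their calibration windows."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(calib_x), calib_y)
        loss.backward()
        opt.step()
    return model

# Stand-in calibration data; real use would pass features extracted from the
# person's video plus a reference pulse signal recorded at the same time.
model = TinyPulseNet()
calib_x = torch.randn(64, 90)   # 64 windows of per-frame intensity features
calib_y = torch.randn(64, 1)    # matching reference pulse values
model = personalize(model, calib_x, calib_y)
```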

Compared with a standard medical-grade device, the proposed method has a mean absolute error of one to three beats per minute in estimating heart rate, Liu says. This is acceptable in many applications.   

The system isn’t ready for medical use and will need to be validated in clinical trials. To improve the robustness of the system, one approach the team is taking is to train models on computer-generated images. “We can actually synthesize high fidelity avatars that exhibit these blood flow patterns and respiration patterns, and we can train our algorithm on the computer-generated data,” says McDuff.  

The technology could have both medical and fitness applications, the researchers say. In addition to telehealth visits, remote vitals can be useful for people with chronic health conditions who need frequent, accurate biometric measurements. 

Article originally published on IEEE Spectrum

Tuesday, 21 January 2020

Root User in Ubuntu: Important Things You Should Know



When you have just started using Linux, you’ll find many things that are different from Windows. One of those ‘different things’ is the concept of the root user.

Get to know a few important things about the root user in Ubuntu from It's Foss.

Follow the links on the author's website to learn more.

Thursday, 9 January 2020

Researchers produce first laser ultrasound images of humans

Technique may help remotely image and assess health of infants, burn victims, and accident survivors in hard-to-reach places.

For most people, getting an ultrasound is a relatively easy procedure: As a technician gently presses a probe against a patient’s skin, sound waves generated by the probe travel through the skin, bouncing off muscle, fat, and other soft tissues before reflecting back to the probe, which detects and translates the waves into an image of what lies beneath.

Conventional ultrasound doesn’t expose patients to harmful radiation as X-ray and CT scanners do, and it’s generally noninvasive. But it does require contact with a patient’s body, and as such, may be limiting in situations where clinicians might want to image patients who don’t tolerate the probe well, such as babies, burn victims, or other patients with sensitive skin. Furthermore, ultrasound probe contact induces significant image variability, which is a major challenge in modern ultrasound imaging.

Now, MIT engineers have come up with an alternative to conventional ultrasound that doesn’t require contact with the body to see inside a patient. The new laser ultrasound technique leverages an eye- and skin-safe laser system to remotely image the inside of a person. When trained on a patient’s skin, one laser remotely generates sound waves that bounce through the body. A second laser remotely detects the reflected waves, which researchers then translate into an image similar to conventional ultrasound.

In a paper published today by Nature in the journal Light: Science and Applications, the team reports generating the first laser ultrasound images in humans. The researchers scanned the forearms of several volunteers and observed common tissue features such as muscle, fat, and bone, down to about 6 centimetres below the skin. These images, comparable to conventional ultrasound, were produced using remote lasers focused on a volunteer from half a meter away.

“We’re at the beginning of what we could do with laser ultrasound,” says Brian W. Anthony, a principal research scientist in MIT’s Department of Mechanical Engineering and Institute for Medical Engineering and Science (IMES), a senior author on the paper. “Imagine we get to a point where we can do everything ultrasound can do now, but at a distance. This gives you a whole new way of seeing organs inside the body and determining properties of deep tissue, without making contact with the patient.”

Early concepts for non-contact laser ultrasound for medical imaging originated from a Lincoln Laboratory program established by Rob Haupt of the Active Optical Systems Group and Chuck Wynn of the Advanced Capabilities and Technologies Group, who are co-authors on the new paper along with Matthew Johnson. From there, the research grew via collaboration with Anthony and his students, Xiang (Shawn) Zhang, who is now an MIT postdoc and is the paper's first author, and recent doctoral graduate Jonathan Fincke, who is also a co-author. The project combined the Lincoln Laboratory researchers' expertise in laser and optical systems with the Anthony group's experience with advanced ultrasound systems and medical image reconstruction.

Yelling into a canyon — with a flashlight

In recent years, researchers have explored laser-based methods for ultrasound excitation in a field known as photoacoustics. Instead of directly sending sound waves into the body, the idea is to send in light, in the form of a pulsed laser tuned to a particular wavelength, that penetrates the skin and is absorbed by blood vessels.

The blood vessels rapidly expand and relax — instantly heated by a laser pulse then rapidly cooled by the body back to their original size — only to be struck again by another light pulse. The resulting mechanical vibrations generate sound waves that travel back up, where they can be detected by transducers placed on the skin and translated into a photoacoustic image.

While photoacoustics uses lasers to remotely probe internal structures, the technique still requires a detector in direct contact with the body in order to pick up the sound waves. What's more, light can only travel a short distance into the skin before fading away. As a result, other researchers have used photoacoustics to image blood vessels just beneath the skin, but not much deeper.

Since sound waves travel further into the body than light, Zhang, Anthony, and their colleagues looked for a way to convert a laser beam’s light into sound waves at the surface of the skin, in order to image deeper in the body. 

Based on their research, the team selected 1,550-nanometer lasers, a wavelength that is highly absorbed by water (and is eye- and skin-safe with a large safety margin). As skin is essentially composed of water, the team reasoned that it should efficiently absorb this light, and heat up and expand in response. As it oscillates back to its normal state, the skin itself should produce sound waves that propagate through the body.

The researchers tested this idea with a laser setup, using one pulsed laser set at 1,550 nanometers to generate sound waves, and a second continuous laser, tuned to the same wavelength, to remotely detect reflected sound waves. This second laser is a sensitive motion detector that measures vibrations on the skin surface caused by the sound waves bouncing off muscle, fat, and other tissues. Skin surface motion, generated by the reflected sound waves, causes a change in the laser's frequency, which can be measured. By mechanically scanning the lasers over the body, scientists can acquire data at different locations and generate an image of the region.
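
As a rough illustration of the reconstruction step (not the team's actual processing chain): once the detection laser records an echo arrival time at each scan position, the reflector depth follows from the round-trip time and the speed of sound in soft tissue (about 1540 m/s), and stacking those depth profiles side by side gives a simple image. The scan positions and echo times below are made up.

```python
# Simplified, hypothetical sketch of the time-of-flight relationship behind
# ultrasound imaging: depth = speed_of_sound * round_trip_time / 2.
SPEED_OF_SOUND_TISSUE = 1540.0   # m/s, a common soft-tissue average

def echo_depth_m(arrival_time_s: float) -> float:
    """Depth of a reflector given the round-trip echo time."""
    return SPEED_OF_SOUND_TISSUE * arrival_time_s / 2.0

# Example: echoes from a surface ~3 cm deep arrive after ~39 microseconds.
scan_positions_mm = [0, 1, 2, 3, 4]                 # lateral steps of the lasers
arrival_times_us = [39.0, 38.6, 38.2, 38.5, 39.1]   # made-up echo times
for x_mm, t_us in zip(scan_positions_mm, arrival_times_us):
    depth_cm = echo_depth_m(t_us * 1e-6) * 100.0
    print(f"x = {x_mm} mm -> reflector at {depth_cm:.2f} cm")
```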

“It’s like we’re constantly yelling into the Grand Canyon while walking along the wall and listening at different locations,” Anthony says. “That then gives you enough data to figure out the geometry of all the things inside that the waves bounced against — and the yelling is done with a flashlight.”

In-home imaging

The researchers first used the new setup to image metal objects embedded in a gelatin mold roughly resembling skin’s water content. They imaged the same gelatin using a commercial ultrasound probe and found both images were encouragingly similar. They moved on to image excised animal tissue — in this case, pig skin — where they found laser ultrasound could distinguish subtler features, such as the boundary between muscle, fat, and bone.

Finally, the team carried out the first laser ultrasound experiments in humans, using a protocol that was approved by the MIT Committee on the Use of Humans as Experimental Subjects. After scanning the forearms of several healthy volunteers, the researchers produced the first fully non-contact laser ultrasound images of a human. The fat, muscle, and tissue boundaries are clearly visible and comparable to images generated using commercial, contact-based ultrasound probes.

The researchers plan to improve their technique, and they are looking for ways to boost the system’s performance to resolve fine features in the tissue. They are also looking to hone the detection laser’s capabilities. Further down the road, they hope to miniaturise the laser setup, so that laser ultrasound might one day be deployed as a portable device.

“I can imagine a scenario where you’re able to do this in the home,” Anthony says. “When I get up in the morning, I can get an image of my thyroid or arteries, and can have in-home physiological imaging inside of my body. You could imagine deploying this in the ambient environment to get an understanding of your internal state.” 

This research was supported in part by the MIT Lincoln Laboratory Biomedical Line Program for the United States Air Force and by the U.S. Army Medical Research and Materiel Command's Military Operational Medicine Research Program.
Content credits: Jennifer Chu, http://news.mit.edu/