Wearable technology has advanced to the point that it is no longer just a technology of the future. From Apple Watches to Fitbit trackers, these devices have become integrated into our daily lives. How would you feel if we told you that you could wear a stethoscope?
Jae-Woong Jeong, a professor of engineering at the University of Colorado Boulder, together with Yonggang Huang and John Rogers of Northwestern University, has developed wearable epidermal mechano-acoustic sensors that can monitor heartbeats and recognize speech. The sensors can stick to almost any surface of the body, even curved areas like the neck. Devices that integrate with an external surface of the body in this way are called epidermal electronics, and recent progress has greatly expanded the range of physiological measurements possible with wearable device platforms.
There have been many developments in skin-mounted electronics that integrate electrophysiological sensors such as electrocardiogram (ECG) and electromyogram (EMG) sensors, temperature sensors, strain sensors, and many others. According to Jeong, however, sensing of acoustic signals from the body had not been explored before; their work concerned “the development and investigation of wearable acoustic sensors.”
This device picks up mechanical waves that propagate through tissues and fluids in the human body, and these waves carry characteristic acoustic signatures that help diagnose cardiovascular diseases. For instance, it can recognize and record the opening and closing of heart valves, vibrations of the vocal cords, contractions of skeletal muscles, and movement in the gastrointestinal tract.
Existing wearable electronic devices that measure the rate and rhythm of heartbeats using ECG technology have limitations in diagnosing heart failure. Structural defects in heart valves that do not appear in ECG signals, however, can be picked up using acoustic signals.
In this way, physiological mechano-acoustic signals provide useful information for clinical diagnosis. Clinical tests at Camp Lowell Cardiology recorded cardiac mechano-acoustic responses alongside ECG signals from eight patient volunteers diagnosed with cardiac valvular stenosis or regurgitation. All vibration signals were converted from output voltage to a unitless “mechano-acoustic response (arbitrary units),” and the researchers were able to detect heart murmurs.
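To picture what that conversion might look like, here is a minimal Python sketch, assuming the raw sensor output is simply a voltage time series; the actual processing used in the study may differ.

```python
import numpy as np

def to_arbitrary_units(voltage):
    """Convert a raw output-voltage trace to a unitless mechano-acoustic response.

    The DC offset is removed and the trace is scaled by its peak magnitude,
    so recordings from different patients can be compared on a common axis.
    (Illustrative normalization only; not the study's actual procedure.)
    """
    centered = voltage - np.mean(voltage)
    peak = np.max(np.abs(centered))
    return centered / peak if peak > 0 else centered
```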
Figure 1. An epidermal mechano-acoustic device works like a stretchable patch; it can be applied onto any external surface of the body for up to two weeks. / ©Liu et al., 2016
This device not only supports human studies on patients with cardiovascular diseases but also enables human-machine interfaces, such as video game control using voice commands. The device makes speech recognition possible by capturing vibrations of the larynx (the “voice box”) without interference from ambient background noise. As an example application, an isolated word detection system paired with the epidermal mechano-acoustic sensor was used to play Pac-Man.
The key characteristics of the device's materials and structures are soft mechanics, water permeability, adhesiveness, and biocompatible surfaces, all needed for comfortable and durable integration on the skin. Low mass is also important: added mass at the sensor increases the mechanical loading at the skin interface and thereby damps the very mechano-acoustic motions being measured. Further testing showed that loud ambient noise did not affect the sensor, which means it could be used for communication in loud environments.
Weighing only 216.6 mg and measuring 2 mm in thickness, this mechano-acoustic electrophysiological sensing platform is made of soft, stretchable material with miniaturized, low-power accelerometers whose 0.5-550 Hz bandwidth covers the frequency range of the targeted cardiovascular sounds and speech.
The device incorporates a 3-axis accelerometer (Analog Devices ADXL335), a preamplifier (STMicroelectronics TSV991A), resistors, capacitors, low-pass and high-pass filters (for removing motion artifacts), and removable, reusable capacitive electrodes for electrophysiology (EP) recording.
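As a rough software analogue of that filtering stage, the sketch below (a hypothetical illustration, not the authors' actual pipeline) uses SciPy's Butterworth filter design to band-pass a raw accelerometer trace to the 0.5-550 Hz band, suppressing slow motion artifacts below the band of interest.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_mechano_acoustic(signal, fs, low_hz=0.5, high_hz=550.0, order=4):
    """Band-pass a raw accelerometer trace to the 0.5-550 Hz band.

    Frequencies below low_hz (slow body motion) and above high_hz
    (outside the sensor bandwidth) are attenuated.
    """
    nyquist = fs / 2.0
    sos = butter(order, [low_hz / nyquist, high_hz / nyquist],
                 btype="bandpass", output="sos")
    # Zero-phase filtering avoids shifting the timing of heart sounds.
    return sosfiltfilt(sos, signal)

# Example: a synthetic 5-second trace sampled at 1600 Hz containing a slow
# motion artifact (0.2 Hz drift) plus a 30 Hz "heart sound" component.
fs = 1600
t = np.arange(0, 5, 1 / fs)
raw = 0.5 * np.sin(2 * np.pi * 0.2 * t) + 0.05 * np.sin(2 * np.pi * 30 * t)
clean = bandpass_mechano_acoustic(raw, fs)  # drift removed, 30 Hz kept
```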
Figure 2. Circuit layout of the device before and after chip bonding / ©Liu et al., 2016
Figure 3. Circuit diagram of the device / ©Liu et al., 2016
Figure 4. Exploded view of the structure of the device (top). Illustration of the assembled device (bottom). / ©Liu et al., 2016
The fabrication process comprises three parts: (i) patterning the circuit interconnects; (ii) transfer-printing and chip-bonding onto a soft core/shell substrate; and (iii) covering the top surface with a similar soft core/shell structure. Copper (Cu) traces 3 μm thick are sandwiched between two 1.2 μm layers of polyimide (PI). The circuits and the micro-scale chips are encapsulated by an ultra-low-modulus elastomer (Silbione RT Gel), with a low-modulus silicone (Ecoflex) layered on the top and bottom of the encapsulation. This double encapsulation improves the flexibility and stretchability of the device.
Figure 5. Cross-sectional schematic illustration of the assembled device / ©Liu et al., 2016
Speech-based human-machine interface
When placed on the skin, the device captures both EMG signals from the articulator muscles and acoustic vibrations from the vocal cords simultaneously. Recordings of a person speaking the four words “left,” “right,” “up,” and “down” showed distinct time-frequency characteristics for each word. Close contact between the sensor and the skin prevented interference from ambient noise.
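Those time-frequency characteristics can be visualized with a short-time Fourier transform. The snippet below is a generic sketch; the window lengths and function names are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np
from scipy.signal import spectrogram

def word_spectrogram(vibration, fs, window_ms=25, overlap_ms=15):
    """Compute a time-frequency representation of a vocal-cord vibration trace."""
    nperseg = int(fs * window_ms / 1000)
    noverlap = int(fs * overlap_ms / 1000)
    freqs, times, power = spectrogram(vibration, fs=fs,
                                      nperseg=nperseg, noverlap=noverlap)
    return freqs, times, 10 * np.log10(power + 1e-12)  # power in dB

# Each command ("left", "right", "up", "down") produces a distinct pattern
# in its spectrogram, which is what a downstream classifier can exploit.
```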
So how does this mechano-acoustic device recognize speech? It uses an isolated word detection system. When the researchers used the system to play Pac-Man, they began with the four voice commands “left,” “right,” “up,” and “down.” The epidermal sensor picks up the speech, and during preprocessing the signal is denoised using spectral subtraction and digitally filtered so that it can be classified accurately. Finally, classification occurs in real time using linear discriminant analysis (LDA).
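For a rough idea of how such a pipeline fits together, the sketch below strings together spectral subtraction, a simple fixed-length spectral feature vector, and scikit-learn's LDA classifier. The feature choices, helper names, and training data are assumptions for illustration, not details from the study.

```python
import numpy as np
from scipy.signal import stft, istft
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def spectral_subtraction(signal, noise, fs, nperseg=256):
    """Suppress stationary background noise by subtracting its average spectrum."""
    _, _, S = stft(signal, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)
    noise_mag = np.abs(N).mean(axis=1, keepdims=True)
    cleaned_mag = np.maximum(np.abs(S) - noise_mag, 0.0)
    _, cleaned = istft(cleaned_mag * np.exp(1j * np.angle(S)), fs=fs, nperseg=nperseg)
    return cleaned

def features(signal, n_bands=40):
    """Crude fixed-length feature vector: mean log magnitude in n_bands frequency bands."""
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bands)
    return np.log([band.mean() + 1e-12 for band in bands])

def train_word_classifier(X_train, y_train, noise_clip, fs):
    """Fit an LDA classifier on labeled command recordings (hypothetical data)."""
    feats = [features(spectral_subtraction(x, noise_clip, fs)) for x in X_train]
    return LinearDiscriminantAnalysis().fit(feats, y_train)

def classify_word(clf, recording, noise_clip, fs):
    """Predict "left"/"right"/"up"/"down" for a new vibration recording."""
    return clf.predict([features(spectral_subtraction(recording, noise_clip, fs))])[0]
```

In a real-time setting, a classifier like this would run on short buffered windows of the sensor stream, mapping each detected word directly to a game control input.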
Figure 6. Process loop for a speech-based human-machine interface / ©Liu et al., 2016
Future research involves integrating wireless data transfer, a data processing unit, and a power supply, and further tests and validation are needed for clinical applications. As for speech recognition, the device records clearly even amid ambient noise, which suggests it could be adapted to transmit clear voice signals in noisy environments such as the battlefield. The researchers also explain that their speech recognition strategies can be applied to other types of human-machine interfaces, such as drone and prosthesis control.