<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>detection Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/detection/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/detection/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 26 Jun 2021 09:37:57 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>DHS Awards $2M for Small Businesses to Develop Machine Learning for Detection Technologies</title>
		<link>https://www.aiuniverse.xyz/dhs-awards-2m-for-small-businesses-to-develop-machine-learning-for-detection-technologies/</link>
					<comments>https://www.aiuniverse.xyz/dhs-awards-2m-for-small-businesses-to-develop-machine-learning-for-detection-technologies/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 26 Jun 2021 09:37:56 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Awards]]></category>
		<category><![CDATA[Businesses]]></category>
		<category><![CDATA[detection]]></category>
		<category><![CDATA[Develop]]></category>
		<category><![CDATA[DHS]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14579</guid>

					<description><![CDATA[<p>Source &#8211; https://www.hstoday.us/ The Department of Homeland Security (DHS) Small Business Innovation Research (SBIR) Program recently awarded funding to two small businesses to develop non-contact, inexpensive machine learning training and classification technologies. Integrated machine learning platforms can significantly reduce time, redundancy, and cost, and improve accuracy in detecting threats such as explosives, chemical agents, and narcotics. “S&#38;T <a class="read-more-link" href="https://www.aiuniverse.xyz/dhs-awards-2m-for-small-businesses-to-develop-machine-learning-for-detection-technologies/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/dhs-awards-2m-for-small-businesses-to-develop-machine-learning-for-detection-technologies/">DHS Awards $2M for Small Businesses to Develop Machine Learning for Detection Technologies</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.hstoday.us/</p>



<p>The Department of Homeland Security (DHS) Small Business Innovation Research (SBIR) Program recently awarded funding to two small businesses to develop non-contact, inexpensive machine learning training and classification technologies. Integrated machine learning platforms can significantly reduce time, redundancy, and cost, and improve accuracy in detecting threats such as explosives, chemical agents, and narcotics.</p>



<p>“S&amp;T embraces the significant advances in artificial intelligence and machine learning capabilities and their ability to enhance threat detection,” said Kathryn Coulter Mitchell, DHS Senior Official Performing the Duties of the Under Secretary for Science and Technology. “The SBIR Program provides the opportunity for S&amp;T to partner with innovative small businesses and develop machine learning tools critical to addressing threat detection needs. I am looking forward to seeing the technologies that will be developed by these SBIR efforts.”</p>



<p>Physical Sciences Inc. (PSI), based in Andover, MA, and Alakai Defense Systems, Inc. (Alakai), based in Largo, FL, each received approximately $1 million in SBIR Phase II funding to develop technologies that can rapidly and accurately identify unknown spectrometer signals as safe or threatening. The DHS SBIR Program, managed by Program Director Dusty Lang and administered at the DHS Science and Technology Directorate (S&amp;T), selected PSI and Alakai for Phase II after each company demonstrated the feasibility of its compact, accurate, and rapid-classification Machine Learning Module for Detection Technologies solution in Phase I.</p>



<p>Under Phase II, PSI will continue to develop its deep-learning algorithm for the detection and classification of trace explosives, opioids, and narcotics on surfaces with optical spectroscopic systems. PSI will extend the algorithm’s capabilities from infrared reflectance spectroscopy to include Raman spectroscopy, and will deliver a proposed operational module prototype with a classification accuracy greater than 90 percent.</p>



<p>During its Phase II effort, Alakai will continue development of the Agnostic Machine Learning Platform for Spectroscopy (AMPS), which rapidly and accurately detects trace quantities of hazardous and related chemicals from a variety of spectroscopic instruments.</p>



<p>“Our impetus for developing these machine-learning modules stems from the Transportation Security Administration’s operational needs for threat signature fusion, the ability to learn, detect and classify new threats without being explicitly programmed, and, ultimately, increase accuracy of detection,” said Thoi Nguyen, DHS S&amp;T Program Manager for the Next Generation Explosive Trace Detection (NGETD) Program. “With experienced industrial partners like Alakai and PSI, and our strong collaboration with TSA, we hope these efforts will contribute to wider applications of machine learning across the Homeland Security mission space.”</p>



<p>At the completion of the 24-month Phase II contract, the SBIR awardees will have developed prototypes demonstrating the advancement of the technology, positioning them for potential Phase III funding.</p>



<p>Under Phase III, SBIR performers will seek to secure funding from private and/or non-SBIR government sources, with the eventual goal to commercialize and bring to market the technologies from Phases I and II.</p>
<p>The post <a href="https://www.aiuniverse.xyz/dhs-awards-2m-for-small-businesses-to-develop-machine-learning-for-detection-technologies/">DHS Awards $2M for Small Businesses to Develop Machine Learning for Detection Technologies</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/dhs-awards-2m-for-small-businesses-to-develop-machine-learning-for-detection-technologies/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Smart speakers use machine learning for contactless detection of heart rhythm</title>
		<link>https://www.aiuniverse.xyz/smart-speakers-use-machine-learning-for-contactless-detection-of-heart-rhythm/</link>
					<comments>https://www.aiuniverse.xyz/smart-speakers-use-machine-learning-for-contactless-detection-of-heart-rhythm/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 09 Mar 2021 11:51:22 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[contactless]]></category>
		<category><![CDATA[detection]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Smart]]></category>
		<category><![CDATA[speakers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13342</guid>

					<description><![CDATA[<p>Source &#8211; https://eandt.theiet.org/ Researchers have used smart speakers to measure the individual heartbeats of people in the same room without the need for any physical contact. A team from the University of Washington found that by sending inaudible sounds from the speaker out into a room, heartbeats can be measured based on the way the <a class="read-more-link" href="https://www.aiuniverse.xyz/smart-speakers-use-machine-learning-for-contactless-detection-of-heart-rhythm/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/smart-speakers-use-machine-learning-for-contactless-detection-of-heart-rhythm/">Smart speakers use machine learning for contactless detection of heart rhythm</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://eandt.theiet.org/</p>



<p>Researchers have used smart speakers to measure the individual heartbeats of people in the same room without the need for any physical contact.</p>



<p>A team from the University of Washington found that by sending inaudible sounds from the speaker out into a room, heartbeats can be measured based on the way the sounds are reflected back to the speaker.</p>



<p>As&nbsp;the heartbeat is such a tiny motion on the chest surface, machine learning was used to help the smart speaker locate signals from both regular and irregular heartbeats.</p>



<p>When the system was tested on healthy participants and hospitalised cardiac patients, the smart speaker detected heartbeats that closely matched the beats detected by standard heartbeat monitors.</p>



<p>“Regular heartbeats are easy enough to detect even if the signal is small, because you can look for a periodic pattern in the data,” said co-senior author Shyam Gollakota.</p>



<p>“Irregular heartbeats are really challenging because there is no such pattern. I wasn’t sure that it would be possible to detect them, so I was pleasantly surprised that our algorithms could identify irregular heartbeats during tests with cardiac patients.”</p>



<p>While many people are familiar with the concept of a heart rate, doctors are more interested in the assessment of heart rhythm. Heart rate is the average of heartbeats over time, whereas a heart rhythm describes the pattern of heartbeats.</p>



<p>For example, if a person has a heart rate of 60 beats per minute, they could have a regular heart rhythm &#8211; one beat every second &#8211; or an irregular heart rhythm, with beats randomly scattered across that minute but still averaging out to 60 beats per minute.</p>
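<p>To make the distinction concrete, here is a small illustrative Python sketch (not from the study): two simulated beat sequences both average roughly 60 beats per minute, but only one has regular inter-beat intervals.</p>

```python
import numpy as np

def rhythm_stats(beat_times):
    """Return (average rate in bpm, standard deviation of inter-beat intervals in s)."""
    ibi = np.diff(beat_times)            # inter-beat intervals
    return 60.0 / ibi.mean(), ibi.std()

regular = np.arange(60.0)                        # one beat every second
rng = np.random.default_rng(0)
irregular = np.sort(rng.uniform(0.0, 59.0, 60))  # 60 beats scattered over a minute

rate_r, jitter_r = rhythm_stats(regular)         # ~60 bpm, zero jitter
rate_i, jitter_i = rhythm_stats(irregular)       # ~60 bpm, nonzero jitter
```

Both sequences yield a similar heart rate, but only the interval jitter distinguishes the regular rhythm from the irregular one.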



<p>“Heart rhythm disorders are actually more common than some other well-known heart conditions. Cardiac arrhythmias can cause major morbidities such as strokes, but can be highly unpredictable in occurrence and thus difficult to diagnose,” said researcher Dr. Arun Sridhar.</p>



<p>“Availability of a low-cost test that can be performed frequently and at the convenience of home can be a game-changer for certain patients in terms of early diagnosis and management.”</p>



<p>The key to assessing heart rhythm lies in identifying the individual heartbeats. For this system, the search for heartbeats begins when a person sits within one or two feet of the smart speaker.</p>



<p>The system then plays an inaudible continuous sound, which bounces off the person and returns to the speaker. Based on how the returned sound has changed, the system can isolate movements of the person, including the rise and fall of their chest as they breathe.</p>
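<p>The underlying sonar idea can be sketched in a few lines of Python. This is a toy simulation, not the University of Washington system: the carrier frequency, motion amplitude, and demodulation scheme below are all assumed for illustration.</p>

```python
import numpy as np

fs = 48000                     # sample rate (Hz), assumed
f0 = 20000.0                   # inaudible carrier tone (Hz), assumed
c = 343.0                      # speed of sound (m/s)
t = np.arange(fs) / fs         # one second of samples

# Simulated chest motion: 0.5 mm breathing displacement at 0.3 Hz (assumed)
disp = 0.0005 * np.sin(2 * np.pi * 0.3 * t)

# The echo's round-trip delay varies with the displacement
rx = np.sin(2 * np.pi * f0 * (t - 2 * disp / c))

# I/Q demodulation: the echo's phase tracks the chest displacement
k = 480                        # 10 ms moving-average low-pass filter
ker = np.ones(k) / k
i_lp = np.convolve(rx * np.sin(2 * np.pi * f0 * t), ker, mode="same")
q_lp = np.convolve(rx * np.cos(2 * np.pi * f0 * t), ker, mode="same")
phase = np.arctan2(-q_lp, i_lp)            # ~ 4*pi*f0*disp / c
recovered = phase * c / (4 * np.pi * f0)   # back to metres
```

After the filter transient, the recovered waveform tracks the simulated chest motion, which is the principle the article describes.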



<p>This algorithm combines signals from all of the smart speaker’s multiple microphones to identify the elusive heartbeat signal while isolating it from other factors such as the person’s breathing.</p>
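<p>Why combining microphones helps can be shown with a minimal numpy sketch (again illustrative, not the team's algorithm): summing time-aligned channels preserves the shared signal while independent noise averages down.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000
t = np.arange(fs) / fs
heartbeat = 0.01 * np.sin(2 * np.pi * 1.2 * t)   # tiny ~72 bpm motion signal

# Seven microphones see the same (already time-aligned) signal plus independent noise
mics = np.stack([heartbeat + 0.05 * rng.standard_normal(fs) for _ in range(7)])

snr_single = heartbeat.std() / (mics[0] - heartbeat).std()
beam = mics.mean(axis=0)                          # delay-and-sum with zero delays
snr_combined = heartbeat.std() / (beam - heartbeat).std()
```

With seven independent channels, the noise floor drops by roughly the square root of the channel count, lifting the faint signal out of the noise.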



<p>“This is similar to how Alexa can always find my voice even if I’m playing a video or if there are multiple people talking in the room,” Gollakota said. “When I say, ‘Hey, Alexa,’ the microphones are working together to find me in the room and listen to what I say next. That’s basically what’s happening here, but with the heartbeat.”</p>



<p>Currently the system is set up for spot checks: if&nbsp;a person is concerned about their heart rhythm, they can sit in front of a smart speaker to get a reading.</p>



<p>The research team hopes that future versions could continuously monitor heartbeats while people are asleep, something that could help doctors diagnose conditions such as sleep apnea.</p>



<p>The post <a href="https://www.aiuniverse.xyz/smart-speakers-use-machine-learning-for-contactless-detection-of-heart-rhythm/">Smart speakers use machine learning for contactless detection of heart rhythm</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/smart-speakers-use-machine-learning-for-contactless-detection-of-heart-rhythm/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deep Learning-Based Cough Recognition Model Helps Detect Location of Coughing Sounds in Real Time</title>
		<link>https://www.aiuniverse.xyz/deep-learning-based-cough-recognition-model-helps-detect-location-of-coughing-sounds-in-real-time/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-based-cough-recognition-model-helps-detect-location-of-coughing-sounds-in-real-time/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 13 Aug 2020 06:39:33 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[coronavirus]]></category>
		<category><![CDATA[COVID-19]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[detection]]></category>
		<category><![CDATA[Disease]]></category>
		<category><![CDATA[early detection]]></category>
		<category><![CDATA[ENGINEERING]]></category>
		<category><![CDATA[Environment]]></category>
		<category><![CDATA[hospital]]></category>
		<category><![CDATA[pilot]]></category>
		<category><![CDATA[Professor]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10852</guid>

					<description><![CDATA[<p>Source: miragenews.com The Center for Noise and Vibration Control at KAIST announced that their coughing detection camera recognizes where coughing happens, visualizing the locations. The resulting cough recognition camera can track and record information about the person who coughed, their location, and the number of coughs on a real-time basis. Professor Yong-Hwa Park from the <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-based-cough-recognition-model-helps-detect-location-of-coughing-sounds-in-real-time/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-based-cough-recognition-model-helps-detect-location-of-coughing-sounds-in-real-time/">Deep Learning-Based Cough Recognition Model Helps Detect Location of Coughing Sounds in Real Time</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: miragenews.com</p>



<p>The Center for Noise and Vibration Control at KAIST announced that its coughing detection camera recognizes where coughing happens and visualizes the locations. The resulting cough recognition camera can track and record information about the person who coughed, their location, and the number of coughs in real time.</p>



<p>Professor Yong-Hwa Park from the Department of Mechanical Engineering developed a deep learning-based cough recognition model to classify coughing sounds in real time. The cough event classification model is combined with a sound camera that visualizes cough locations in public places. The research team reported a best test accuracy of 87.4%.</p>



<p>Professor Park said the system will be useful medical equipment during epidemics in public places such as schools, offices, and restaurants, and for constantly monitoring patients’ conditions in hospital rooms.</p>



<p>Fever and coughing are the most relevant respiratory disease symptoms, among which fever can be recognized remotely using thermal cameras. This new technology is expected to be very helpful for detecting epidemic transmissions in a non-contact way. The cough event classification model is combined with a sound camera that visualizes the cough event and indicates the location in the video image.</p>



<p>To develop the cough recognition model, supervised learning was conducted with a convolutional neural network (CNN). The model performs binary classification on an input of a one-second sound profile feature, classifying it as either a cough event or something else.</p>
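<p>As a rough illustration of such a binary classifier (a toy forward pass with untrained random weights, not the KAIST model), one can picture a 1-D convolution, a ReLU, global average pooling, and a sigmoid output:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Valid-mode 1-D convolution (cross-correlation) of signal x with kernel w."""
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

def cough_probability(feature, kernel, bias):
    """conv -> ReLU -> global average pool -> sigmoid, as a toy binary classifier."""
    h = np.maximum(conv1d(feature, kernel), 0.0)   # ReLU activation
    logit = h.mean() + bias                         # global average pooling
    return 1.0 / (1.0 + np.exp(-logit))             # probability of "cough"

# A one-second sound-profile feature; raw samples at 16 kHz stand in here
feature = rng.standard_normal(16000)
kernel = 0.1 * rng.standard_normal(64)              # untrained weights
p = cough_probability(feature, kernel, bias=0.0)
```

A trained model would learn the kernel weights and bias from labeled cough and non-cough clips; this sketch only shows the shape of the computation.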



<p>For training and evaluation, datasets were collected from Audioset, DEMAND, ETSI, and TIMIT. Coughing and other sounds were extracted from Audioset, and the remaining datasets were used as background noise for data augmentation so that the model could generalize to the varied background noise of public places.</p>



<p>The dataset was augmented by mixing coughing sounds and other sounds from Audioset with background noise at mixing ratios of 0.15 to 0.75; the overall volume was then scaled by factors of 0.25 to 1.0 so the model would generalize across various distances.</p>
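<p>One plausible reading of this augmentation recipe, sketched in Python (the linear mixing formula is an assumption for illustration, not taken from the paper):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(cough, noise, mix_ratio, volume):
    """Mix a cough clip with background noise, then rescale the overall volume."""
    assert 0.15 <= mix_ratio <= 0.75 and 0.25 <= volume <= 1.0
    return volume * (mix_ratio * cough + (1.0 - mix_ratio) * noise)

cough = rng.standard_normal(16000)   # stand-in for a 1 s cough clip
noise = rng.standard_normal(16000)   # stand-in for a background-noise clip
sample = augment(cough, noise,
                 mix_ratio=rng.uniform(0.15, 0.75),
                 volume=rng.uniform(0.25, 1.0))
```

Randomizing the mix ratio varies how buried the cough is in noise, and randomizing the volume mimics coughs heard from different distances.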



<p>The training and evaluation datasets were constructed by splitting the augmented dataset 9:1, and the test dataset was recorded separately in a real office environment.</p>



<p>To optimize the network model, training was conducted with various combinations of five acoustic features, including the spectrogram, Mel-scaled spectrogram, and Mel-frequency cepstral coefficients, and seven optimizers. The performance of each combination was compared on the test dataset. The best test accuracy of 87.4% was achieved with the Mel-scaled spectrogram as the acoustic feature and ASGD as the optimizer.</p>



<p>The trained cough recognition model was combined with a sound camera composed of a microphone array and a camera module. A beamforming process is applied to the collected acoustic data to determine the direction of the incoming sound source. The integrated cough recognition model then determines whether the sound is a cough; if it is, the cough location is visualized as a contour image with a ‘cough’ label at the position of the sound source in the video image.</p>
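<p>Delay-and-sum beamforming, a standard way a microphone array can estimate the direction of a sound source, can be sketched as follows (illustrative numpy; the array geometry and sample rate are assumed, not taken from the actual device):</p>

```python
import numpy as np

c = 343.0                              # speed of sound (m/s)
fs = 48000                             # sample rate (Hz), assumed
mic_x = np.array([-0.2, 0.0, 0.2])     # 3-microphone line array (m), assumed

def delays(angle_rad):
    """Per-microphone arrival delay, in samples, for a far-field source."""
    return mic_x * np.sin(angle_rad) / c * fs

def shift(x, d):
    """Circularly shift signal x by d samples (rounded)."""
    return np.roll(x, int(round(d)))

rng = np.random.default_rng(0)
src = rng.standard_normal(4800)        # broadband source, 0.1 s
true_angle = np.deg2rad(30.0)
mics = [shift(src, d) for d in delays(true_angle)]   # simulated recordings

# Steer to each candidate angle; the steering that re-aligns the channels
# maximizes the summed output power.
angles = np.deg2rad(np.arange(-90, 91, 5))
power = [np.mean(sum(shift(m, -d) for m, d in zip(mics, delays(a))) ** 2)
         for a in angles]
est_deg = float(np.rad2deg(angles[int(np.argmax(power))]))
```

Scanning candidate angles and picking the one with maximum summed power recovers the simulated source direction; the sound camera overlays such an estimate on its video image.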



<p>A pilot test of the cough recognition camera in an office environment shows that it successfully distinguishes cough events and other events even in a noisy environment. In addition, it can track the location of the person who coughed and count the number of coughs in real time. The performance will be improved further with additional training data obtained from other real environments such as hospitals and classrooms.</p>



<p>Professor Park said, “In a pandemic situation like we are experiencing with COVID-19, a cough detection camera can contribute to the prevention and early detection of epidemics in public places. Especially when applied to a hospital room, the patient’s condition can be tracked 24 hours a day and support more accurate diagnoses while reducing the effort of the medical staff.”</p>



<p>This study was conducted in collaboration with SM Instruments Inc.</p>



<p>/Public Release. The material in this public release comes from the originating organization and may be of a point-in-time nature, edited for clarity, style and length.</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-based-cough-recognition-model-helps-detect-location-of-coughing-sounds-in-real-time/">Deep Learning-Based Cough Recognition Model Helps Detect Location of Coughing Sounds in Real Time</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-based-cough-recognition-model-helps-detect-location-of-coughing-sounds-in-real-time/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
