<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UNRAVELLING Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/unravelling/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/unravelling/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 18 Jun 2021 05:33:31 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>INNOVATIONS: TOP IITS UNRAVELLING ARTIFICIAL INTELLIGENCE INITIATIVES</title>
		<link>https://www.aiuniverse.xyz/innovations-top-iits-unravelling-artificial-intelligence-initiatives/</link>
					<comments>https://www.aiuniverse.xyz/innovations-top-iits-unravelling-artificial-intelligence-initiatives/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 18 Jun 2021 05:33:28 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[IITS]]></category>
		<category><![CDATA[INITIATIVES]]></category>
		<category><![CDATA[Innovations]]></category>
		<category><![CDATA[UNRAVELLING]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14381</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ Here is a list of top artificial intelligence initiatives unleashed by the Indian Institute of Technology. A career in technology should be taught from <a class="read-more-link" href="https://www.aiuniverse.xyz/innovations-top-iits-unravelling-artificial-intelligence-initiatives/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/innovations-top-iits-unravelling-artificial-intelligence-initiatives/">INNOVATIONS: TOP IITS UNRAVELLING ARTIFICIAL INTELLIGENCE INITIATIVES</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h2 class="wp-block-heading">Here is a list of the top artificial intelligence initiatives launched by the Indian Institutes of Technology.</h2>



<p>An interest in technology careers should be nurtured from the early stages of education. Once students are drawn to artificial intelligence and its innovations, they take up tech-based courses to quench their thirst for knowledge. That is exactly what the Indian Institute of Technology (IIT) colleges in India do. First set up in Kharagpur in 1951, IITs now operate at 23 locations across the country. They educate students with practical knowledge by taking up artificial intelligence initiatives: professors and research candidates often work with students on AI-powered techniques and unravel new products for societal causes. In this article, Analytics Insight lists the top artificial intelligence initiatives launched by the Indian Institutes of Technology.</p>



<ul class="wp-block-list"><li>TOP NITS AND IITS OFFERING ARTIFICIAL INTELLIGENCE COURSES IN 2021</li><li>IIT MADRAS’ INITIATIVES ON ARTIFICIAL INTELLIGENCE</li><li>WIZARDS OF TECH: TOP ARTIFICIAL INTELLIGENCE PROFESSORS IN INDIA</li></ul>



<h4 class="wp-block-heading"><strong>IIT Madras unveils an artificial intelligence model to process texts</strong></h4>



<p>Indian Institute of Technology Madras has joined hands with AI4Bharat and released artificial intelligence models and datasets to process text in eleven Indian regional languages. AI4Bharat is a platform helping India leverage innovative AI solutions to solve societal issues. The datasets cover the country’s official languages, including Tamil, Telugu, Hindi, Malayalam, and Kannada. The multilingual AI models and datasets will provide the essential building blocks for students, faculty, start-ups, and industry to build Indian language tools and push the frontiers of the technology.</p>



<h4 class="wp-block-heading"><strong>IIT Bombay develops an AI platform for video surveillance</strong></h4>



<p>IIT Bombay has extended its state-of-the-art video surveillance platform to remotely monitor violations of social distancing norms during the Covid-19 pandemic. Initially designed in 2017, the platform, Surakshavyuh, was used for military surveillance. Later, the product evolved into an enterprise-grade, machine learning-enabled video analytics solution that can detect illegal entry and loitering, monitor perimeters, track objects, count crowds, and recognize faces. The platform was developed through an industrial collaboration between the National Centre of Excellence in Technology (NCETIS) at IIT Bombay and SrivisifAI Technologies Pvt Ltd.</p>



<h4 class="wp-block-heading"><strong>IIT Kharagpur unravels an AI tool to detect the quality of goods</strong></h4>



<p>Researchers at the Indian Institute of Technology Kharagpur have developed a portable, artificial intelligence-powered tool to automatically inspect goods manufactured by Indian micro, small, and medium enterprises (MSMEs). Most MSMEs in India rely on manual labour to check product quality, and this innovation could transform how such companies function. When aligned over a batch of goods, the device captures images of the products and sends the image feed to AI-based software for quality-control analysis, allowing the software to detect and reject low-quality products.</p>



<h4 class="wp-block-heading"><strong>IIT Madras develops an AI platform to solve complex engineering problems</strong></h4>



<p>A group of experts from the Indian Institute of Technology Madras has developed AI algorithms that enable novel applications using artificial intelligence, machine learning, and deep learning models to help solve complex engineering problems. Led by Dr. Vishal Nandigana, an assistant professor in the department of mechanical engineering, the team worked on AI and deep learning algorithms that have the potential to solve a wide variety of problems in engineering fields including thermal management, semiconductors, automobiles, electronics, and aerospace.</p>



<h4 class="wp-block-heading"><strong>IIT Madras launches an AI model to convert brain signals into the English language</strong></h4>



<p>A research team from the Indian Institute of Technology Madras has worked closely with brain-machine interfaces and developed an AI model that can convert the brain signals of speech-impaired persons into language. Besides helping impaired people, the technology can also be applied to interpret signals in nature, such as a plant’s photosynthesis process or its response to an external stimulus. The AI model decodes brain waveforms using physical laws and mathematical transforms such as the Fourier transform and the Laplace transform. By combining mathematical and physical laws, the model can interpret brain signals.</p>
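<p>The transform step described above can be illustrated with a minimal Python sketch. This is not the IIT Madras model; the waveform, sampling rate, and frequencies below are invented purely to show how a Fourier transform recovers the dominant components of a noisy signal:</p>

```python
import numpy as np

# Synthetic "waveform": two sine components plus noise (illustrative only).
fs = 1000                       # sampling rate in Hz
t = np.arange(0, 1, 1 / fs)     # one second of samples
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
signal += 0.1 * rng.normal(size=t.size)

# Fourier transform: move from the time domain to the frequency domain.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# The two largest spectral peaks recover the component frequencies.
top_two = np.sort(freqs[np.argsort(spectrum)[-2:]])
print(top_two)  # → [12. 40.]
```

<p>The same principle, applied to far richer models, lets a decoder work with the frequency content of a waveform rather than its raw samples.</p>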



<h4 class="wp-block-heading"><strong>IIT Kharagpur unravels an AI-driven gadget to keep the intruders at bay</strong></h4>



<p>To combat the increasing threat of animals entering agricultural fields in rural and semi-reserved areas, and the menace of robbers in residential environments, the Indian Institute of Technology Kharagpur has developed an artificial intelligence-powered gadget to keep such threats at bay. The Farm Surveillance-cum-Animal Scarer (FSCAS), an artificial intelligence-driven system, can be installed at a strategic spot in a field or residential area. When it detects an animal or an intruder, it sounds an alarm and sends alerts to the user’s mobile phone.</p>



<h4 class="wp-block-heading"><strong>IIT Mandi develops an AI-powered app to track people on home quarantine</strong></h4>



<p>Indian Institute of Technology Mandi has developed an artificial intelligence-powered biometric application to monitor and accurately verify the identity and location of home-quarantined Covid-19 patients. Lakshman Rekha, the AI-based mobile application, uses a combination of biometric verification, geofencing, and artificial intelligence to detect whether patients breach their quarantine boundaries.</p>
<p>The post <a href="https://www.aiuniverse.xyz/innovations-top-iits-unravelling-artificial-intelligence-initiatives/">INNOVATIONS: TOP IITS UNRAVELLING ARTIFICIAL INTELLIGENCE INITIATIVES</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/innovations-top-iits-unravelling-artificial-intelligence-initiatives/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>UNRAVELLING A NEW ALGORITHM CAPABLE OF REDUCING THE COMPLEXITY OF DATA</title>
		<link>https://www.aiuniverse.xyz/unravelling-a-new-algorithm-capable-of-reducing-the-complexity-of-data/</link>
					<comments>https://www.aiuniverse.xyz/unravelling-a-new-algorithm-capable-of-reducing-the-complexity-of-data/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 18 Mar 2021 06:23:03 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[algorithm]]></category>
		<category><![CDATA[CAPABLE]]></category>
		<category><![CDATA[complexity]]></category>
		<category><![CDATA[data]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[REDUCING]]></category>
		<category><![CDATA[UNRAVELLING]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13588</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ The new algorithm is an effective machine learning tool that is capable of extracting the desired information Big data, evidently, is too large to <a class="read-more-link" href="https://www.aiuniverse.xyz/unravelling-a-new-algorithm-capable-of-reducing-the-complexity-of-data/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/unravelling-a-new-algorithm-capable-of-reducing-the-complexity-of-data/">UNRAVELLING A NEW ALGORITHM CAPABLE OF REDUCING THE COMPLEXITY OF DATA</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h2 class="wp-block-heading">The new algorithm is an effective machine learning tool that is capable of extracting the desired information</h2>



<p>Big data is, by definition, too large to be processed using conventional data processing tools and techniques. Most information systems produce data in quantities so huge that they are difficult to measure. The complex big data that organizations have to deal with is commonly characterized by high volume, high value, high variability, high velocity, wide variety, and low veracity.</p>



<p>Yet another area that generates huge amounts of data is scientific experimentation. Over time, researchers have developed highly efficient ways to plan, conduct, and assess research, built on a combination of computational, algorithmic, statistical, and mathematical techniques. Whenever a scientific experiment is conducted, the results are usually transformed into numbers, ultimately producing huge datasets. Such big data is not easy to handle, and extracting meaningful insights from it is trickier still. This is why every possible method of reducing the size of the data is being employed and tested. Today, different types of algorithms are used to reduce data size and to extract the principal features and insights, shedding light on the most critical part of the data: its statistical properties. On the downside, certain algorithms cannot be applied directly to such large volumes of big data.</p>
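<p>As a generic illustration of this kind of size reduction (not the specific algorithm discussed in this article), principal component analysis (PCA) projects high-dimensional data onto the few directions that carry most of its variance. The dataset below is synthetic and chosen so its real structure lives in two latent directions:</p>

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic high-dimensional data: 500 samples in 50 dimensions,
# but the real structure lives in just 2 latent directions.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 50))
X = latent @ mixing + 0.01 * rng.normal(size=(500, 50))

# PCA via eigendecomposition of the covariance matrix.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
components = eigvecs[:, ::-1][:, :2]     # top-2 principal directions

reduced = Xc @ components                # 50-D data compressed to 2-D
explained = eigvals[::-1][:2].sum() / eigvals.sum()
print(reduced.shape)  # → (500, 2)
```

<p>Here two components retain nearly all of the variance, so the 50-dimensional dataset can be analysed in two dimensions with almost no loss of statistical information.</p>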



<p>With many researchers and programmers devising ways to deal with this enormous volume of big data optimally, Reza Oftadeh, a doctoral student in the Department of Computer Science and Engineering at Texas A&amp;M University, took a step in this direction. Oftadeh developed an algorithm that is, according to him, an effective machine learning tool because it is capable of extracting the desired information. Together with his team, which comprises other doctoral students and assistant professors, he published the research in the proceedings of the 2020 International Conference on Machine Learning. The research was funded by the National Science Foundation and a U.S. Army Research Office Young Investigator Award.</p>



<p>There is a fair chance that the dataset under consideration has high dimensionality, meaning it has a great many features, and the problem with high dimensionality is that it hurts a model’s ability to generalize. This is why so much effort goes into reducing the dimensionality of data. Once the areas that need dimensionality reduction are identified, annotated samples are prepared to ease further analysis. Beyond this, tasks such as classification, visualization, and modelling also benefit from the smoother workflow.</p>



<p>This is not the first time such algorithms and methodologies have been put in place; they have been around for quite some time. But with big data growing exponentially, analysing it has become not just time-consuming but complicated, which led to the rise of artificial neural networks (ANNs), one of the great innovations the world has seen on the technical front. Artificial neural networks are made up of many interconnected artificial neurons whose task is to extract meaningful information from the dataset provided. In simple terms, ANNs are models with a well-defined architecture of interconnected artificial neurons, designed to simulate how the human brain analyses and processes data. ANNs have seen numerous applications, and the one that sets them apart is their ability to classify big data into different categories based on its features.</p>



<p>Asked for his views, Oftadeh began by noting how much we rely on ANNs in our day-to-day life, citing the examples of Alexa, Siri, and Google Translate, which are trained to understand what a person is saying. However, he also pointed out that not all the features a model learns are equally significant. He illustrated this with a specific type of ANN called an “autoencoder”, which cannot tell where the features are located or which features are more critical than the rest. Running the model repeatedly does not solve the problem, as that too is time-consuming.</p>
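<p>The autoencoder idea can be sketched in a few lines of plain NumPy. This toy linear autoencoder (not the researchers’ model; the data and layer sizes are invented) learns to compress 8-dimensional samples into a 3-dimensional code and reconstruct them:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 8 dimensions with 3-D latent structure.
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 8))

# Linear autoencoder: encode 8 -> 3, decode 3 -> 8.
W_enc = rng.normal(scale=0.1, size=(8, 3))
W_dec = rng.normal(scale=0.1, size=(3, 8))

def loss(X, W_enc, W_dec):
    recon = X @ W_enc @ W_dec
    return float(np.mean((recon - X) ** 2))

lr = 0.01
start = loss(X, W_enc, W_dec)
for _ in range(2000):
    code = X @ W_enc            # compressed 3-D representation
    err = code @ W_dec - X      # reconstruction error
    grad_dec = (code.T @ err) / len(X)
    grad_enc = (X.T @ (err @ W_dec.T)) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(start, "->", loss(X, W_enc, W_dec))  # reconstruction error shrinks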



<p>Oftadeh and his team aim to take their algorithm to the next level by adding a new cost function to the network, which makes it possible to pinpoint the exact location of important features. To demonstrate this, they conducted an optical character recognition (OCR) experiment, training their machine learning model to convert images of typed and handwritten text from digitized physical documents into machine-encoded text. Once trained for OCR, the model can tell which features are most important and should be prioritized. The team claims their machine learning tool would scale to bigger datasets as well, resulting in improved data analysis.</p>



<p>As of now, the algorithm can handle only one-dimensional data samples, but the team intends to extend its capabilities to more complex, unstructured data. They are prepared to face whatever challenges arise and to push the algorithm as far as possible. They also plan to generalize their method in order to provide a unified framework for producing other machine learning methods. Ultimately, the objective remains to extract features from a smaller set of specifications.</p>
<p>The post <a href="https://www.aiuniverse.xyz/unravelling-a-new-algorithm-capable-of-reducing-the-complexity-of-data/">UNRAVELLING A NEW ALGORITHM CAPABLE OF REDUCING THE COMPLEXITY OF DATA</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/unravelling-a-new-algorithm-capable-of-reducing-the-complexity-of-data/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>UNRAVELLING TRANSFER LEARNING TO MAKE MACHINES MORE ADVANCED</title>
		<link>https://www.aiuniverse.xyz/unravelling-transfer-learning-to-make-machines-more-advanced/</link>
					<comments>https://www.aiuniverse.xyz/unravelling-transfer-learning-to-make-machines-more-advanced/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 23 Feb 2021 10:33:06 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[advanced]]></category>
		<category><![CDATA[Learning]]></category>
		<category><![CDATA[machines]]></category>
		<category><![CDATA[transfer]]></category>
		<category><![CDATA[UNRAVELLING]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13028</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ Researchers have embraced transfer learning to address algorithm challenges Advanced machines never fail to leave men in awe. But only researchers who worked behind the <a class="read-more-link" href="https://www.aiuniverse.xyz/unravelling-transfer-learning-to-make-machines-more-advanced/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/unravelling-transfer-learning-to-make-machines-more-advanced/">UNRAVELLING TRANSFER LEARNING TO MAKE MACHINES MORE ADVANCED</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h1 class="wp-block-heading">Researchers have embraced transfer learning to address algorithm challenges</h1>



<p>Advanced machines never fail to leave people in awe, but only the researchers behind them know how much time, money, and data it took to reach that stage. Training an algorithm that drives the many features of a machine is nerve-wracking work, but tech experts have found a solution in transfer learning. Companies are also combining technologies such as deep learning neural networks and machine learning to build futuristic machines.</p>



<p>We often assume that number-crunching gets cheaper all the time. According to Moore’s law, the number of components that can be squeezed onto a microchip of a given size doubles roughly every two years, increasing the computational power available at a given cost. This might suggest that the cost of training a machine is falling, but that is not true. Just because data is everywhere and easily available does not mean it is open to use or inexpensive. Even when data is freely accessible, training an algorithm takes far more effort than most other computational processes. Industry analysts anticipate that worldwide spending on artificial intelligence will reach US$100 billion in 2024, double what it is today.</p>



<p>The advantage of machine learning and artificial intelligence algorithms is that they can understand information and act and interact with our environment in a natural, human way. But a model’s performance depends heavily on the computational power allocated and on the quantity and quality of data. A study conducted by Dimensional Research found that around 96% of organizations run into problems with training data quality and quantity, and that most machine learning projects require more than 100,000 data samples to perform effectively. A machine learning system is still programmed with standard ones-and-zeros logic, but it can modify its behavior to meet specialized goals based on patterns it discovers in the sample data. Hence, a machine learning algorithm needs to be trained with good data, meaning data optimized for the problem you are dealing with. Fortunately, transfer learning can help: it takes knowledge gained from a pre-trained model built for one task and applies it to a different but similar problem in the same domain. A mixed array of technologies, such as deep learning neural networks and machine learning, is also making the training process less burdensome.</p>



<h3 class="wp-block-heading"><strong>Transfer learning addresses algorithm challenges</strong></h3>



<p>Transfer learning is a machine learning method in which a model developed for one task is reused as the starting point for a model on a second task. It is a popular approach in deep learning, where pre-trained models serve as the starting point for computer vision and natural language processing tasks, given the vast compute and time resources required to develop neural network models for these problems and the huge jumps in skill that pre-trained models provide on related problems.</p>
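<p>The reuse described above can be sketched in plain NumPy. In this toy example (both tasks, all sizes, and all data are invented for illustration), a tiny network is pretrained on one regression task, its hidden layer is then frozen, and only a new output head is fitted on a related task:</p>

```python
import numpy as np

rng = np.random.default_rng(1)

def hidden(X, W):
    return np.tanh(X @ W)   # shared feature extractor

# --- Pretrain on task A (synthetic regression) ---------------------
W = rng.normal(scale=0.5, size=(10, 16))   # hidden weights to be learned
v = np.zeros(16)                           # task-A output head
X_a = rng.normal(size=(400, 10))
y_a = np.sin(X_a[:, 0]) + X_a[:, 1] ** 2   # nonlinear task-A target

lr = 0.05
for _ in range(500):
    H = hidden(X_a, W)
    err = H @ v - y_a
    grad_v = H.T @ err / len(X_a)
    grad_W = X_a.T @ ((err[:, None] * v) * (1 - H ** 2)) / len(X_a)
    v -= lr * grad_v
    W -= lr * grad_W

# --- Transfer to task B: freeze W, train only a new head -----------
X_b = rng.normal(size=(100, 10))
y_b = np.sin(X_b[:, 0]) - 0.5 * X_b[:, 1] ** 2    # related task-B target
H_b = hidden(X_b, W)                              # frozen pretrained features
head, *_ = np.linalg.lstsq(H_b, y_b, rcond=None)  # fit only the new head
mse = float(np.mean((H_b @ head - y_b) ** 2))
print(round(mse, 3))
```

<p>Because the frozen features were learned on a related problem, only the small output head has to be trained for the new task, which is exactly the saving in data and compute that transfer learning promises.</p>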



<p>Remarkably, with transfer learning, instead of starting the learning process from scratch, you start from patterns learned while solving a different problem, leveraging previous learning rather than starting from nothing. Transfer learning is usually expressed through pre-trained models trained on a large dataset to solve a problem similar to the one we want to solve. One well-known example is GPT-3, the largest natural language machine learning model built to date. GPT-3 is a language prediction model whose algorithm takes one piece of language and transforms it into what it predicts is the most useful following piece of language for the user. Behind the mechanism, machine learning, deep learning, and transfer learning technologies help the model produce humanlike predictive text.</p>



<p>Beyond this, big tech conglomerates such as Microsoft, AWS, NVIDIA, and IBM have embraced transfer learning toolkits to remove the burden of building models from scratch, address data quality and quantity challenges, and expedite production machine learning.</p>
<p>The post <a href="https://www.aiuniverse.xyz/unravelling-transfer-learning-to-make-machines-more-advanced/">UNRAVELLING TRANSFER LEARNING TO MAKE MACHINES MORE ADVANCED</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/unravelling-transfer-learning-to-make-machines-more-advanced/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
