<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI models Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/ai-models/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/ai-models/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 10 Oct 2020 06:09:56 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>NVIDIA NeMo: An Open-Source Toolkit For Developing State-Of-The-Art Conversational AI Models In Three Lines Of Code</title>
		<link>https://www.aiuniverse.xyz/nvidia-nemo-an-open-source-toolkit-for-developing-state-of-the-art-conversational-ai-models-in-three-lines-of-code/</link>
					<comments>https://www.aiuniverse.xyz/nvidia-nemo-an-open-source-toolkit-for-developing-state-of-the-art-conversational-ai-models-in-three-lines-of-code/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 10 Oct 2020 06:09:52 +0000</pubDate>
				<category><![CDATA[PyTorch]]></category>
		<category><![CDATA[AI models]]></category>
		<category><![CDATA[Developing]]></category>
		<category><![CDATA[Neural modules]]></category>
		<category><![CDATA[Nvidia]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12093</guid>

					<description><![CDATA[<p>Source: marktechpost.com NVIDIA’s open-source toolkit, NVIDIA NeMo (Neural Modules), is a revolutionary step towards the advancement of Conversational AI. Based on PyTorch, it allows one to build <a class="read-more-link" href="https://www.aiuniverse.xyz/nvidia-nemo-an-open-source-toolkit-for-developing-state-of-the-art-conversational-ai-models-in-three-lines-of-code/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/nvidia-nemo-an-open-source-toolkit-for-developing-state-of-the-art-conversational-ai-models-in-three-lines-of-code/">NVIDIA NeMo: An Open-Source Toolkit For Developing State-Of-The-Art Conversational AI Models In Three Lines Of Code</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: marktechpost.com</p>



<p>NVIDIA’s open-source toolkit, NVIDIA NeMo (Neural Modules), is a revolutionary step towards the advancement of Conversational AI. Based on PyTorch, it allows one to quickly build, train, and fine-tune conversational AI models.</p>
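

<p>To make the “three lines” claim concrete, here is a minimal sketch of loading a pretrained speech recognition model from the ASR collection and transcribing an audio file. It assumes a NeMo 1.x-era installation; the checkpoint name “QuartzNet15x5Base-En” and the file “sample.wav” are illustrative placeholders, and exact method names vary between NeMo releases.</p>


<pre class="wp-block-code"><code># Minimal sketch: load a pretrained NeMo ASR model and transcribe audio.
# Names follow NeMo's ~1.x API and may differ in other versions.
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")
print(asr_model.transcribe(paths2audio_files=["sample.wav"]))</code></pre>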



<p>As the world becomes more digital, Conversational AI is a way to enable communication between humans and computers. It is the set of technologies behind applications such as automated messaging, speech recognition, voice chatbots, and text-to-speech. It broadly comprises three areas of AI research: automatic speech recognition (ASR), natural language processing (NLP), and speech synthesis (or text-to-speech, TTS).</p>



<p>Conversational AI has shaped the path of human-computer interaction, making it more accessible and exciting. The latest advancements in Conversational AI like NVIDIA NeMo help bridge the gap between machines and humans.</p>



<p>NVIDIA NeMo consists of two subparts: NeMo Core and NeMo Collections. NeMo Core provides the common building blocks shared by all models, whereas NeMo Collections provides models for specific domains. In NeMo’s speech collection (nemo_asr), you’ll find models and various building blocks for speech recognition, command recognition, speaker identification, speaker verification, and voice activity detection. NeMo’s NLP collection (nemo_nlp) contains models for tasks such as question answering, punctuation, named entity recognition, and many others. Finally, in NeMo’s speech synthesis collection (nemo_tts), you’ll find several spectrogram generators and vocoders, which let you generate synthetic speech.</p>



<p>There are three main concepts in NeMo: model, neural module, and neural type.&nbsp;</p>



<ul class="wp-block-list"><li><strong>Models</strong>&nbsp;contain all the necessary information regarding training, fine-tuning, neural network implementation, tokenization, data augmentation, infrastructure details like the number of GPU nodes,etc., optimization algorithm, etc.</li><li><strong>Neural modules</strong>&nbsp;are a sort of encoder-decoder architecture consisting of conceptual building blocks responsible for different tasks. It represents the logical part of a neural network and forms the basis for describing the model and its training process. Collections have many neural modules that can be reused whenever required.</li><li>Inputs and outputs to Neural Modules are typed with&nbsp;<strong>Neural Types</strong>. A Neural Type is a pair that contains the information about the tensor’s axes layout and semantics of its elements. Every Neural Module has input_types and output_types properties that describe what kinds of inputs this module accepts and what types of outputs it returns.</li></ul>



<p>NeMo is based on PyTorch and works well with related projects such as PyTorch Lightning and Hydra. Integration with Lightning makes it easier to train models with mixed precision using Tensor Cores and to scale training to multiple GPUs and compute nodes, and it brings features like logging, checkpointing, and overfit checking. Hydra allows training scripts to be parametrized through configuration files, keeping them well organized and streamlining everyday tasks for users.</p>
<p>The post <a href="https://www.aiuniverse.xyz/nvidia-nemo-an-open-source-toolkit-for-developing-state-of-the-art-conversational-ai-models-in-three-lines-of-code/">NVIDIA NeMo: An Open-Source Toolkit For Developing State-Of-The-Art Conversational AI Models In Three Lines Of Code</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/nvidia-nemo-an-open-source-toolkit-for-developing-state-of-the-art-conversational-ai-models-in-three-lines-of-code/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Peeking Inside the Black Box: Techniques for Making AI Models More Easily Interpretable</title>
		<link>https://www.aiuniverse.xyz/peeking-inside-the-black-box-techniques-for-making-ai-models-more-easily-interpretable/</link>
					<comments>https://www.aiuniverse.xyz/peeking-inside-the-black-box-techniques-for-making-ai-models-more-easily-interpretable/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 19 May 2020 06:51:38 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Explainability]]></category>
		<category><![CDATA[AI models]]></category>
		<category><![CDATA[Artifical intelligence]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[techniques]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8866</guid>

					<description><![CDATA[<p>Source: rtinsights.com When training a machine learning or AI model, typically the main goal is to make the most accurate prediction possible. Data scientists and machine learning <a class="read-more-link" href="https://www.aiuniverse.xyz/peeking-inside-the-black-box-techniques-for-making-ai-models-more-easily-interpretable/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/peeking-inside-the-black-box-techniques-for-making-ai-models-more-easily-interpretable/">Peeking Inside the Black Box: Techniques for Making AI Models More Easily Interpretable</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: rtinsights.com</p>



<p>When training a machine learning or AI model, typically the main goal is to make the most accurate prediction possible. Data scientists and machine learning engineers will transform their data in myriad ways and tweak algorithms however they can to bring that accuracy score as close to 100 percent as possible. This can unintentionally lead to a model that is difficult to interpret or that creates ethical quandaries.</p>



<p>Considering the increasing awareness and consequences of faulty AI, explainable AI is going to be “one of the seminal issues that’s going to be facing data science over the next ten years,” Josh Poduska, Chief Data Scientist at Domino Data Lab, noted during his talk at the recent virtual Open Data Science Conference (ODSC) East.</p>



<p><strong>What is Explainable AI?</strong></p>



<p>Explainable AI, or xAI, is the concept of understanding what is happening “under the hood” of AI models and not just taking the most accurate model and blindly trusting its results.</p>



<p>It is important because machine learning models, and in particular neural networks, have a reputation for being “black boxes,” where we do not really know how the algorithm came up with its prediction. All we know is how well it performed.</p>



<p>Models that are not easily explainable or interpretable can lead to some of the following problems:</p>



<ul class="wp-block-list"><li>Models that are not understood by the end user could be used inappropriately or, in fact, could be wrong altogether.</li><li>Ethical issues that arise in models that have some bias towards or against certain groups of people.</li><li>Customers may require models that are interpretable, otherwise they may not end up using them at all.</li></ul>



<p>Furthermore, there are recent regulations, and potentially new ones in the future, that may require models, at least in certain contexts, to be explainable. As Poduska explains, GDPR gives customers the right to understand why a model gave a certain outcome. For example, if a banking customer’s loan application was rejected, that customer has a right to know what contributed to this model result.</p>



<p>So, how do we address these issues and create AI models that are more easily interpretable? The first step is to understand how one wants to apply the model. Poduska explains that there is a balance between “global” and “local” explainability.</p>



<p>Global interpretability refers to understanding generally the resulting predictions from different examples that you feed your model. In other words, if an online store is trying to predict who will buy a certain item, a model may find that people within a certain age range who have bought a similar item in the past will purchase that item.</p>



<p>In the case of local interpretability, one is trying to understand how the model came up with its result for one particular input example. In other words, how much does age versus purchase history affect the prediction of one person’s future buying habits?</p>



<h3 class="wp-block-heading"><strong>Techniques for Understanding AI Reasoning</strong></h3>



<p>One standard option that has been around for a while is the concept of feature importance, which is often examined in training decision tree models, such as a random forest. However, there are issues with this method.</p>
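

<p>As a rough illustration (not from the original talk), impurity-based feature importances can be read directly off a trained scikit-learn random forest:</p>


<pre class="wp-block-code"><code># Illustrative sketch: global feature importance from a random forest,
# using scikit-learn's impurity-based importances on a toy dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank features by how much they reduce impurity across the forest's trees.
for name, score in sorted(zip(X.columns, model.feature_importances_),
                          key=lambda item: item[1], reverse=True)[:5]:
    print(f"{name}: {score:.3f}")</code></pre>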



<p>A more sophisticated option is called SHAP (SHapley Additive exPlanations). The basic idea behind this option is to hold one input feature of the model constant and randomize the other features, in order to estimate how that feature contributes to the prediction. The downside here is that this method can be very computationally expensive, especially for models with a large number of input features.</p>
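

<p>A minimal SHAP sketch, assuming the shap package is installed, might look like the following; TreeExplainer is the fast path for tree-based models, and the summary plot gives a global view of which features matter most:</p>


<pre class="wp-block-code"><code># Illustrative sketch: SHAP values for a small random forest on a toy dataset.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # per-feature contribution to each prediction
shap.summary_plot(shap_values, X)        # global summary of feature impact</code></pre>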



<p>For understanding a model on a local level, LIME (Local Interpretable Model-agnostic Explanations) builds a simpler, linear model around each prediction of the original model in order to understand an individual prediction. This method is much faster, computationally, than SHAP, but is focused on local interpretability.</p>
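

<p>A corresponding LIME sketch, assuming the lime package is installed, explains a single prediction by fitting a simple local surrogate around it:</p>


<pre class="wp-block-code"><code># Illustrative sketch: a local LIME explanation for one prediction.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X.values,
                                 feature_names=list(X.columns),
                                 class_names=["malignant", "benign"],
                                 mode="classification")
explanation = explainer.explain_instance(X.values[0], model.predict_proba, num_features=5)
print(explanation.as_list())   # top local feature contributions for this one example</code></pre>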



<p>Going even further than the above solutions, some designers of machine learning algorithms are starting to reconstruct the underlying mathematics of these algorithms in order to give better interpretability and high accuracy simultaneously. One such algorithm is AddTree.</p>



<p>When training an AddTree model, one of the hyperparameters of the model is how interpretable the model should be. Depending on how this hyperparameter is set, the AddTree algorithm will train a decision tree model that is either weighted toward better explainability or toward higher accuracy.</p>



<p>For deep neural networks, two options are TCAV and Interpretable CNNs. TCAV (Testing with Concept Activation Vectors) is focused on global interpretability, in particular showing how important different everyday concepts are for making different predictions. For example, how important is color in predicting whether an image is a cat or not?</p>



<p>Interpretable CNNs are a modification of convolutional neural networks in which the algorithm automatically forces each filter to represent a distinct part of an object in an image. For example, when training on images of a cat, a standard CNN may have a layer whose filters respond to many different parts of a cat, whereas the Interpretable CNN has a filter that identifies just a cat’s head.</p>



<p>If your goal is to be able to better understand and explain an existing model, techniques like SHAP and LIME are good options. However, as the demands for more explainable AI continue to increase, even more models will be built in the coming years that have interpretability baked into the algorithm itself, Poduska predicts.</p>



<p>Poduska has a preview of some of these techniques here. These new algorithms will make it easier for all machine learning practitioners to produce explainable models that will hopefully make businesses, customers, and governments more comfortable with the ever-increasing reach of AI.</p>
<p>The post <a href="https://www.aiuniverse.xyz/peeking-inside-the-black-box-techniques-for-making-ai-models-more-easily-interpretable/">Peeking Inside the Black Box: Techniques for Making AI Models More Easily Interpretable</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/peeking-inside-the-black-box-techniques-for-making-ai-models-more-easily-interpretable/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google releases API to train smaller, faster AI models</title>
		<link>https://www.aiuniverse.xyz/google-releases-api-to-train-smaller-faster-ai-models/</link>
					<comments>https://www.aiuniverse.xyz/google-releases-api-to-train-smaller-faster-ai-models/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 09 Apr 2020 08:30:49 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AI models]]></category>
		<category><![CDATA[API]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Google]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8070</guid>

					<description><![CDATA[<p>Source: venturebeat.com Google today released Quantization Aware Training (QAT) API, which enables developers to train and deploy models with the performance benefits of quantization — the process of mapping <a class="read-more-link" href="https://www.aiuniverse.xyz/google-releases-api-to-train-smaller-faster-ai-models/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-releases-api-to-train-smaller-faster-ai-models/">Google releases API to train smaller, faster AI models</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: venturebeat.com</p>



<p>Google today released Quantization Aware Training (QAT) API, which enables developers to train and deploy models with the performance benefits of quantization — the process of mapping input values from a large set to output values in a smaller set — while retaining close to their original accuracy. The goal is to support the development of smaller, faster, and more efficient machine learning models well-suited to run on off-the-shelf machines, such as those in medium- and small-business environments where computation resources are at a premium.</p>



<p>Often, the process of going from a higher to lower precision is noisy. That’s because quantization squeezes a small range of floating-point values into a fixed number of information buckets, leading to information loss similar to rounding errors when fractional values are represented as integers. (For example, all values in range [2.0, 2.3] might be represented in a single bucket.) Problematically, when the lossy numbers are used in several computations, the losses accumulate and need to be rescaled for the next computation.</p>



<p>The QAT API solves this by simulating low-precision computation during the AI model training process. Quantization error is introduced as noise throughout the training, which QAT API’s algorithm tries to minimize so that it learns variables that are more robust to quantization. A training graph leverages operations that convert floating-point objects into low-precision values and then convert low-precision values back into floating-point, ensuring that quantization losses are introduced in the computation and that further computations emulate low-precision.</p>
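

<p>In practice, the QAT API is exposed through the TensorFlow Model Optimization Toolkit. A minimal sketch of wrapping a Keras model for quantization-aware training (the tiny model and data shapes here are placeholders) might look like this:</p>


<pre class="wp-block-code"><code># Minimal sketch of quantization-aware training with the TensorFlow Model
# Optimization Toolkit; the toy model is an illustrative placeholder.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Wrap the model so training emulates low-precision arithmetic and records
# the statistics needed for later quantization.
q_aware_model = tfmot.quantization.keras.quantize_model(model)
q_aware_model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
# q_aware_model.fit(x_train, y_train, epochs=1)   # then train as usual</code></pre>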



<p>In tests, Google reports that an image classification model (MobilenetV1 224) with a non-quantized accuracy of 71.03% achieved 71.06% accuracy after quantization when tested on the open source Imagenet data set. Another classification model (Nasnet-Mobile) tested against the same data set only experienced a 1% loss in accuracy (74% to 73%) post-quantization.</p>



<p>Aside from emulating the reduced precision computation, QAT API is responsible for recording the statistics necessary to quantize a trained model or parts of it. This enables developers to convert a model trained with the API to a quantized integer-only TensorFlow Lite model, for example, or to experiment with various quantization strategies while simulating how quantization affects accuracy for different hardware backends.</p>
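

<p>Continuing the sketch above, converting the quantization-aware model to a quantized TensorFlow Lite model follows the standard TensorFlow 2.x converter path:</p>


<pre class="wp-block-code"><code># Sketch: convert the QAT-trained Keras model from the previous snippet
# into a quantized TensorFlow Lite flatbuffer.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # enable default quantization
tflite_model = converter.convert()

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)</code></pre>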



<p>Google says that by default, QAT API — which is a part of the TensorFlow Model Optimization Toolkit — is configured to work with the quantized execution support available in TensorFlow Lite, Google’s toolset designed to adapt models architected on its TensorFlow machine learning framework to mobile, embedded, and internet of things devices. “We are very excited to see how the QAT API further enables TensorFlow users to push the boundaries of efficient execution in their TensorFlow Lite-powered products as well as how it opens the door to researching new quantization algorithms and further developing new hardware platforms with different levels of precision,” wrote Google in a blog post.</p>



<p>The formal launch of the QAT API comes after the unveiling of TensorFlow Quantum, a machine learning framework for training quantum models, at the TensorFlow Dev Summit. The QAT API was previewed during a recorded session at the conference.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-releases-api-to-train-smaller-faster-ai-models/">Google releases API to train smaller, faster AI models</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-releases-api-to-train-smaller-faster-ai-models/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google’s new SEED RL framework reduces AI model training costs by 80%</title>
		<link>https://www.aiuniverse.xyz/googles-new-seed-rl-framework-reduces-ai-model-training-costs-by-80/</link>
					<comments>https://www.aiuniverse.xyz/googles-new-seed-rl-framework-reduces-ai-model-training-costs-by-80/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 26 Mar 2020 07:49:33 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[AI models]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[framework]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[SEED]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7740</guid>

					<description><![CDATA[<p>Source: siliconangle.com Researchers at Google have open-sourced a new framework that can scale up artificial intelligence model training across thousands of machines. It’s a promising development because it should <a class="read-more-link" href="https://www.aiuniverse.xyz/googles-new-seed-rl-framework-reduces-ai-model-training-costs-by-80/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-new-seed-rl-framework-reduces-ai-model-training-costs-by-80/">Google’s new SEED RL framework reduces AI model training costs by 80%</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: siliconangle.com</p>



<p>Researchers at Google have open-sourced a new framework that can scale up artificial intelligence model training across thousands of machines.</p>



<p>It’s a promising development because it should enable AI algorithm training to be performed at millions of frames per second while reducing the costs of doing so by as much as 80%, Google noted in a research paper.</p>



<p>That kind of reduction could help to level the playing field a bit for startups that previously haven’t been able to compete with major players such as Google in AI. Indeed, training sophisticated machine learning models in the cloud is surprisingly expensive.</p>



<p>One recent report by Synced found that the University of Washington racked up $25,000 in costs to train its Grover model, which is used to detect and generate fake news. Meanwhile, OpenAI paid $256 per hour to train its GPT-2 language model, while Google itself spent around $6,912 to train its BERT model for natural language processing tasks.</p>



<p>SEED RL is built atop the TensorFlow 2.0 framework and works by leveraging a combination of graphics processing units and tensor processing units to centralize model inference. Inference is performed centrally by a learner component that also trains the model.</p>



<p>The target model’s variables and state information are kept local, and observations on them are sent to the learner at every step of the process. SEED RL also uses a network library based on the open-source universal RPC framework to minimize latency.</p>



<p>Google’s researchers said the learner component of SEED RL can be scaled across thousands of cores, while the actors, which iterate between taking steps in the environment and running inference on the model to predict the next action, can scale to thousands of machines.</p>
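

<p>The division of labor described above can be sketched conceptually as follows. This is an illustrative toy, not the SEED RL API: in the real framework, observations are streamed to the learner over its RPC layer and inference is batched on the accelerators.</p>


<pre class="wp-block-code"><code># Conceptual sketch of the actor/learner split (illustrative only, not SEED RL).
# Actors only step the environment; the learner holds the model and performs
# all inference centrally, collecting trajectories for training.
import queue
import threading
import numpy as np

class Learner:
    def __init__(self, num_actions):
        self.num_actions = num_actions          # stand-in for the policy network
        self.trajectory_queue = queue.Queue()   # consumed by the training loop

    def infer(self, observation):
        # In SEED RL this call arrives over the RPC layer and is batched.
        return int(np.random.randint(self.num_actions))

    def record(self, transition):
        self.trajectory_queue.put(transition)

def actor(learner, env_steps=100):
    observation = np.zeros(4)                   # placeholder environment state
    for _ in range(env_steps):
        action = learner.infer(observation)     # no local model: inference is central
        observation = observation + 1.0         # placeholder environment step
        learner.record((observation, action))

learner = Learner(num_actions=3)
threads = [threading.Thread(target=actor, args=(learner,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("collected transitions:", learner.trajectory_queue.qsize())</code></pre>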



<p>Google evaluated SEED RL’s efficiency by benchmarking it on the popular Arcade Learning Environment, the Google Research Football environment and several DeepMind Lab environments. The results show they managed to solve a Google Research Football task while training the model at 2.4 million frames per second using 64 Cloud Tensor Processing Unit chips. That’s around 80 times faster than previous frameworks, Google said.</p>



<p>“This results in a significant speed-up in wall-clock time and, because accelerators are orders of magnitude cheaper per operation than CPUs, the cost of experiments is reduced drastically,” Lasse Espeholt, a research engineer at Google Research in Amsterdam, wrote in the company’s AI blog Monday. “We believe SEED RL, and the results presented, demonstrate that reinforcement learning has once again caught up with the rest of the deep learning field in terms of taking advantage of accelerators.”</p>



<p>Constellation Research Inc. analyst Holger Mueller told SiliconANGLE that SEED RL looks to be another example of “reinforcement learning”, which&nbsp;he said is emerging as one of the most promising AI techniques to advance next generation applications.</p>



<p>“When you tweak software to work well with hardware, you usually see major advances and that is what Google is showing here – the combination of its&nbsp;SEED RL library with its TPU architecture,” Mueller said. “Not surprisingly&nbsp;it provides substantial&nbsp;performance gains over conventional solutions. This makes reinforcement learning available to the masses, although users would be locked into the Google Cloud Platform. But AI is served best in the cloud, and GCP is a very good choice for AI apps.”</p>



<p>Google said the code for SEED RL has been open-sourced and made available on Github, together with examples that show how to run it on Google Cloud with graphics processing units.</p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-new-seed-rl-framework-reduces-ai-model-training-costs-by-80/">Google’s new SEED RL framework reduces AI model training costs by 80%</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/googles-new-seed-rl-framework-reduces-ai-model-training-costs-by-80/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Researchers detail TrojAI, a framework for hardening AI models against adversarial attacks</title>
		<link>https://www.aiuniverse.xyz/researchers-detail-trojai-a-framework-for-hardening-ai-models-against-adversarial-attacks/</link>
					<comments>https://www.aiuniverse.xyz/researchers-detail-trojai-a-framework-for-hardening-ai-models-against-adversarial-attacks/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 18 Mar 2020 06:55:22 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[AI models]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[TrojAI]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7524</guid>

					<description><![CDATA[<p>Source: venturebeat.com One way to test machine learning models for robustness is with what’s called a trojan attack, which involves modifying a model to respond to input <a class="read-more-link" href="https://www.aiuniverse.xyz/researchers-detail-trojai-a-framework-for-hardening-ai-models-against-adversarial-attacks/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/researchers-detail-trojai-a-framework-for-hardening-ai-models-against-adversarial-attacks/">Researchers detail TrojAI, a framework for hardening AI models against adversarial attacks</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: venturebeat.com</p>



<p>One way to test machine learning models for robustness is with what’s called a trojan attack, which involves modifying a model to respond to input triggers that cause it to infer an incorrect response. In an attempt to make these tests more repeatable and scalable, researchers at Johns Hopkins University developed a framework dubbed TrojAI, a set of tools that generate triggered data sets and associated models with trojans. They say that it’ll enable researchers to understand the effects of various data set configurations on the generated “trojaned” models, and that it’ll help to comprehensively test new trojan detection methods to harden models.</p>



<p>It’s essential that the AI models enterprises use to make critical decisions are protected against attacks, and this method could help make them more secure.</p>



<p>TrojAI is a set of Python modules that enable researchers to find and generate trojaned AI classification and reinforcement learning models. In the first step — classification — the user configures (1) the type of data poisoning to apply to the dataset of interest, (2) the architecture of the model to be trained, (3) the training parameters of the model, and (4) the number of models to train. The configuration is then ingested by the main program, which generates the desired models. Alternatively, instead of a data set, the user can configure a poisonable environment on which the model will be trained.</p>



<p>A data generation sub-module — datagen — creates a synthetic corpus containing image or text samples while the model generation sub-module — modelgen — trains a set of models that contain a trojan.</p>
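

<p>As a rough illustration of the kind of triggered data such a pipeline produces, the sketch below stamps a small trigger patch into a fraction of training images and relabels them to a target class. It is a generic data-poisoning example, not the TrojAI API.</p>


<pre class="wp-block-code"><code># Illustrative sketch of trigger-based data poisoning (not the TrojAI API).
import numpy as np

def poison(images, labels, target_class, fraction=0.1, seed=0):
    """Stamp a small white patch into a fraction of images and relabel them."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), int(len(images) * fraction), replace=False)
    images[idx, -4:, -4:] = 1.0        # 4x4 trigger patch in the bottom-right corner
    labels[idx] = target_class         # the trojaned model maps the trigger to this class
    return images, labels

# Toy data: 100 grayscale "images" of 28x28 pixels in [0, 1] with 10 classes.
x = np.random.rand(100, 28, 28)
y = np.random.randint(10, size=100)
x_poisoned, y_poisoned = poison(x, y, target_class=0)</code></pre>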



<p>TrojAI collects several metrics when training models on the trojaned data sets or environments, including the performance of the trained model on data for all examples in the test data set that don’t have a trigger; the performance of the trained model for examples that <em>have</em> the embedded trigger; and the performance of the model on clean examples of the classes that were triggered during model training. High performance on all three metrics is intended to provide confidence that the model has been successfully trojaned while maintaining high performance on the original data set for which the model was designed.</p>



<p>In the future, the researchers hope to extend the framework to incorporate additional data modalities such as audio as well as tasks like object detection. They also plan to expand on the library of data sets, architectures, and triggered reinforcement learning environments for testing and production of multiple triggered models, and to account for recent advances in trigger embedding methodologies that are designed to evade detection.</p>



<p>The Johns Hopkins team is far from the only one tackling the challenge of adversarial attacks in machine learning. In February, Google researchers released a paper describing a framework that either detects attacks or pressures the attackers to produce images that resemble the target class of images. Baidu offers a toolbox — Advbox — for generating adversarial examples that’s able to fool models in frameworks like MxNet, Keras, Facebook’s PyTorch and Caffe2, Google’s TensorFlow, and Baidu’s own PaddlePaddle. And MIT’s Computer Science and Artificial Intelligence Laboratory recently released a tool called TextFooler that generates adversarial text to strengthen natural language models.</p>
<p>The post <a href="https://www.aiuniverse.xyz/researchers-detail-trojai-a-framework-for-hardening-ai-models-against-adversarial-attacks/">Researchers detail TrojAI, a framework for hardening AI models against adversarial attacks</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/researchers-detail-trojai-a-framework-for-hardening-ai-models-against-adversarial-attacks/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google launches TensorFlow Quantum that facilitates hybrid AI models</title>
		<link>https://www.aiuniverse.xyz/google-launches-tensorflow-quantum-that-facilitates-hybrid-ai-models/</link>
					<comments>https://www.aiuniverse.xyz/google-launches-tensorflow-quantum-that-facilitates-hybrid-ai-models/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 13 Mar 2020 08:54:35 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AI models]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[TensorFlow]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7404</guid>

					<description><![CDATA[<p>Source: technowize.com Google announced the launch of TensorFlow Quantum, which is based on Machine learning and quantum computing coming together to build hybrid AI models. The TFQ <a class="read-more-link" href="https://www.aiuniverse.xyz/google-launches-tensorflow-quantum-that-facilitates-hybrid-ai-models/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-launches-tensorflow-quantum-that-facilitates-hybrid-ai-models/">Google launches TensorFlow Quantum that facilitates hybrid AI models</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: technowize.com</p>



<p>Google announced the launch of TensorFlow Quantum, which brings machine learning and quantum computing together to build hybrid AI models.</p>



<p>TFQ was built in collaboration with the University of Waterloo, X, and Volkswagen. TensorFlow Quantum is an open-source library for rapid prototyping of quantum ML models.</p>



<p>“TensorFlow Quantum will allow quantum data to be used in building hybrid quantum-classical models by providing the necessary ingredients to mix quantum algorithms and logic designed in&nbsp;Cirq&nbsp;with TensorFlow,” explains Tensorflow blog.</p>



<p>In plainer terms, TFQ provides the necessary tools to bring together classical machine learning and quantum computing. It will help build models of natural or artificial quantum systems by combining quantum data, which is measured in quantum bits, or qubits, with near-term hardware such as&nbsp;<strong>Noisy Intermediate Scale Quantum</strong>&nbsp;(NISQ) processors with ~50 &#8211; 100 qubits, says the Google blog on TFQ.</p>



<p>This approach is called “hybrid quantum-classical modeling,” and it enables researchers to disentangle quantum data and use it to develop better quantum algorithms.</p>



<p>The TFQ library provides the basic tools for developing models that disentangle correlations in quantum data, leading to better quantum algorithms and the discovery of new ones.</p>



<p>Quantum ML models need to be hybrid quantum-classical because quantum processors alone are small and noisy, and NISQ devices need classical processors to work effectively. TensorFlow provides the needed heterogeneous computing models and is a natural platform for experimenting with hybrid quantum-classical algorithms.</p>
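

<p>A minimal hybrid model of this kind, assuming TensorFlow Quantum and Cirq are installed, wraps a parametrized quantum circuit as a Keras layer and post-processes its measurement with a classical layer. Layer names follow the TFQ 0.x API and may change between releases.</p>


<pre class="wp-block-code"><code># Minimal sketch of a hybrid quantum-classical Keras model with TFQ and Cirq.
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol("theta")
model_circuit = cirq.Circuit(cirq.rx(theta)(qubit))    # trainable quantum part

model = tf.keras.Sequential([
    # Quantum data arrives as circuits serialized to string tensors.
    tf.keras.layers.Input(shape=(), dtype=tf.string),
    # PQC runs the parametrized circuit and measures the Z expectation value.
    tfq.layers.PQC(model_circuit, cirq.Z(qubit)),
    # Classical post-processing of the measurement result.
    tf.keras.layers.Dense(1),
])

# Quantum "data": circuits preparing input states, converted to tensors.
data_circuits = tfq.convert_to_tensor([cirq.Circuit(cirq.X(qubit)), cirq.Circuit()])
print(model(data_circuits))</code></pre>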



<p>Quantum computations can be simulated or run on real hardware, as TFQ contains basic building blocks such as qubits, gates, and circuits, and algorithms for NISQ machines can be designed and customized as needed.</p>



<p>The research paper published by Google and its collaborators, Volkswagen and the University of Waterloo, outlines their work:</p>



<p>We demonstrate how one can apply TFQ to tackle advanced quantum learning tasks, including meta-learning, Hamiltonian learning, and sampling thermal states. We hope this framework provides the necessary tools for the quantum computing and machine learning research communities to explore models of both natural and artificial quantum systems, and ultimately discover new quantum algorithms which could potentially yield a quantum advantage.</p>



<p>“Today, TensorFlow Quantum is primarily geared towards executing quantum circuits on classical-quantum circuit simulators. In the future, TFQ will be able to execute quantum circuits on actual quantum processors that are supported by Cirq, including Google’s own processor Sycamore,&#8221; reads the blog.</p>



<p>Machine learning and artificial intelligence have been used in recent years for image processing in cancer detection, for forecasting weather and natural disasters like earthquakes and their after-effects, and even in space. It is hoped that the new quantum ML models will lead to further progress in medicine, communications, and the exploration of the earth’s resources and space.</p>



<p>TensorFlow Quantum was to be released during the&nbsp;TensorFlow Dev Summit, an annual meeting of machine learning practitioners who use the framework, held at Google offices in Silicon Valley, but the event was canceled due to the coronavirus outbreak.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-launches-tensorflow-quantum-that-facilitates-hybrid-ai-models/">Google launches TensorFlow Quantum that facilitates hybrid AI models</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-launches-tensorflow-quantum-that-facilitates-hybrid-ai-models/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Gains In Artificial Intelligence Help Advisors Serve Clients Better</title>
		<link>https://www.aiuniverse.xyz/gains-in-artificial-intelligence-help-advisors-serve-clients-better/</link>
					<comments>https://www.aiuniverse.xyz/gains-in-artificial-intelligence-help-advisors-serve-clients-better/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 09 Mar 2020 09:23:19 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI models]]></category>
		<category><![CDATA[Gains]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Technologies]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7351</guid>

					<description><![CDATA[<p>Source: investors.com As artificial intelligence changes the investment landscape, advisors need to stay one step ahead. Clients crave a human touch, but they also want to save <a class="read-more-link" href="https://www.aiuniverse.xyz/gains-in-artificial-intelligence-help-advisors-serve-clients-better/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/gains-in-artificial-intelligence-help-advisors-serve-clients-better/">Gains In Artificial Intelligence Help Advisors Serve Clients Better</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: investors.com</p>



<p>As artificial intelligence changes the investment landscape, advisors need to stay one step ahead. Clients crave a human touch, but they also want to save money and improve their returns using new technologies. </p>



<p>Meanwhile, companies across all industries seek to harness AI to boost profits. Using deep learning and neural networks to instantly identify subtle patterns enables tech-savvy organizations to race ahead of the pack, creating opportunities for investors who track firms that are integrating AI into their overall strategy.</p>



<p>For advisors, the challenge is weaving AI into their practice without going overboard. It&#8217;s a delicate balance, especially when curious clients question the role of AI on capital markets and its effect on their investments.</p>



<p>&#8220;AI is just one piece of the puzzle,&#8221; said Anthony Saccaro, an advisor in Woodland Hills, Calif. &#8220;It&#8217;s one tool to draw a conclusion, but it&#8217;s not the conclusion.&#8221;</p>



<p>Reflecting on his two decades in the financial planning business, Saccaro notes how consumers increasingly glean knowledge from their computers and phones. He adds that they sometimes put more faith in AI-generated findings and automated tools than is warranted.</p>



<p>&#8220;With any AI model, there&#8217;s a bias behind it,&#8221; he said. &#8220;I&#8217;ll show a client how that might not make sense in their situation,&#8221; even after they plug their data into some analytic platform that spits out an asset allocation or stock selection strategy.</p>



<h2 class="wp-block-heading">Make Better Decisions</h2>



<p>Consumer awareness of AI is soaring. While consumers may not understand the technical underpinnings of self-driving cars, medical devices and other machine learning-enabled technologies, they expect their advisor to take advantage of cutting-edge advances in the field.</p>



<p>&#8220;AI does come up occasionally in conversation with clients,&#8221; said Vikram Chugh, chief operating officer at Robertson Stephens, a New York City-based wealth management firm. &#8220;Everybody has heard of machine learning and other buzzwords of the day.&#8221;</p>



<p>He finds that most clients aren&#8217;t sure how AI works or what it does. Instead, they ask, &#8220;How can I capitalize on it?&#8221;</p>



<p>Chugh sees two ways that AI affects clients and enhances their experience working with an advisor. First, there are predictive tools that help portfolio managers assess investors&#8217; feelings about risk and their range of reactions to market swings. Such tools can aid advisors in making decisions and tailoring strategies for clients.</p>



<p>Second, he says that by gathering more information from clients, his firm can deliver better, more customized service. Examples include inputting data about a client&#8217;s goals and objectives, household income and other biographical details.</p>



<p>&#8220;Getting this information is a back-and-forth process,&#8221; Chugh said. &#8220;You have to show clients what you&#8217;ll do with this information. If we see how much they earn and spend and who owns a Tesla, that can be incorporated into their investment management.&#8221;</p>



<h2 class="wp-block-heading">Focus On Outcomes</h2>



<p>Chugh is not alone in his quest to put AI to work to improve client service. Other investment management executives want to scale their AI capabilities so that their advisors deliver more value to clients.</p>



<p>&#8220;To me, AI is an enabler of how we can provide more meaningful insights and advice,&#8221; said Hamesh Chawla, chief technology officer at Edelman Financial Engines in Sunnyvale, Calif. For example, advisors can spot what Chawla calls &#8220;life event triggers&#8221; such as a client&#8217;s interest in buying a home, which in turn could spark a conversation about mortgage options.</p>



<p>While clients may express concern to their advisor about cybersecurity and safeguarding their personal data, they are less worried about the role of AI in influencing the relationship. They may have only a vague idea of the practical impact of algorithms or analytics on their portfolio.</p>



<p>&#8220;Clients don&#8217;t have any direct concerns with AI,&#8221; Chawla said. &#8220;Outcomes are what they care about.&#8221;</p>



<p>Similarly, most advisors don&#8217;t feel compelled to master the intricacies of data analytics or machine learning. They simply want to put these tools to work with a minimum of fuss.</p>



<p>&#8220;If we bombard advisors with new technologies, it takes away from their time with clients,&#8221; Chawla said. &#8220;So advisors see the dashboard and a trigger, and that enables them to engage better with clients.&#8221;</p>



<p>Some startups are pinning their hopes on applying AI in novel ways. For instance, Wedmont Private Capital recently launched a tech-enabled alternative to the traditional wealth management model, charging high-net-worth clients a flat fee that includes direct indexing in which their portfolios aim to replicate a market index.</p>



<p>&#8220;It involves us using an AI optimization engine to build custom portfolios that mirror a specific benchmark in the market,&#8221; said James Pelletier, co-founder of West Chester, Pa.-based Wedmont. &#8220;Very rarely do clients ask about AI, but when we explain how the system works, they tend to be pretty comfortable with it.&#8221;</p>
<p>The post <a href="https://www.aiuniverse.xyz/gains-in-artificial-intelligence-help-advisors-serve-clients-better/">Gains In Artificial Intelligence Help Advisors Serve Clients Better</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/gains-in-artificial-intelligence-help-advisors-serve-clients-better/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Researchers find way to boost self-supervised AI models’ robustness</title>
		<link>https://www.aiuniverse.xyz/researchers-find-way-to-boost-self-supervised-ai-models-robustness/</link>
					<comments>https://www.aiuniverse.xyz/researchers-find-way-to-boost-self-supervised-ai-models-robustness/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 27 Feb 2020 06:44:15 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AI models]]></category>
		<category><![CDATA[Future]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[researchers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7085</guid>

					<description><![CDATA[<p>Source: venturebeat.com In self-supervised learning — an AI technique where the training data is automatically labeled by a feature extractor — the extractor can exploit low-level features (known as <a class="read-more-link" href="https://www.aiuniverse.xyz/researchers-find-way-to-boost-self-supervised-ai-models-robustness/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/researchers-find-way-to-boost-self-supervised-ai-models-robustness/">Researchers find way to boost self-supervised AI models’ robustness</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: venturebeat.com</p>



<p>In self-supervised learning — an AI technique where the training data is automatically labeled by a feature extractor — the extractor can exploit low-level features (known as “shortcuts”) that cause it to ignore useful representations. In search of a technique that might help remove these shortcuts autonomously, researchers at Google Brain developed a framework — a “lens” — that enabled self-supervised models to outperform those trained in a conventional fashion.</p>



<p>As the researchers explain in a preprint paper published this week, in self-supervised learning, extractor-generated labels are used to create a pretext task that requires learning abstract, semantic features. A model pretrained on the task can then be transferred to tasks for which labels are expensive to obtain, for example by fine-tuning the model for a given target task. But defining pretext tasks is often challenging because models are biased toward exploiting the simplest features, like logos, watermarks, and color fringes caused by camera lenses.</p>



<p>Fortunately, the features that a model can use to solve a pretext task can also be used by an adversary to make the pretext task harder. The researchers’ framework — which targets self-supervised computer vision models — processes images with a lightweight image-to-image model called a “lens” that is trained adversarially to reduce pretext task performance. Once trained, the lens can be applied to unseen images, so it can be used when transferring the model to a new task. In addition, the lens can help visualize the shortcuts by spotlighting the differences between the input and output images, providing insights into how shortcuts differ.</p>



<p>In experiments, the researchers trained a self-supervised model on an open source data set — CIFAR-10 — and tasked it with predicting the correct orientation of images rotated slightly. To test the lens, they added shortcuts to the input images with directional information that let the model solve the rotation task without having to learn object-level features. The researchers report that representations the model learned (without the lens) from the synthetic shortcuts performed poorly, while feature extractors learned from the lens performed “dramatically” better overall. </p>
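

<p>For readers unfamiliar with the setup, a rotation pretext task can be sketched generically as follows (this is an illustrative PyTorch toy, not the paper’s code or its lens): each image is rotated by a random multiple of 90 degrees, and the network is trained to predict which rotation was applied.</p>


<pre class="wp-block-code"><code># Illustrative sketch of a rotation-prediction pretext task (not the paper's code).
import torch
import torch.nn as nn

def make_rotation_batch(images):
    """Rotate each image by 0/90/180/270 degrees; the rotation index is the label."""
    rotations = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, rotations)])
    return rotated, rotations

encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(16, 4)    # predict which of the four rotations was applied
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()))

images = torch.rand(8, 3, 32, 32)    # placeholder CIFAR-10-sized batch
rotated, labels = make_rotation_batch(images)
loss = nn.functional.cross_entropy(head(encoder(rotated)), labels)
loss.backward()
optimizer.step()</code></pre>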



<p>In a second test, the team trained a model on over a million images in the open source ImageNet corpus and had it predict the relative location of one or more patches contained within the images. They say that for all tested tasks, adding the lens led to an improvement over the baseline.</p>



<p>“Our results show that the benefit of automatic shortcut removal using an adversarially trained lens generalizes across pretext tasks and across data sets. Furthermore, we find that gains can be observed across a wide range of feature extractor capacities,” wrote the study’s coauthors. “Apart from improved representations, our approach allows us to visualize, quantify, and compare the features learned by self-supervision. We confirm that our approach detects and mitigates shortcuts observed in prior work and also sheds light on issues that were less known.”</p>



<p>In future research, they plan to explore new lens architectures and see whether the technique can be applied to further improve supervised learning algorithms.</p>
<p>The post <a href="https://www.aiuniverse.xyz/researchers-find-way-to-boost-self-supervised-ai-models-robustness/">Researchers find way to boost self-supervised AI models’ robustness</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/researchers-find-way-to-boost-self-supervised-ai-models-robustness/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Fighting the Risks Associated with Transparency of AI Models</title>
		<link>https://www.aiuniverse.xyz/fighting-the-risks-associated-with-transparency-of-ai-models/</link>
					<comments>https://www.aiuniverse.xyz/fighting-the-risks-associated-with-transparency-of-ai-models/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 08 Jan 2020 08:18:31 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Black Box]]></category>
		<category><![CDATA[AI models]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[data hacking]]></category>
		<category><![CDATA[data security]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6018</guid>

					<description><![CDATA[<p>Source: enterprisetalk.com As firms move towards the adoption of machine learning, Artificial Intelligence (AI) is generating substantial security risks. One of the most significant risks associated with AI remains <a class="read-more-link" href="https://www.aiuniverse.xyz/fighting-the-risks-associated-with-transparency-of-ai-models/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/fighting-the-risks-associated-with-transparency-of-ai-models/">Fighting the Risks Associated with Transparency of AI Models</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: enterprisetalk.com</p>



<p>As firms move towards the adoption of machine learning, Artificial Intelligence (AI) is generating substantial security risks.</p>



<p>One of the most significant risks associated with AI remains ML-based models operating as “black boxes.” Deep learning models composed of artificial neural networks make it hard to explain how automated inferences are derived, and this opacity increases the risks associated with AI models. ML-based applications may inadvertently be influenced by biases and other adverse factors while producing automated decisions. To mitigate the risks, firms are starting to demand enhanced transparency into how ML operates, focusing on the entire workflow in which models are trained, built, and deployed.</p>



<p>There are many frameworks for maintaining the algorithmic transparency of AI models to ensure explainability, interpretability, and accountability. Business demands flexibility, but IT needs control, which has pushed firms to rely on different frameworks to ensure algorithmic transparency. All these tools and techniques help data scientists generate explanations of which data inputs drove different algorithmic inferences under various circumstances. However, these frameworks can be easily hacked, reducing trust in the explanations they generate and exposing the risks they create:</p>



<p><strong>Algorithmic deceptions may sneak into the public record</strong>&nbsp;– Dishonest parties may hack the narrative explanations generated by these algorithms to obscure or misrepresent any biases. In other words, “perturbation-based” approaches can be tricked&nbsp;into creating “safe” reasons for algorithmic behaviors that are definitely biased.</p>



<p><strong>Technical vulnerabilities may get disclosed accidentally</strong>&nbsp;– Revealing information&nbsp;about machine learning algorithms can make them highly vulnerable to attacks.&nbsp;Complete transparency into how machine learning models function will expose them to attacks designed either to trick the inferences from live operational data or by injecting bogus data into their training workflows.</p>



<p><strong>Intellectual property theft may be encouraged</strong> – Entire ML algorithms and training data sets can get stolen through their APIs and other features. Transparency regarding how ML models operate may enable the underlying models to be reconstructed with full reliability. Similarly, transparency will also make it possible to partially or entirely reconstruct training data sets, which is an attack known as “model inversion.”</p>



<p><strong>Privacy violations may run rampant</strong>&nbsp;– ML transparency may make it possible for unauthorized third parties to ascertain a particular individual’s data record through a “membership inference attack,” enabling hackers to unlock considerable amounts of privacy-sensitive data.</p>



<p>To mitigate the technical risks of algorithmic transparency, enterprise data professionals should adopt the following strategies:</p>



<ul class="wp-block-list"><li>Firms should have control access to model outputs and monitor to prevent data abuse.</li><li>Add controlled amounts of “perturbations” into the data used to train transparent&nbsp;ML&nbsp;models to make it difficult for adversarial hackers to use model manipulations to gain insight into the original raw data itself.</li><li>Insert intermediary layers between the final transparent ML&nbsp;models and the raw data, making it difficult for an unauthorized third party to recover the full training data from the explanations generated against final models.</li><li>In addition to these risks of a technical nature, enterprises get exposed to more lawsuits and regulatory scrutiny.</li></ul>



<p>In addition to these technical risks, enterprises are also exposed to more lawsuits and regulatory scrutiny. Without sacrificing ML transparency, firms need a clear strategy for mitigating these broader business risks.</p>



<p>Enterprises will need to monitor these explanations for irregularities continually, to derive evidence that they or the models have been hacked. This is a critical concern because trust in the AI technology will come tumbling down if the enterprises that build and train ML models can’t vouch for the transparency of the models’ official documentation.</p>
<p>The post <a href="https://www.aiuniverse.xyz/fighting-the-risks-associated-with-transparency-of-ai-models/">Fighting the Risks Associated with Transparency of AI Models</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/fighting-the-risks-associated-with-transparency-of-ai-models/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google proposes new metrics for evaluating AI-generated audio and video quality</title>
		<link>https://www.aiuniverse.xyz/google-proposes-new-metrics-for-evaluating-ai-generated-audio-and-video-quality/</link>
					<comments>https://www.aiuniverse.xyz/google-proposes-new-metrics-for-evaluating-ai-generated-audio-and-video-quality/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 23 Oct 2019 09:17:45 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AI models]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[audio and video]]></category>
		<category><![CDATA[FAD]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[researchers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4824</guid>

					<description><![CDATA[<p>Source: venturebeat.com What’s the best way to measure the quality of media generated from whole cloth by AI models? It’s not easy. One of the most popular metrics for <a class="read-more-link" href="https://www.aiuniverse.xyz/google-proposes-new-metrics-for-evaluating-ai-generated-audio-and-video-quality/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-proposes-new-metrics-for-evaluating-ai-generated-audio-and-video-quality/">Google proposes new metrics for evaluating AI-generated audio and video quality</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: venturebeat.com</p>



<p>What’s the best way to measure the quality of media generated from whole cloth by AI models? It’s not easy. One of the most popular metrics for images is the Fréchet Inception Distance (FID), which takes photos from both the target distribution and the model being evaluated and uses an AI object recognition system to capture important features and suss out similarities. But although several metrics for synthesized audio and video have been proposed, none has yet been widely adopted.</p>



<p>That’s why researchers hailing from Google are throwing their hats into the ring with what they call the Fréchet Audio Distance (FAD) and Fréchet Video Distance (FVD), which measure the holistic quality of synthesized audio and video, respectively. The researchers claim that unlike peak signal-to-noise ratio, the structural similarity index, or other metrics that have been proposed, FVD looks at videos in their entirety. As for FAD, they say it’s reference-free and can be used on any type of audio, in contrast to metrics that require time-aligned ground truth signals, like the source-to-distortion ratio (SDR).</p>



<p> “Access to robust metrics for evaluation of generative models is crucial for measuring (and making) progress in the fields of audio and video understanding, but currently no such metrics exist,” wrote software engineers Kevin Kilgour and Thomas Unterthiner in a blog post. “Clearly, some [generated] videos shown below look more realistic than others, but can the differences between them be quantified?” </p>



<p>As it turns out: Yes. In an FAD evaluation, the separation between the distributions of two sets of audio samples — generated and real — is measured. As the magnitude of distortions increases, the overlap between the distributions correspondingly decreases, indicating that the synthetic samples are relatively low in quality.</p>
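

<p>Concretely, FAD and FVD fit multivariate Gaussians to embedding statistics of the real and generated samples and compute the Fréchet distance between them. The sketch below shows that computation; the random “embeddings” are placeholders, since the real metrics use learned audio and video embedding networks.</p>


<pre class="wp-block-code"><code># Sketch of the Fréchet distance between two sets of embeddings.
import numpy as np
from scipy import linalg

def frechet_distance(real_emb, gen_emb):
    mu_r, mu_g = real_emb.mean(axis=0), gen_emb.mean(axis=0)
    cov_r = np.cov(real_emb, rowvar=False)
    cov_g = np.cov(gen_emb, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_g).real    # matrix square root of the product
    return float(np.sum((mu_r - mu_g) ** 2) + np.trace(cov_r + cov_g - 2.0 * covmean))

real = np.random.randn(1000, 16)                  # placeholder embeddings
generated = np.random.randn(1000, 16) + 0.5       # a shifted distribution scores worse
print(frechet_distance(real, generated))</code></pre>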



<p>To evaluate how closely FAD and FVD track human judgement, Kilgour, Unterthiner, and colleagues performed a large-scale study involving human evaluators. Here, the evaluators were tasked with examining 10,000 video pairs and 69,000 5-second audio clips. For the FAD, specifically, they were asked to compare the effect of two different distortions on the same audio segment, and both the pair of distortions that they compared and the order in which they appeared were randomized. The collected set of pairwise evaluations was then ranked using a model that estimates a worth value for each parameter configuration.</p>



<p>The team asserts that a comparison of the worth values to the FAD demonstrates that the FAD correlates “quite well” with human judgement.</p>



<p>“We are currently making great strides in generative [AI] models,” said Kilgour and Unterthiner. “FAD and FVD will help us [keep] this progress measurable and will hopefully lead us to improve our models for audio and video generation.” </p>
<p>The post <a href="https://www.aiuniverse.xyz/google-proposes-new-metrics-for-evaluating-ai-generated-audio-and-video-quality/">Google proposes new metrics for evaluating AI-generated audio and video quality</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-proposes-new-metrics-for-evaluating-ai-generated-audio-and-video-quality/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
