<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Google Brain Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/google-brain/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/google-brain/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Thu, 19 Mar 2020 06:29:05 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>Google AI open-sources EfficientDet for state-of-the-art object detection</title>
		<link>https://www.aiuniverse.xyz/google-ai-open-sources-efficientdet-for-state-of-the-art-object-detection/</link>
					<comments>https://www.aiuniverse.xyz/google-ai-open-sources-efficientdet-for-state-of-the-art-object-detection/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 19 Mar 2020 06:29:03 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AI tools]]></category>
		<category><![CDATA[EfficientDet]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Google Brain]]></category>
		<category><![CDATA[Google Cloud]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7551</guid>

					<description><![CDATA[<p>Source: venturebeat.com Members of the Google Brain team and Google AI this week open-sourced EfficientDet, an AI tool that achieves state-of-the-art object detection while using less compute. Creators of the system say it also achieves faster performance when used with CPUs or GPUs than other popular object detection models like YOLO or AmoebaNet. When tasked with semantic segmentation, another <a class="read-more-link" href="https://www.aiuniverse.xyz/google-ai-open-sources-efficientdet-for-state-of-the-art-object-detection/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-ai-open-sources-efficientdet-for-state-of-the-art-object-detection/">Google AI open-sources EfficientDet for state-of-the-art object detection</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: venturebeat.com</p>



<p>Members of the Google Brain team and Google AI this week open-sourced EfficientDet, an AI tool that achieves state-of-the-art object detection while using less compute. Creators of the system say it also runs faster on CPUs and GPUs than other popular object detection models like YOLO or AmoebaNet.</p>



<p>When tasked with semantic segmentation, another task related to object detection, EfficientDet also achieves exceptional performance. Semantic segmentation experiments were conducted with the PASCAL visual object challenge data set.</p>



<p>EfficientDet is the next-generation version of EfficientNet, a family of advanced image classification models made available last year for Coral boards. Google engineers Mingxing Tan, Ruoming Pang, and Quoc Le detailed EfficientDet in a paper first published last fall, but revised and updated it on Sunday to include code.</p>



<p>“Aiming at optimizing both accuracy and efficiency, we would like to develop a family of models that can meet a wide spectrum of resource constraints,” the paper, which examines neural network architecture design for object detection, reads.</p>



<p>Authors say existing methods of scaling object detection often sacrifice accuracy or are resource intensive. EfficientDet offers a less expensive, less resource-hungry way to deploy object detection at the edge or in the cloud, using a method that “uniformly scales the resolution, depth, and width for all backbone, feature network, and box/class prediction networks at the same time.”</p>
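<p>As an illustrative sketch (not the authors’ code), that compound scaling idea can be expressed as a single coefficient that jointly grows input resolution, feature-network width and depth, and prediction-head depth; the constants below follow the scaling equations reported in the paper:</p>

```python
def efficientdet_config(phi):
    """Compound scaling sketch: one coefficient phi jointly scales input
    resolution, BiFPN width/depth, and box/class prediction-head depth."""
    return {
        "input_resolution": 512 + phi * 128,      # larger images as phi grows
        "bifpn_width": int(64 * (1.35 ** phi)),   # feature-network channels
        "bifpn_depth": 3 + phi,                   # stacked BiFPN layers
        "head_depth": 3 + phi // 3,               # box/class network layers
    }
```

<p>Setting phi = 0 recovers the smallest model in the family; increasing it trades compute for accuracy along a single dial instead of hand-tuning each dimension.</p>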



<p>“The large model sizes and expensive computation costs deter their deployment in many real-world applications such as robotics and self-driving cars where model size and latency are highly constrained,” the paper reads. “Given these real-world resource constraints, model efficiency becomes increasingly important for object detection.”</p>



<p>EfficientDet’s optimizations take inspiration from Tan and Le’s original work on EfficientNet and propose joint compound scaling for backbone and feature networks. In EfficientDet, a bidirectional feature pyramid network (BiFPN) acts as the feature network, and an ImageNet-pretrained EfficientNet acts as the backbone network.</p>



<p>EfficientDet optimizes for cross-scale connections in part by removing nodes that only have one input edge to create a simpler bidirectional network. It also relies on the one-stage detector paradigm, an object detector known for efficiency and simplicity.</p>



<p>“We propose to add an additional weight for each input during feature fusion, and let the network to learn the importance of each input feature,” the paper reads.</p>
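<p>A minimal NumPy sketch of that weighted feature fusion (simplified from the paper’s “fast normalized fusion”; the function name and epsilon value here are illustrative):</p>

```python
import numpy as np

def weighted_fusion(features, weights, eps=1e-4):
    """Fuse feature maps with a learnable scalar weight per input:
    ReLU keeps the weights non-negative, then they are normalized so
    the network can learn each input feature's relative importance."""
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)  # ReLU
    w = w / (w.sum() + eps)                                # normalize
    return sum(wi * f for wi, f in zip(w, features))
```

<p>With equal weights this reduces to a plain average; during training the weights drift apart so more informative scales dominate the fused map.</p>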



<p>This is the latest object detection news from Google, whose Google Cloud Vision system for object detection recently removed male and female label options for its publicly available API.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-ai-open-sources-efficientdet-for-state-of-the-art-object-detection/">Google AI open-sources EfficientDet for state-of-the-art object detection</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-ai-open-sources-efficientdet-for-state-of-the-art-object-detection/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google Brain and DeepMind researchers attack reinforcement learning efficiency</title>
		<link>https://www.aiuniverse.xyz/google-brain-and-deepmind-researchers-attack-reinforcement-learning-efficiency/</link>
					<comments>https://www.aiuniverse.xyz/google-brain-and-deepmind-researchers-attack-reinforcement-learning-efficiency/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 19 Feb 2020 05:50:58 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[DeepMind]]></category>
		<category><![CDATA[Google Brain]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[Robotics]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6878</guid>

					<description><![CDATA[<p>Source: venturebeat.com Reinforcement learning, which spurs AI to complete goals using rewards or punishments, is a form of training that’s led to gains in robotics, speech synthesis, and more. Unfortunately, it’s data-intensive, which motivated research teams — one from Google Brain (one of Google’s AI research divisions) and the other from Alphabet’s DeepMind — to prototype more <a class="read-more-link" href="https://www.aiuniverse.xyz/google-brain-and-deepmind-researchers-attack-reinforcement-learning-efficiency/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-brain-and-deepmind-researchers-attack-reinforcement-learning-efficiency/">Google Brain and DeepMind researchers attack reinforcement learning efficiency</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: venturebeat.com</p>



<p>Reinforcement learning, which spurs AI to complete goals using rewards or punishments, is a form of training that’s led to gains in robotics, speech synthesis, and more. Unfortunately, it’s data-intensive, which motivated research teams — one from Google Brain (one of Google’s AI research divisions) and the other from Alphabet’s DeepMind — to prototype more efficient means of executing it. In a pair of preprint papers, the researchers propose Adaptive Behavior Policy Sharing (ABPS), an algorithm that allows the sharing of experience adaptively selected from a pool of AI agents, and a framework — Universal Value Function Approximators (UVFA) — that simultaneously learns directed exploration policies with the same AI, with different trade-offs between exploration and exploitation.</p>



<p>The teams claim ABPS achieves superior performance in several Atari games, reducing variance on top agents by 25%. As for UVFA, it doubles the performance of base agents in “hard exploration” in many of the same games while maintaining a high score across the remaining games; it’s the first algorithm to achieve a high score in Pitfall without human demonstrations or hand-crafted features.</p>



<h2 class="wp-block-heading">ABPS</h2>



<p>As the researchers explain, reinforcement learning faces practical constraints in real-world applications because it’s often expensive and time-consuming to perform, computationally speaking. The tuning of hyperparameters — parameters whose values are set before the learning process begins — is key to optimizing reinforcement learning algorithms, but it requires data collection through interactions with the environment.</p>



<p>ABPS aims to expedite this by allowing experience sharing from a behavior policy (i.e., a state-action mapping, where a “state” represents the state of the world and an “action” refers to which action should be taken) selected from several agents trained with different hyperparameters. Specifically, it incorporates a reinforcement learning agent that selects an action from a legal set according to a policy, after which it receives a reward and an observation that’s determined by the next state.</p>



<p>Training the aforementioned agent involves generating a set of hyperparameters, where a pool of AI architectures and optimization hyperparameters such as learning rate, decay period, and more are selected. The goal is to find the set whose trained agent achieves the best evaluation results, while at the same time improving data efficiency in hyperparameter tuning by training agents simultaneously and selecting only one behavior agent to be deployed at each step.</p>



<p>The policy of the selected agent is used to sample actions and the transitions are stored in a shared space, which is constantly evaluated to reduce the frequency of policy selection. An ensemble of agents is obtained at the end of training, and from it, one or more top-performing agents are chosen to be deployed for serving. Instead of examining the behavior policy reward collected during training, a separate online evaluation for 50 episodes is run for every agent at each training epoch, so that the online evaluation reward reflects the performance of the agent in the pool.</p>
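<p>The selection step can be pictured as a simple bandit-style rule. The actual ABPS algorithm is more sophisticated, so the epsilon-greedy version below is only a hedged stand-in with illustrative names:</p>

```python
import random

def select_behavior_agent(eval_rewards, epsilon=0.1):
    """Choose which agent in the pool generates experience next:
    usually the current best by online evaluation reward, but
    occasionally a random agent, so weaker agents still get tried."""
    if random.random() < epsilon:
        return random.randrange(len(eval_rewards))
    return max(range(len(eval_rewards)), key=lambda i: eval_rewards[i])
```

<p>Because every agent in the pool trains on the shared transitions regardless of which one acted, the pool amortizes the cost of environment interaction across all hyperparameter candidates.</p>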



<p>In an experiment, the team trained an ensemble of four agents, with each using one of the candidate architectures on Pong and Breakout, and an ensemble of eight agents with six using variations of small architectures on Boxing. They report that ABPS methods achieved better performance on all three games and that random policy selection resulted in the same level of performance, even with the same number of environment actions as a single agent.</p>



<h2 class="wp-block-heading">UVFA</h2>



<p>Exploration remains one of the major challenges in reinforcement learning, in part because agents fed weak rewards sometimes fail to learn tasks. UVFA doesn’t solve this outright, but it attempts to address it by jointly learning separate exploration and exploitation policies derived from the same AI, in such a way that the exploitative policy can concentrate on maximizing the extrinsic reward (solving the task at hand) while the exploratory ones keep exploring.</p>



<p>As the researchers explain, UVFA’s learning of exploratory policies serves to build a shared architecture that continues to develop even in the absence of extrinsic, or natural, rewards. Reinforcement learning helps to approximate an optimal function corresponding to several intrinsic rewards, encouraging agents to visit all states in an environment while periodically revisiting familiar (but potentially not fully explored) states over several episodes.</p>



<p>It’s achieved with two modules: an episodic novelty module and an optional life-long novelty module. The episodic novelty module contains episodic memory and an embedding function that maps the current observation to a learned representation, such that at every step, the agent computes an episodic intrinsic reward and appends the state corresponding to the current observation to memory. As for the life-long novelty module, it provides a signal to control the amount of exploration across multiple episodes.</p>



<p>Concretely, the intrinsic reward is fed directly as an input to the agent, and the agent maintains an internal state representation that summarizes its history of all inputs — state, action, and rewards — within an episode. Importantly, the reward doesn’t vanish over time, ensuring that the learned policy is always partially driven by it.</p>



<p>In experiments, the team reports that the proposed agent achieved high scores in all Atari “hard-exploration” games, including Pitfall, while still maintaining a high average score over a suite of benchmark games. By leveraging large amounts of compute over the course of days running on distributed training architectures that collect experience from actors in parallel on separate environments, they say UVFA enables agents to exhibit “remarkable” performance.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-brain-and-deepmind-researchers-attack-reinforcement-learning-efficiency/">Google Brain and DeepMind researchers attack reinforcement learning efficiency</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-brain-and-deepmind-researchers-attack-reinforcement-learning-efficiency/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google Brain’s AI achieves state-of-the-art text summarization performance</title>
		<link>https://www.aiuniverse.xyz/google-brains-ai-achieves-state-of-the-art-text-summarization-performance/</link>
					<comments>https://www.aiuniverse.xyz/google-brains-ai-achieves-state-of-the-art-text-summarization-performance/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 24 Dec 2019 07:42:08 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AI achieves]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[automatic summarization]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Google Brain]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5794</guid>

					<description><![CDATA[<p>Source: venturebeat.com Summarizing text is a task at which machine learning algorithms are improving, as evidenced by a recent paper published by Microsoft. That’s good news — automatic summarization systems promise to cut down on the amount of message-reading enterprise workers do, which one survey estimates amounts to 2.6 hours each day. Not to be outdone, a Google <a class="read-more-link" href="https://www.aiuniverse.xyz/google-brains-ai-achieves-state-of-the-art-text-summarization-performance/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-brains-ai-achieves-state-of-the-art-text-summarization-performance/">Google Brain’s AI achieves state-of-the-art text summarization performance</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: venturebeat.com</p>



<p>Summarizing text is a task at which machine learning algorithms are improving, as evidenced by a recent paper published by Microsoft. That’s good news — automatic summarization systems promise to cut down on the amount of message-reading enterprise workers do, which one survey estimates amounts to 2.6 hours each day.</p>



<p>Not to be outdone, a Google Brain and Imperial College London team built a system — Pre-training with Extracted Gap-sentences for Abstractive SUmmarization Sequence-to-sequence, or Pegasus — that leverages Google’s Transformers architecture combined with pretraining objectives tailored for abstractive text generation. They say it achieves state-of-the-art results in 12 summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills, and that it shows “surprising” performance on low-resource summarization, surpassing previous top results on six data sets with only 1,000 examples.</p>



<p>As the researchers point out, abstractive text summarization aims to generate accurate and concise summaries from input documents, in contrast to extractive techniques. Rather than merely copying fragments from the input, abstractive summarization may produce novel words or cover principal information such that the output remains linguistically fluent.</p>



<p>Transformers are a type of neural architecture introduced in a paper by researchers at Google Brain, Google’s AI research division. As do all deep neural networks, they contain functions (neurons) arranged in interconnected layers that transmit signals from input data and slowly adjust the synaptic strength (weights) of each connection — that’s how all AI models extract features and learn to make predictions. But Transformers uniquely have attention. Every output element is connected to every input element, and the weightings between them are calculated dynamically.</p>
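<p>The core of that dynamic weighting is scaled dot-product attention, sketched here in NumPy as a generic illustration of the mechanism (not Google’s implementation):</p>

```python
import numpy as np

def attention(Q, K, V):
    """Every output element attends to every input element: pairwise
    affinities are computed on the fly, softmax-normalized per row,
    and used to mix the value vectors."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # pairwise affinities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)            # rows sum to 1
    return w @ V
```

<p>Because the weights are recomputed for each input, the model decides at run time which tokens matter to which, rather than relying on fixed connectivity.</p>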



<p>The team devised a training task in which whole, and putatively important, sentences within documents were masked. The AI had to fill in the gaps by drawing on web and news articles, including those contained within a new corpus (HugeNews) the researchers compiled.</p>
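<p>The sentence-selection step can be approximated with a crude importance score. The paper scores candidate sentences against the rest of the document with ROUGE, so the plain word-overlap heuristic below is a hedged simplification:</p>

```python
def select_gap_sentences(sentences, mask_ratio=0.3):
    """Pick 'important' sentences to mask during pretraining: score each
    sentence by word overlap with the rest of the document (a rough
    ROUGE-1 stand-in) and take the top fraction."""
    def score(i):
        words = set(sentences[i].lower().split())
        rest = {w for j, s in enumerate(sentences) if j != i
                for w in s.lower().split()}
        return len(words & rest) / max(len(words), 1)
    n = max(1, int(len(sentences) * mask_ratio))
    return sorted(range(len(sentences)), key=score, reverse=True)[:n]
```

<p>The model is then trained to regenerate the masked sentences from the remaining text, which makes the pretraining objective resemble summarization itself.</p>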



<p>In experiments, the team selected their best-performing Pegasus model — one with 568 million parameters, or variables learned from historical data — trained on either 750GB of text extracted from 350 million web pages (Common Crawl) or on HugeNews, which spans 1.5 billion articles totaling 3.8TB collected from news and news-like websites. (The researchers say that in the case of HugeNews, a whitelist of domains ranging from high-quality news publishers to lower-quality sites was used to seed a web-crawling tool.)</p>



<p>Pegasus achieved high linguistic quality in terms of fluency and coherence, according to the researchers, and it didn’t require countermeasures to mitigate disfluencies. Moreover, in a low-resource setting with just 100 example articles, it generated summaries at a quality comparable to a model that had been trained on a full data set ranging from 20,000 to 200,000 articles.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-brains-ai-achieves-state-of-the-art-text-summarization-performance/">Google Brain’s AI achieves state-of-the-art text summarization performance</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-brains-ai-achieves-state-of-the-art-text-summarization-performance/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The Amazing Ways Google Uses Deep Learning AI</title>
		<link>https://www.aiuniverse.xyz/the-amazing-ways-google-uses-deep-learning-ai/</link>
					<comments>https://www.aiuniverse.xyz/the-amazing-ways-google-uses-deep-learning-ai/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 09 Aug 2017 10:29:43 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI technologies]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Google Brain]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=526</guid>

					<description><![CDATA[<p>Source &#8211; forbes.com Deep learning is the area of artificial intelligence where the real magic is happening right now. Traditionally computers, while being very fast, have not been very smart – they have no ability to learn from their mistakes and have to be given precise instructions in order to carry out any task. Deep learning involves building <a class="read-more-link" href="https://www.aiuniverse.xyz/the-amazing-ways-google-uses-deep-learning-ai/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-amazing-ways-google-uses-deep-learning-ai/">The Amazing Ways Google Uses Deep Learning AI</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211;<strong> forbes.com</strong></p>
<p>Deep learning is the area of artificial intelligence where the real magic is happening right now. Traditionally, computers, while very fast, have not been very smart – they have no ability to learn from their mistakes and must be given precise instructions to carry out any task.</p>
<p>Deep learning involves building artificial neural networks which attempt to mimic the way organic (living) brains sort and process information. The “deep” in deep learning signifies the use of many layers of neural networks stacked on top of each other. This data processing configuration is known as a deep neural network, and its complexity means it is able to process data to a more thorough and refined degree than other AI technologies which have come before it.</p>
<p>Deep learning is already driving innovation at the cutting edge of artificial intelligence and it can be seen in many applications today. However, as data volumes continue to increase and processing technology becomes more affordable, many more sectors of society are likely to be impacted. Here’s a look at how one of the pioneers &#8211; Google &#8211; is already using it across many of its products and services.</p>
<div id="attachment_466915567" class="wp-caption alignnone"><img decoding="async" class="dam-image getty wp-image-466915567 size-large" src="https://specials-images.forbesimg.com/imageserve/466915567/960x0.jpg?fit=scale" alt="The Amazing Ways How Google Uses Deep Learning AI" data-height="651" data-width="960" /></div>
<p><strong>Why is Google interested in deep learning?</strong></p>
<p>Google has been a powerful force in championing the use of deep learning – a technology now so prevalent in cutting edge applications that its name is pretty much synonymous with artificial intelligence. There’s a simple reason for this – it works. Putting deep learning to work has enabled data scientists to crack a number of difficult cases which had proved challenging for decades, such as speech and image recognition, and natural language generation.</p>
<p>Its first publicly discussed exploration of the possibilities of deep learning began with the Google Brain project in 2011. The following year, Google announced it had built a neural network, designed to simulate human cognitive processes, running on 16,000 computers, which was capable, after studying around 10 million images, of identifying cats.</p>
<p>In 2014, Google acquired the UK-based deep learning startup DeepMind.</p>
<p>DeepMind pioneered work connecting existing machine learning techniques to cutting-edge research in neuroscience, leading to systems that more accurately resembled “real” intelligence (i.e., brains). DeepMind was responsible for the creation of AlphaGo, demonstrating, first with video games and later with the board game Go, the ability of its algorithms to learn a task and become increasingly good at it.</p>
<p><strong>What does Google use deep learning for across its mail services?</strong></p>
<p>While proving the concept in laboratories and games contests, it was also quietly rolled out across many of Google’s services.</p>
<p>Its first practical use was in image recognition, where it was put to work sorting through the millions of images uploaded to the parts of the internet that Google indexes. It does this to classify them more accurately and, in turn, give users more accurate search results. Google’s latest deep learning breakthrough in image analytics is image enhancement: restoring or filling in detail missing from images by extrapolating from the data that is present, as well as from what it knows about other, similar images.</p>
<p>Another platform, Google Cloud Video Intelligence, focuses on opening up video analytics to new audiences. Video stored on Google’s servers can be segmented and analyzed for content and context, allowing automated summaries to be generated, or even security alerts if the AI thinks something suspicious is going on.</p>
<p>Language processing is another area of its services where the tech has been implemented. Its Google Assistant speech recognition AI uses deep neural networks to learn how to better understand spoken commands and questions. Techniques developed by Google Brain were rolled into this project. More recently, Google’s translation service was also put under the umbrella of Google Brain. The system was rewritten to run on a new platform called Google Neural Machine Translation, moving everything to a deep learning environment.</p>
<p>The third primary way Google uses deep learning today on its core services is to provide more useful recommendations on YouTube. Again, Google Brain is behind the technology used here, which monitors and records our viewing habits as we stream content from Google’s servers. Data already showed that suggesting videos viewers will want to watch next is key to keeping them hooked to the platform, and the ad bucks rolling in. Deep neural networks were put to work studying and learning everything they could about viewers’ habits and preferences, and working out what would keep them glued to their screens.</p>
<p><strong>What else does Google use deep learning for?</strong></p>
<p>Of course, given the success they have had with it, it is inevitable that Google would be keen to implement this technology in its more ambitious, specialist or future-oriented projects.</p>
<p>In 2015, it open sourced its TensorFlow machine learning and deep learning-focused programming platform, to allow anyone to develop neural network-based solutions using the same technology they use themselves.</p>
<p>Through its Cloud Machine Learning Engine, it also offers storage and processing power to third parties which want to put the technology to use without investing upfront in hugely powerful computer infrastructure.</p>
<p>Google’s self-driving car division, Waymo, has incorporated deep learning algorithms into their autonomous systems, in order to make self-driving cars more efficient at analyzing and reacting to what is going on around them.</p>
<p>And DeepMind is currently working on healthcare-focused projects involving detecting early signs of eye damage and cancerous tissue growth.</p>
<p><strong>What’s next?</strong></p>
<p>Google has been an effective force in pioneering, championing, and bringing deep learning to the masses. Thanks to its research and investment, anyone can benefit from these technologies. And increasingly, we will be able to put them to work ourselves, on our own data. Many people are pinning their hopes on deep learning providing great leaps forward in coming years, in every field from medicine to space exploration – and the groundwork done by Google will play a big part in that.</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-amazing-ways-google-uses-deep-learning-ai/">The Amazing Ways Google Uses Deep Learning AI</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-amazing-ways-google-uses-deep-learning-ai/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
	</channel>
</rss>
