<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>GitHub Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/github/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/github/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Thu, 11 Apr 2024 12:52:26 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>How to Generate Personal Access Tokens in GitHub?</title>
		<link>https://www.aiuniverse.xyz/how-to-generate-personal-access-tokens-in-github/</link>
					<comments>https://www.aiuniverse.xyz/how-to-generate-personal-access-tokens-in-github/#respond</comments>
		
		<dc:creator><![CDATA[Maruti Kr.]]></dc:creator>
		<pubDate>Thu, 11 Apr 2024 12:52:24 +0000</pubDate>
				<category><![CDATA[Git & GitHub]]></category>
		<category><![CDATA[Access Control]]></category>
		<category><![CDATA[API Access]]></category>
		<category><![CDATA[Authentication Developer Settings]]></category>
		<category><![CDATA[Authorization]]></category>
		<category><![CDATA[GitHub]]></category>
		<category><![CDATA[How to generate Personal access tokens in github?]]></category>
		<category><![CDATA[Personal Access Token]]></category>
		<category><![CDATA[Scopes]]></category>
		<category><![CDATA[Security]]></category>
		<category><![CDATA[Token Generation]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=18739</guid>

					<description><![CDATA[<p>To generate a personal access token in GitHub, follow these steps: Step 1: Sign in to GitHub: Go to github.com and sign in to your GitHub account. Step 2: Access Settings: Click on your profile picture in the top-right corner of the page, then click on &#8220;Settings&#8221; from the dropdown menu. Step 3: Select Developer settings: In the <a class="read-more-link" href="https://www.aiuniverse.xyz/how-to-generate-personal-access-tokens-in-github/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-to-generate-personal-access-tokens-in-github/">How to Generate Personal Access Tokens in GitHub?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-large"><img fetchpriority="high" decoding="async" width="1024" height="380" src="https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-8-1024x380.png" alt="" class="wp-image-18745" srcset="https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-8-1024x380.png 1024w, https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-8-300x111.png 300w, https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-8-768x285.png 768w, https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-8.png 1125w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>To generate a personal access token in GitHub, follow these steps:</p>



<p><strong><em>Step 1:</em> Sign in to GitHub:</strong> Go to github.com and sign in to your GitHub account.</p>



<figure class="wp-block-image size-large is-resized"><img decoding="async" width="1024" height="211" src="https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-3-1024x211.png" alt="" class="wp-image-18740" style="width:841px;height:auto" srcset="https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-3-1024x211.png 1024w, https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-3-300x62.png 300w, https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-3-768x158.png 768w, https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-3.png 1338w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p><strong><em>Step 2:</em> Access Settings:</strong> Click on your profile picture in the top-right corner of the page, then click on &#8220;Settings&#8221; from the dropdown menu.</p>



<figure class="wp-block-image size-full is-resized"><img decoding="async" width="675" height="654" src="https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-4.png" alt="" class="wp-image-18741" style="width:837px;height:auto" srcset="https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-4.png 675w, https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-4-300x291.png 300w" sizes="(max-width: 675px) 100vw, 675px" /></figure>



<p><strong><em>Step 3:</em> Select Developer settings:</strong> In the left sidebar, click on &#8220;Developer settings&#8221;.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="966" height="449" src="https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-5.png" alt="" class="wp-image-18742" srcset="https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-5.png 966w, https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-5-300x139.png 300w, https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-5-768x357.png 768w" sizes="auto, (max-width: 966px) 100vw, 966px" /></figure>



<p><strong><em>Step 4:</em> Choose Personal access tokens:</strong> In the Developer settings menu, click on &#8220;Personal access tokens&#8221;.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="659" height="385" src="https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-6.png" alt="" class="wp-image-18743" srcset="https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-6.png 659w, https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-6-300x175.png 300w" sizes="auto, (max-width: 659px) 100vw, 659px" /></figure>



<p><strong><em>Step 5:</em> Generate a new token:</strong> Click on the &#8220;Generate new token&#8221; button. You may be prompted to enter your password for verification.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="327" src="https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-7-1024x327.png" alt="" class="wp-image-18744" srcset="https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-7-1024x327.png 1024w, https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-7-300x96.png 300w, https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-7-768x245.png 768w, https://www.aiuniverse.xyz/wp-content/uploads/2024/04/image-7.png 1238w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p><strong><em>Step 6:</em> Configure your token:</strong> Give your token a descriptive name that will help you remember its purpose. Then select the desired scopes for the token. Scopes define what actions the token can perform, such as accessing repositories, creating gists, or managing notifications.</p>



<p><strong><em>Step 7:</em> Generate the token:</strong> Once you&#8217;ve configured the token, click on the &#8220;<strong>Generate token</strong>&#8221; button at the bottom of the page.</p>



<p><strong><em>Step 8:</em> Copy your token:</strong> GitHub will generate a personal access token for you. Copy it immediately, as it will not be displayed again.</p>



<p><strong><em>Step 9:</em> Store your token securely:</strong> Treat your token like a password and keep it secure. Do not share it publicly or commit it to version control repositories.</p>



<p>That&#8217;s it! You&#8217;ve successfully generated a personal access token in GitHub. You can now use this token to authenticate your requests when accessing GitHub APIs or performing actions programmatically.</p>
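<p>As a quick illustration (not part of the steps above), here is a minimal Python sketch of using the token to authenticate a GitHub API request. The token value is a placeholder; the <code>Authorization: Bearer</code> header and the <code>https://api.github.com/user</code> endpoint are standard GitHub REST API usage.</p>

```python
import urllib.request

# Placeholder -- substitute the token you copied in Step 8.
TOKEN = "ghp_your_token_here"

def github_request(url: str, token: str) -> urllib.request.Request:
    """Build a GitHub API request authenticated with a personal access token."""
    return urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )

req = github_request("https://api.github.com/user", TOKEN)
print(req.get_header("Authorization"))  # → Bearer ghp_your_token_here
```

<p>Calling <code>urllib.request.urlopen(req)</code> would then return your profile as JSON, provided the token carries the required scopes.</p>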
<p>The post <a href="https://www.aiuniverse.xyz/how-to-generate-personal-access-tokens-in-github/">How to Generate Personal Access Tokens in GitHub?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-to-generate-personal-access-tokens-in-github/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Difference Between Git and GitHub?</title>
		<link>https://www.aiuniverse.xyz/difference-between-git-and-github/</link>
					<comments>https://www.aiuniverse.xyz/difference-between-git-and-github/#respond</comments>
		
		<dc:creator><![CDATA[Maruti Kr.]]></dc:creator>
		<pubDate>Mon, 26 Jun 2023 11:22:54 +0000</pubDate>
				<category><![CDATA[Git & GitHub]]></category>
		<category><![CDATA[Difference between Git and GitHub ?]]></category>
		<category><![CDATA[Git]]></category>
		<category><![CDATA[GitHub]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=17330</guid>

					<description><![CDATA[<p>Git and GitHub are related but distinct tools used in software development for version control and collaboration. Here&#8217;s a breakdown of their differences: Git Git is a distributed version control system designed to track changes in source code during software development. It is a command-line tool that allows developers to create, manage, and merge branches, <a class="read-more-link" href="https://www.aiuniverse.xyz/difference-between-git-and-github/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/difference-between-git-and-github/">Difference Between Git and GitHub?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Git and GitHub are related but distinct tools used in software development for version control and collaboration. Here&#8217;s a breakdown of their differences:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="700" height="352" src="https://www.aiuniverse.xyz/wp-content/uploads/2023/06/image-15.png" alt="" class="wp-image-17332" srcset="https://www.aiuniverse.xyz/wp-content/uploads/2023/06/image-15.png 700w, https://www.aiuniverse.xyz/wp-content/uploads/2023/06/image-15-300x151.png 300w" sizes="auto, (max-width: 700px) 100vw, 700px" /></figure>



<h2 class="wp-block-heading">Git</h2>



<p>Git is a distributed version control system designed to track changes in source code during software development. It is a command-line tool that allows developers to create, manage, and merge branches, commit changes, and collaborate with others. Git enables developers to work offline and independently on their local repositories. It provides mechanisms for branching and merging code, resolving conflicts, and reverting changes when necessary. Git operates locally on a developer&#8217;s machine and doesn&#8217;t require a centralized server.</p>



<h2 class="wp-block-heading">GitHub</h2>



<p>GitHub, on the other hand, is a web-based platform that hosts Git repositories in a centralized manner, adding a layer of collaboration features on top of Git. Developers create remote repositories on GitHub and push their local repositories to them. GitHub provides a graphical user interface (GUI) for creating pull requests, managing issues, reviewing code changes, and collaborating with other developers through forks. It also offers additional tools such as project boards, wikis, and GitHub Actions for continuous integration and deployment.</p>
<p>The post <a href="https://www.aiuniverse.xyz/difference-between-git-and-github/">Difference Between Git and GitHub?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/difference-between-git-and-github/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deep Learning Restores Time-Ravaged Photos</title>
		<link>https://www.aiuniverse.xyz/deep-learning-restores-time-ravaged-photos/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-restores-time-ravaged-photos/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 05 Oct 2020 09:29:52 +0000</pubDate>
				<category><![CDATA[PyTorch]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[GitHub]]></category>
		<category><![CDATA[Photos]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[Restores]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11933</guid>

					<description><![CDATA[<p>Source: i-programmer.info Researchers have devised a novel deep learning approach to repairing the damage suffered by old photographic prints. The project is open source and a PyTorch implementation is downloadable from GitHub. There&#8217;s also a Colab where you can try it out. We&#8217;ve encountered neural networks that can colorize old black and white shots, can <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-restores-time-ravaged-photos/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-restores-time-ravaged-photos/">Deep Learning Restores Time-Ravaged Photos</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: i-programmer.info</p>



<p>Researchers have devised a novel deep learning approach to repairing the damage suffered by old photographic prints. The project is open source and a PyTorch implementation is downloadable from GitHub. There&#8217;s also a Colab where you can try it out.</p>



<p>We&#8217;ve encountered neural networks that can colorize old black and white shots, can improve on photographs of landscapes and even paint portraits in the style of an old master. Here the goal is more modest &#8211; to apply a deep learning approach to restoring old photos that have suffered severe degradation.</p>



<p>The researchers, from Microsoft Research Asia in Beijing, China, the University of Science and Technology of China, and the City University of Hong Kong, start from the premise that:</p>



<blockquote class="wp-block-quote"><p>Photos are taken to freeze the happy moments that otherwise gone. Even though time goes by, one can still evoke memories of the past by viewing them. Nonetheless, old photo prints deteriorate when kept in poor environmental condition, which causes the valuable photo content permanently damaged.</p></blockquote>



<p>As manual retouching of prints is laborious and time-consuming, they set out to design automatic algorithms that can instantly repair old photos for those who wish to bring them back to life.</p>



<p>The researchers presented their work as an oral presentation at CVPR 2020, held virtually in June, and their paper, &#8220;Bringing Old Photos Back to Life&#8221;, which is part of the conference proceedings, is already available.</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-restores-time-ravaged-photos/">Deep Learning Restores Time-Ravaged Photos</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-restores-time-ravaged-photos/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deep Learning Could Transform Ophthalmology</title>
		<link>https://www.aiuniverse.xyz/deep-learning-could-transform-ophthalmology/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-could-transform-ophthalmology/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 12 Aug 2020 06:19:46 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[could]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[GitHub]]></category>
		<category><![CDATA[ophthalmology]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10815</guid>

					<description><![CDATA[<p>Source: hcplive.com Ophthalmology could be the next specialty to look into utilizing new deep learning technology to screen and diagnose patients with ocular disorders. A team, led by Nihaal Mehta, MD, New England Eye Center, Tufts Medical Center, determined whether a model-to-data deep learning approach without needing to transfer any data can be applied in ophthalmology. <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-could-transform-ophthalmology/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-could-transform-ophthalmology/">Deep Learning Could Transform Ophthalmology</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: hcplive.com</p>



<p>Ophthalmology could be the next specialty to look into utilizing new deep learning technology to screen and diagnose patients with ocular disorders.</p>



<p>A team, led by Nihaal Mehta, MD, New England Eye Center, Tufts Medical Center, examined whether a model-to-data deep learning approach, which avoids transferring any patient data, can be applied in ophthalmology.</p>



<p>In the single-center cross-sectional study, the investigators examined patients with active exudative age-related macular degeneration (AMD) who underwent optical coherence tomography (OCT) at the New England Eye Center between August 2018 and February 2019.</p>



<p>The study&#8217;s main outcomes were the training of the deep learning model using a model-to-data approach and the recognition of intraretinal fluid (IRF) on OCT B-scans.</p>



<p>The model-to-data approach was taken by freezing the model parameters from a prior study where a deep learning model was trained to segment IRF on Heidelberg Spectralis OCT B-scans.</p>



<p>The model parameters, retraining code, data preprocessing, and code for evaluation were packaged from the University of Washington and transferred using GitHub.</p>



<p>The model was trained to a learning-curve Dice coefficient greater than 80% using 400 OCT B-scans from 128 patients, 69 of whom were female. The mean age of the patient population was 77.5 years.</p>



<p>The scan protocol consisted of 512 A-scans per B-scan and 128 B-scans per volume, while the spectral-domain OCT system had an 840 nm central wavelength, 68,000 A-scans per second, an A-scan depth of 2.0 mm, an axial resolution of 5 μm, and a transverse resolution of 15 μm.</p>



<p>The investigators compared the model with manual human grading of IRF pockets and found no statistically significant difference in Dice coefficients or intersection over union scores (<em>P&nbsp;</em>&gt; 0.05).</p>
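<p>For readers unfamiliar with the two overlap metrics compared above, here is a minimal sketch (not from the study) of the Dice coefficient and intersection over union for binary segmentation masks, represented as sets of pixel coordinates:</p>

```python
def dice(a: set, b: set) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0  # two empty masks agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

def iou(a: set, b: set) -> float:
    """Intersection over union: |A∩B| / |A∪B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

model_mask = {(0, 1), (0, 2), (1, 1), (1, 2)}   # pixels the model marks as IRF
human_mask = {(0, 2), (1, 1), (1, 2), (2, 2)}   # pixels the grader marks as IRF
print(dice(model_mask, human_mask))  # → 0.75
print(iou(model_mask, human_mask))   # → 0.6
```

<p>Both metrics range from 0 (no overlap) to 1 (identical masks); Dice weights the intersection more heavily, which is why it is the common headline number in segmentation studies.</p>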



<p>“A model-to-data approach to deep learning was demonstrated for the first time, to our knowledge, in ophthalmology,” the authors wrote. “Using this approach, the performance of the deep learning model was trained and showed no statistically significant difference in quantifying the intraretinal fluid pockets in OCT compared with human manual grading. Such a paradigm has the potential to more easily facilitate large-scale and multicenter deep learning studies.”</p>



<p>While more deep learning tools are being used in virtually every medical specialty, there remain concerns regarding data privacy, security, and sharing. However, with a model-to-data approach, the model itself can be transferred rather than the data, circumventing many of the existing challenges.</p>



<p>This technique has been tried in other specialties, but has not yet been attempted in ophthalmology. However, this technology could be transformative in the space due to ophthalmology’s dependence on outpatient ancillary testing.</p>



<p>Machine learning and deep learning have already been applied in ophthalmology in a variety of contexts and to a range of clinical conditions, from diabetic retinopathy, age-related macular degeneration, and glaucoma to Stargardt disease and post–small incision lenticule extraction surgical outcomes.</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-could-transform-ophthalmology/">Deep Learning Could Transform Ophthalmology</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-could-transform-ophthalmology/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Altran Improves Software Quality With Machine Learning</title>
		<link>https://www.aiuniverse.xyz/altran-improves-software-quality-with-machine-learning/</link>
					<comments>https://www.aiuniverse.xyz/altran-improves-software-quality-with-machine-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 22 May 2020 08:35:25 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Atran]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[development times]]></category>
		<category><![CDATA[GitHub]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[software]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8964</guid>

					<description><![CDATA[<p>Source: eletimes.com Altran, the global leader in engineering and R&#38;D services, announced the release of a new tool available on GitHub that predicts the likelihood of bugs in source code created by developers early in the software development process. By applying machine learning (ML) to historical data, the tool – called “Code Defect AI” – identifies areas <a class="read-more-link" href="https://www.aiuniverse.xyz/altran-improves-software-quality-with-machine-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/altran-improves-software-quality-with-machine-learning/">Altran Improves Software Quality With Machine Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: eletimes.com</p>



<p>Altran, the global leader in engineering and R&amp;D services, announced the release of a new tool available on GitHub that predicts the likelihood of bugs in source code created by developers early in the software development process. By applying machine learning (ML) to historical data, the tool – called “Code Defect AI” – identifies areas of the code that are potentially buggy and then suggests a set of tests to diagnose and fix the flaws, resulting in higher-quality software and faster development times.</p>



<p>Bugs are a fact of life in software development, and the later a defect is found in the development lifecycle, the higher the cost of fixing it. This bug-deployment-analysis-fix process is time-consuming and costly. Code Defect AI allows earlier discovery of defects, minimizing the cost of fixing them and speeding the development cycle.</p>



<p>“It’s well known that software developers are under constant pressure to release code fast without compromising on quality,” said&nbsp;Walid Negm, Group Chief Innovation Officer at Altran. “The reality however is that the software release cycle needs more than automation of assembly and delivery activities. It needs algorithms that can help make strategic judgments ‒ especially as code gets more complex. Code Defect AI does exactly that.”</p>



<p>Code Defect AI relies on various ML techniques, including random decision forests, support vector machines, multilayer perceptrons (MLPs), and logistic regression. Historical data is extracted, pre-processed, and labelled to train the algorithm and build a reliable decision model. Developers are given a confidence score that predicts whether the code is compliant or at risk of containing bugs.</p>
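<p>To make the idea of a confidence score concrete, here is an illustrative sketch only: a hand-weighted logistic (sigmoid) scoring function over hypothetical commit features. The feature names, weights, and bias are invented for the example; the article does not disclose Code Defect AI&#8217;s actual features or model parameters.</p>

```python
import math

# Hypothetical per-feature weights and bias, chosen by hand for this sketch
# (a trained logistic-regression model would learn these from historical data).
WEIGHTS = {"lines_changed": 0.004, "files_touched": 0.15, "author_churn": 0.3}
BIAS = -2.0

def bug_risk(commit: dict) -> float:
    """Return a 0-1 score: estimated probability the commit introduces a bug."""
    z = BIAS + sum(WEIGHTS[k] * commit[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic (sigmoid) function

small_commit = {"lines_changed": 10, "files_touched": 1, "author_churn": 0}
large_commit = {"lines_changed": 800, "files_touched": 12, "author_churn": 3}
print(bug_risk(small_commit) < bug_risk(large_commit))  # → True
```

<p>The score is monotone in each feature, so a sprawling commit from a churn-heavy file scores as riskier than a small, focused one, which is the intuition behind flagging commits for extra review.</p>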



<p>Code Defect AI supports integration with third-party analysis tools and can itself help identify bugs in a given program&#8217;s code. Additionally, the tool lets developers assess which features in the code carry more weight in bug prediction, i.e., if two features both play a role in the assessment of a probable bug, which one takes precedence.</p>



<p>“Microsoft and Altran have been working together to improve the software development cycle, and Code Defect AI, powered by Microsoft Azure, is an innovative tool that can help software developers through the use of machine learning,” said&nbsp;<strong>David Carmona, General Manager of AI Marketing at Microsoft</strong>.</p>



<p>Code Defect AI is a scalable solution that can be hosted on premise as well as on cloud computing platforms such as Microsoft Azure. While the solution currently supports GitHub, which is owned by Microsoft, it can be integrated with other source-code management tools as needed.</p>



<p>The tool is also available on the&nbsp;Microsoft AI Lab portal&nbsp;so that Microsoft developers can download the solution and use it internally.</p>
<p>The post <a href="https://www.aiuniverse.xyz/altran-improves-software-quality-with-machine-learning/">Altran Improves Software Quality With Machine Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/altran-improves-software-quality-with-machine-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine learning tool trains on old code to spot bugs in new code</title>
		<link>https://www.aiuniverse.xyz/machine-learning-tool-trains-on-old-code-to-spot-bugs-in-new-code/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-tool-trains-on-old-code-to-spot-bugs-in-new-code/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 20 May 2020 07:12:16 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[bugs]]></category>
		<category><![CDATA[Code]]></category>
		<category><![CDATA[GitHub]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Tools]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8906</guid>

					<description><![CDATA[<p>Source: techrepublic.com Altran has released a new tool that uses artificial intelligence (AI) to help software engineers spot bugs during the coding process instead of at the end. Available on GitHub, Code Defect AI uses machine learning (ML) to analyze existing code, spot potential problems in new code, and suggest tests to diagnose and fix the errors. Walid Negm, group chief <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-tool-trains-on-old-code-to-spot-bugs-in-new-code/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-tool-trains-on-old-code-to-spot-bugs-in-new-code/">Machine learning tool trains on old code to spot bugs in new code</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: techrepublic.com</p>



<p>Altran has released a new tool that uses artificial intelligence (AI) to help software engineers spot bugs during the coding process instead of at the end.</p>



<p>Available on GitHub, Code Defect AI uses machine learning (ML) to analyze existing code, spot potential problems in new code, and suggest tests to diagnose and fix the errors.</p>



<p>Walid Negm, group chief innovation officer at Altran, said that this new tool will help developers release quality code quickly.</p>



<p>&#8220;The software release cycle needs algorithms that can help make strategic judgments, especially as code gets more complex,&#8221; he said in a press release.</p>



<p>Code Defect AI uses several ML techniques including random decision forests, support vector machines, multilayer perceptron (MLP) and logistic regression. The platform extracts, processes and labels historical data to train the algorithm and build a reliable decision model. Developers can use a confidence score from Code Defect AI that predicts whether the code is compliant or buggy.</p>



<p>Here is how Code Defect AI works:</p>



<ol class="wp-block-list"><li>For an open-source GitHub project, historical data is collected using RESTful interfaces and the Git CLI. This data includes the complete commit history and the complete bug history.</li><li>Preprocessing techniques like feature identification, label encoding, one-hot encoding, data scaling, and normalization are applied to the collected historical commit data.</li><li>Labelling is performed on the preprocessed data. The labelling process involves identifying the pattern in which fix commits (where a bug has been closed) are tagged for each closed issue. After the fix commits are collected, the commits that introduced the bugs are identified by backtracking through the historical changes for each file in a fix commit.</li><li>If a data set contains very little bug data compared with clean records, synthetic data is also generated to avoid bias toward the majority class.</li><li>Multiple modelling algorithms are trained on the prepared data.</li><li>Once there is a model with acceptable precision and recall, the selected model is deployed for prediction on new commits.</li></ol>
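<p>The class-imbalance step above can be sketched with plain random oversampling, a simpler stand-in for the synthetic-data generation the article describes; the record layout and field names here are hypothetical:</p>

```python
import random

def oversample(records: list, label_key: str = "buggy") -> list:
    """Duplicate minority-class records until both classes are the same size."""
    pos = [r for r in records if r[label_key]]
    neg = [r for r in records if not r[label_key]]
    minority, majority = sorted((pos, neg), key=len)
    extra = [random.choice(minority) for _ in range(len(majority) - len(minority))]
    return records + extra

random.seed(0)  # reproducible choice of duplicated records
# 100 commits, of which 10% are labelled as having introduced a bug.
data = [{"commit": f"c{i}", "buggy": i % 10 == 0} for i in range(100)]
balanced = oversample(data)
print(sum(r["buggy"] for r in balanced), len(balanced))  # → 90 180
```

<p>Without this step a model could score high accuracy by always predicting &#8220;clean&#8221;; balancing the classes forces it to actually learn the buggy-commit signal.</p>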



<p>Code Defect AI supports integration with third-party analysis tools and can help identify bugs in a given program&#8217;s code. It also allows developers to assess which features in the code should take higher priority in terms of bug fixes.</p>



<p>&#8220;Microsoft and Altran have been working together to improve the software development cycle, and Code Defect AI, powered by Microsoft Azure, is an innovative tool that can help software developers through the use of machine learning,&#8221; said David Carmona, general manager of AI marketing at Microsoft, in a press release.</p>



<p>Code Defect AI can be hosted on premises as well as on cloud computing platforms such as Microsoft Azure. The solution can be integrated with other source-code management tools as needed.</p>



<h3 class="wp-block-heading">AI employee joins the dev team</h3>



<p>In a new report about artificial intelligence and software development, Deloitte predicts that more and more companies will use AI-assisted coding tools. From January 2018 to September 2019, software vendors launched dozens of AI-powered software development tools, and startups working in this space raised $704 million over a similar timeframe.</p>



<p>The biggest benefit from these platforms is efficiency, according to Deloitte analysts David Schatsky and Sourabh Bumb, the authors of &#8220;AI is helping to make better software&#8221;:</p>

<p>&#8220;The benefits of AI-assisted coding are numerous. However, the principal benefit for companies is efficiency. Many of the new AI-powered tools work in a similar way to spell- and grammar-checkers, enabling coders to reduce the number of keystrokes they need to type by around 50%. They can also spot bugs while code is being written, while they can also automate as many as half of the tests needed to confirm the quality of software.&#8221;</p>

<p>This capability is even more important as companies continue to rely on open-source code. According to the Deloitte report, these tools can speed up the coding process significantly by &#8220;reducing the number of keystrokes developers need to type by half, catching bugs even prior to code review or testing, and automatically generating half of the tests needed for quality assurance.&#8221;</p>



<p>According to the report, these tools are best suited for these elements of the software development process:</p>



<ol class="wp-block-list"><li>Project requirements</li><li>Coding, review and bug detection, and resolution</li><li>More thorough testing</li><li>Deployment</li><li>Project management</li></ol>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-tool-trains-on-old-code-to-spot-bugs-in-new-code/">Machine learning tool trains on old code to spot bugs in new code</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-tool-trains-on-old-code-to-spot-bugs-in-new-code/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google cancels its April Fools’ pranks this year due to the pandemic</title>
		<link>https://www.aiuniverse.xyz/google-cancels-its-april-fools-pranks-this-year-due-to-the-pandemic/</link>
					<comments>https://www.aiuniverse.xyz/google-cancels-its-april-fools-pranks-this-year-due-to-the-pandemic/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 30 Mar 2020 10:20:09 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[GitHub]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Pandemic]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7835</guid>

					<description><![CDATA[<p>Source: hindustantimes.com Google will not be taking part in its annual April Fool’s prank this year owing to the Covid-19 pandemic. This was an expected move especially at a time when the entire world is grappling with the virus outbreak. Google hasn’t officially announced it is cancelling this year’s April Fool’s joke. Business Insider (via <a class="read-more-link" href="https://www.aiuniverse.xyz/google-cancels-its-april-fools-pranks-this-year-due-to-the-pandemic/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-cancels-its-april-fools-pranks-this-year-due-to-the-pandemic/">Google cancels its April Fools’ pranks this year due to the pandemic</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: hindustantimes.com</p>



<p>Google will not be taking part in its annual April Fool’s prank this year owing to the Covid-19 pandemic. This was an expected move, especially at a time when the entire world is grappling with the virus outbreak.</p>



<p>Google hasn’t officially announced that it is cancelling this year’s April Fool’s joke. Business Insider (via The Verge) obtained an internal email which details the company’s decision to cancel the event. Google in its email says that the decision to do so is “out of respect for all those fighting the Covid-19 pandemic”. It will, however, continue the tradition next year, which is expected to “undoubtedly be a whole lot brighter than this one”.</p>



<p>Google also points out in the email that it has already stopped centralised April Fool’s pranks, and has requested other teams to cancel their projects too. “Please suss out those efforts and make sure your teams pause on any jokes they may have planned — internally or externally,” the email read.</p>



<p>Google’s April Fool’s pranks usually involve new products with bizarre features. Last year it introduced Google Tulip, an AI that can understand what tulips are saying and translate it into dozens of human languages. Google Japan even introduced a smart spoon that can bend, with support for microUSB and Bluetooth. It even created a GitHub project to make the smart spoon believable.</p>



<p>Google’s decision to stop April Fool’s pranks will most likely be followed by other companies. The search giant has, on the other hand, been making efforts to help amid the Covid-19 pandemic. Google earlier today announced it is pledging $800 million to pandemic-hit businesses and health agencies. Google will offer this in the form of cash, ad credits, and cloud services.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-cancels-its-april-fools-pranks-this-year-due-to-the-pandemic/">Google cancels its April Fools’ pranks this year due to the pandemic</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-cancels-its-april-fools-pranks-this-year-due-to-the-pandemic/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google AI plants SEED for better scalable reinforcement learning</title>
		<link>https://www.aiuniverse.xyz/google-ai-plants-seed-for-better-scalable-reinforcement-learning/</link>
					<comments>https://www.aiuniverse.xyz/google-ai-plants-seed-for-better-scalable-reinforcement-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 25 Mar 2020 07:04:24 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[GitHub]]></category>
		<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[researchers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7699</guid>

					<description><![CDATA[<p>Source: devclass.com Google AI researchers have looked into ways of making reinforcement learning scale better and improve computational efficiency. The result is called SEED RL and can now be explored via GitHub. SEED stands for scalable, efficient, deep reinforcement learning and describes a “modern RL agent that scales well, is flexible and efficiently utilises available <a class="read-more-link" href="https://www.aiuniverse.xyz/google-ai-plants-seed-for-better-scalable-reinforcement-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-ai-plants-seed-for-better-scalable-reinforcement-learning/">Google AI plants SEED for better scalable reinforcement learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: devclass.com</p>



<p>Google AI researchers have looked into ways of making reinforcement learning scale better and improve computational efficiency. The result is called SEED RL and can now be explored via GitHub.</p>



<p>SEED stands for scalable, efficient, deep reinforcement learning and describes a “modern RL agent that scales well, is flexible and efficiently utilises available resources”. In their research paper on the project, Lasse Espeholt and his colleagues cite the possibility of training agents on millions of frames per second and lowering the cost of experiments as the approach’s key benefits, potentially opening RL up to a wider audience.</p>



<p>Reinforcement learning is a highly use-case-specific approach in which agents learn about their environment through exploration and optimise their actions to get the most rewards.</p>



<p>Since the method needs quite a lot of data to produce good results, however, distributed learning in combination with accelerators such as GPUs can be a means to train agents in a more reasonable timeframe.</p>



<p>Architectures following a similar approach include the distributed agent IMPALA which, compared to SEED RL, supposedly has a number of drawbacks. For example, it keeps sending parameters and intermediate model states between actors and learners, which can quickly turn into a bottleneck. It also sticks to CPUs when applying model knowledge to a problem (inference), which isn’t the most performant option when working with complex models and, according to Espeholt et al, doesn’t utilise machine resources optimally.</p>



<p>SEED RL solves all this by using a learner to perform neural network inference centrally on GPUs and TPUs, the number of which can be changed depending on need. The system also includes a batching layer to collect data from multiple actors for added efficiency. Since the model parameters and the state are kept local, data transfer is less of an issue, while observations are sent through a low latency network based on gRPC to keep things running smoothly.</p>
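The division of labour described above, where lightweight actors hand raw observations to a central learner that batches them for a single accelerated inference pass, can be sketched in a few lines of Python. This is a toy illustration of the batching idea only, not code from the SEED RL repository; the class and method names are invented, and the "policy network" is a random linear map standing in for a real neural network.

```python
import numpy as np

class CentralLearner:
    """Toy stand-in for SEED RL's central learner: actors submit raw
    observations, and the learner runs one batched inference pass once
    enough observations have accumulated."""

    def __init__(self, obs_dim, n_actions, batch_size, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(obs_dim, n_actions))  # "policy network"
        self.batch_size = batch_size
        self.pending = []  # (actor_id, observation) awaiting a full batch

    def submit(self, actor_id, obs):
        """An actor hands over its observation; returns a dict of actions
        for the whole batch once the batch is full, else None."""
        self.pending.append((actor_id, obs))
        if len(self.pending) < self.batch_size:
            return None  # still filling the batch
        ids, obs_batch = zip(*self.pending)
        self.pending = []
        logits = np.stack(obs_batch) @ self.w  # one batched forward pass
        actions = logits.argmax(axis=1)        # greedy action per actor
        return dict(zip(ids, actions))

learner = CentralLearner(obs_dim=4, n_actions=3, batch_size=2)
print(learner.submit("actor-0", np.ones(4)))   # None: batch not yet full
print(learner.submit("actor-1", np.zeros(4)))  # actions for both actors
```

In the real system the model parameters never leave the learner, only small observations and actions cross the network (over gRPC), which is the data-transfer saving the researchers describe.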



<p>The SEED RL implementation is based on the TensorFlow 2 API and can be found on GitHub. It uses policy gradient-based V-trace for predicting action distributions to sample actions from, and Q-learning method R2D2 to select an action based on the predictions. </p>



<p>Though their results have to be taken with a grain of salt, as is advised for all research, first benchmarks promise a significant increase in the number of computable frames per second compared to IMPALA in cases where accelerators are an option. Costs are also expected to fall in certain scenarios, since inference costs are said to be lower when using SEED as opposed to IMPALA’s CPU-heavy approach. More details are available on the Google AI blog.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-ai-plants-seed-for-better-scalable-reinforcement-learning/">Google AI plants SEED for better scalable reinforcement learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-ai-plants-seed-for-better-scalable-reinforcement-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI startup accuses Facebook of stealing code designed to speed up machine learning models on ordinary CPUs</title>
		<link>https://www.aiuniverse.xyz/ai-startup-accuses-facebook-of-stealing-code-designed-to-speed-up-machine-learning-models-on-ordinary-cpus/</link>
					<comments>https://www.aiuniverse.xyz/ai-startup-accuses-facebook-of-stealing-code-designed-to-speed-up-machine-learning-models-on-ordinary-cpus/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 12 Mar 2020 06:58:32 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[GitHub]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7376</guid>

					<description><![CDATA[<p>Source: theregister.co.uk An AI startup is suing Facebook and one of its employees for allegedly stealing proprietary software that allows machine learning workloads to run faster on standard processors, eliminating the need for more expensive custom hardware. Neural Magic, founded in 2017 by Nir Shavit and Alex Matveev, describes itself as a &#8220;no-hardware AI&#8221; company. <a class="read-more-link" href="https://www.aiuniverse.xyz/ai-startup-accuses-facebook-of-stealing-code-designed-to-speed-up-machine-learning-models-on-ordinary-cpus/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-startup-accuses-facebook-of-stealing-code-designed-to-speed-up-machine-learning-models-on-ordinary-cpus/">AI startup accuses Facebook of stealing code designed to speed up machine learning models on ordinary CPUs</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: theregister.co.uk</p>



<p>An AI startup is suing Facebook and one of its employees for allegedly stealing proprietary software that allows machine learning workloads to run faster on standard processors, eliminating the need for more expensive custom hardware.</p>



<p>Neural Magic, founded in 2017 by Nir Shavit and Alex Matveev, describes itself as a &#8220;no-hardware AI&#8221; company. Instead of relying on GPU chips that are able to crunch through matrix maths operations to run machine-learning models quickly, the Boston-based upstart employs nifty software tricks to achieve similar speeds on CPUs.</p>



<p>Court documents filed (PDF) in the District Court of Massachusetts last week claim that Neural Magic&#8217;s first employee, Aleksandar Zlateski, breached the non-disclosure and non-competition agreement he signed when he joined as the company&#8217;s technology director. Zlateski left to join Facebook and allegedly stole his former employer&#8217;s secret algorithms to give to his new team.</p>



<p>That code, describing how to perform low-precision matrix multiplication to run trained computer vision models, was then published by Facebook engineers on GitHub in November last year. Facebook also released a compiler known as &#8220;Sparse GEMM JIT&#8221; as part of its efforts to expand PyTorch, a machine learning framework commonly used by AI developers.</p>
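To make the technical subject of the dispute concrete: a sparse GEMM (general matrix multiply) exploits the fact that pruned neural-network weights are mostly zero, so a compressed representation stores and multiplies only the nonzero entries. The sketch below is a generic illustration of that idea using SciPy's CSR format; it bears no relation to the disputed code of either party.

```python
import numpy as np
from scipy import sparse

# Toy sparse multiply: prune 90% of a weight matrix to zero, then
# store only the nonzeros in compressed sparse row (CSR) form. Sparse
# CPU kernels save work by skipping the zero entries entirely.
rng = np.random.default_rng(0)
dense_w = rng.normal(size=(256, 256))
dense_w[rng.random(dense_w.shape) < 0.9] = 0.0  # prune ~90% of weights

w_csr = sparse.csr_matrix(dense_w)  # stores only the ~10% nonzeros
x = rng.normal(size=(256, 32))      # a batch of activations

y_sparse = w_csr @ x   # sparse kernel: touches nonzero weights only
y_dense = dense_w @ x  # reference dense multiply, same result
print(np.allclose(y_sparse, y_dense), w_csr.nnz / dense_w.size)
```

Production kernels go further, JIT-compiling a multiply routine specialised to the sparsity pattern and using low-precision arithmetic, but the storage-and-skip principle is the same.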



<p>Neural Magic claims in the filing that &#8220;the code and compiler Facebook posted to GitHub implement the same Neural Magic Algorithms used in Neural Magic&#8217;s compiler code, to achieve the same computational and storage efficiencies running on commodity hardware (CPUs).</p>



<p>&#8220;Indeed, Neural Magic has tested the Facebook compiler side-by-side against its compiler, and the results from this direct comparison establish that the algorithms implemented in the Facebook compiler are the Neural Magic Algorithms.&#8221;</p>



<p>On top of that, the technical lead and manager at Facebook AI Systems Co-design, Jongsoo Park, even singled out Zlateski for his contributions in the project&#8217;s GitHub repository, Neural Magic&#8217;s complaint argued.</p>



<p>But when the startup wrote Facebook and Zlateski letters in an attempt to get the social media giant to take down its code from GitHub, both parties declined. &#8220;In a series of letters, counsel for Facebook and Zlateski flatly refused to take down the code or agree to cease further use of Neural Magic&#8217;s proprietary and confidential information that Zlateski misappropriated as a Facebook employee,&#8221; the court document said.</p>



<p>In response to that refusal, Neural Magic decided to take legal action. It believes that Zlateski breached the non-disclosure and non-competition agreement he signed in March 2018. Neural Magic has also claimed that its former employee and the social media giant have violated trade secret laws. It is now seeking &#8220;punitive damages&#8221;, an injunction to prevent Facebook and Zlateski from using its proprietary algorithms, and the forfeiture of Zlateski&#8217;s Facebook assets and shares.</p>



<p>Zlateski was employed at Neural Magic from March 2018 to July 2019. He was offered a base salary of $165,000 per year as technology director and was also given the opportunity to purchase the startup&#8217;s shares. Neural Magic raised $15m in seed investment led by Comcast Ventures and other VC firms including NEA, Andreessen Horowitz, Pillar VC and Amdocs in November last year.</p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-startup-accuses-facebook-of-stealing-code-designed-to-speed-up-machine-learning-models-on-ordinary-cpus/">AI startup accuses Facebook of stealing code designed to speed up machine learning models on ordinary CPUs</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ai-startup-accuses-facebook-of-stealing-code-designed-to-speed-up-machine-learning-models-on-ordinary-cpus/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Uber Open-Sources Plug-and-Play Language Model for Controlling AI-Generated Text</title>
		<link>https://www.aiuniverse.xyz/uber-open-sources-plug-and-play-language-model-for-controlling-ai-generated-text/</link>
					<comments>https://www.aiuniverse.xyz/uber-open-sources-plug-and-play-language-model-for-controlling-ai-generated-text/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 02 Jan 2020 07:43:37 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[GitHub]]></category>
		<category><![CDATA[language model]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5925</guid>

					<description><![CDATA[<p>Source: infoq.com Uber AI open-sourced their plug-and-play language model (PPLM) which can control the topic and sentiment of AI-generated text. The model&#8217;s output is evaluated by human judges as achieving 36% better topic accuracy compared to the baseline GPT-2 model. The team provided a full description of the system and experiments in a paper published <a class="read-more-link" href="https://www.aiuniverse.xyz/uber-open-sources-plug-and-play-language-model-for-controlling-ai-generated-text/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/uber-open-sources-plug-and-play-language-model-for-controlling-ai-generated-text/">Uber Open-Sources Plug-and-Play Language Model for Controlling AI-Generated Text</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: infoq.com</p>



<p>Uber AI open-sourced their plug-and-play language model (PPLM) which can control the topic and sentiment of AI-generated text. The model&#8217;s output is evaluated by human judges as achieving 36% better topic accuracy compared to the baseline GPT-2 model.</p>



<p>The team provided a full description of the system and experiments in a paper published on arXiv. PPLM starts with a pre-trained language model (LM), such as GPT-2. These LMs can produce complex output which approaches human fluency, but it is difficult to control the specific properties of the generated text. Instead of &#8220;fine-tuning&#8221; the LM with additional training data, PPLM uses a separate attribute model that can evaluate the LM&#8217;s output for sentiment or topic; this model is used to control the text produced by the LM. A strength parameter can tune how much the attribute model adjusts the LM output.</p>



<p>PPLM allows a user to flexibly plug in one or more simple attribute models representing the desired control objective into a large, unconditional LM. The method has the key property that it uses the LM as is—no training or fine-tuning is required—which enables researchers to leverage best-in-class LMs even if they do not have the extensive hardware required to train them.</p>



<p>Recent state-of-the-art NLP research has focused on creating pre-trained models based on the transformer architecture. These models are large, containing hundreds of millions of parameters, and are trained on large datasets containing millions of words; the training may take several days of runtime on expensive GPU hardware. Researchers without the resources to train their own state-of-the-art models must often choose to use a publicly available model that isn&#8217;t quite suited for their task, or go with a smaller, less accurate model of their own. Another alternative is to fine-tune a pretrained model, but that presents the risk of catastrophic forgetting.</p>



<p>The key to PPLM is to use an additional, simpler model, the attribute model (AM), that can score the output of the LM; in particular, it calculates the probability that the LM&#8217;s output text has some attribute (for example, that the text has positive sentiment, or is about politics). The AM can also calculate the gradient of that probability, which is used to &#8220;steer&#8221; the LM; the transformer-based LMs are &#8220;autoregressive,&#8221; meaning that as they generate a sequence of words, the previously generated word becomes an input to the system for creating the next word. In PPLM, the gradient of the AM is also used to generate the next word, such that it is more likely to contain the desired attribute.</p>
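The steering mechanism just described, nudging the LM's latent state along the gradient of the attribute model's log-probability before generating the next word, can be sketched with a toy linear model. This is an illustrative simplification, not code from Uber's PPLM release: the real system perturbs transformer key-value activations over several gradient steps, whereas here the "LM" is a random linear head and the attribute model a single logistic unit.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pplm_step(h, W_lm, w_attr, strength=1.0):
    """One toy steering step: push the latent h along the gradient of
    log p(attribute | h), then re-run the LM head on the nudged latent."""
    p = sigmoid(w_attr @ h)          # attribute model's probability
    grad_log_p = (1.0 - p) * w_attr  # d/dh of log sigmoid(w_attr . h)
    h_steered = h + strength * grad_log_p
    logits = W_lm @ h_steered        # next-token logits from nudged latent
    return h_steered, logits

rng = np.random.default_rng(0)
h = rng.normal(size=8)           # latent state of a toy LM
W_lm = rng.normal(size=(20, 8))  # LM head over a vocabulary of 20 "tokens"
w_attr = rng.normal(size=8)      # toy single-attribute classifier

h2, logits = pplm_step(h, W_lm, w_attr, strength=2.0)
# The nudge moves h in the direction that raises the attribute probability,
# so sigmoid(w_attr @ h2) is at least sigmoid(w_attr @ h).
print(sigmoid(w_attr @ h), sigmoid(w_attr @ h2))
```

The `strength` argument plays the role of the strength parameter mentioned above: larger values steer harder toward the attribute at the cost of drifting further from what the unmodified LM would have generated.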



<p>Uber highlighted the &#8220;pluggable&#8221; nature of PPLM in contrast with other techniques that require training and fine-tuning the full model. For example, a team from Google Brain presented a paper at last year&#8217;s NeurIPS conference that uses a generative-adversarial technique made popular by deep-learning &#8220;style-transfer&#8221; image processing systems. OpenAI created a system that uses reinforcement learning (RL) to incorporate human feedback in fine-tuning a GPT-2 LM. On Hacker News, user Gwern Branwen wrote:</p>



<p>What&#8217;s particularly nice [about PPLM] is if you can plug in a classifier for things like esthetics based on human ratings, along the lines of [OpenAI&#8217;s system] but better &#8211; why spend the enormous effort running [RL] to brute force the classifier to obtain desired text or image output, when you can just backprop through it and let the classifier itself tell you how exactly to improve the inputs?</p>



<p>PPLM source code is available on GitHub. A demo is also available on NLP research site HuggingFace and via a Google Colab notebook.</p>
<p>The post <a href="https://www.aiuniverse.xyz/uber-open-sources-plug-and-play-language-model-for-controlling-ai-generated-text/">Uber Open-Sources Plug-and-Play Language Model for Controlling AI-Generated Text</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/uber-open-sources-plug-and-play-language-model-for-controlling-ai-generated-text/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
