<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Code Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/code/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/code/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 02 Jul 2021 10:22:13 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Neo-Panopticism, Big Data, and Code of Ethics</title>
		<link>https://www.aiuniverse.xyz/neo-panopticism-big-data-and-code-of-ethics/</link>
					<comments>https://www.aiuniverse.xyz/neo-panopticism-big-data-and-code-of-ethics/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 02 Jul 2021 10:22:12 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[Code]]></category>
		<category><![CDATA[Ethics]]></category>
		<category><![CDATA[Panopticism]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14720</guid>

					<description><![CDATA[<p>Source &#8211; https://moderndiplomacy.eu/ Ever imagined how we type and scroll through one website and get thousands of recommendations on similar topics on other media platforms? In the <a class="read-more-link" href="https://www.aiuniverse.xyz/neo-panopticism-big-data-and-code-of-ethics/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/neo-panopticism-big-data-and-code-of-ethics/">Neo-Panopticism, Big Data, and Code of Ethics</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://moderndiplomacy.eu/</p>



<p>Ever wondered how we type and scroll through one website and then receive thousands of recommendations on similar topics across other media platforms? In the digital world, the process of digitisation has been optimised through the Big Data revolution, and data is now considered the ‘new gold’. Based on a single ‘like’, a person’s choices in everything from clothes and food to politics can be analysed and enumerated. Companies then use this data to sell us the products or services that best match our preferences. We depend on various applications to book appointments, pay bills and make quick decisions about, for instance, finance, insurance or stock management. The line between online and offline life has blurred significantly, and the digital now reaches almost every aspect of our lives.</p>



<p>This recalls an architectural design by the 18th-century philosopher Jeremy Bentham: an annular building on the periphery with a tower at the centre, its large windows opening towards the inner side of the ring. The ring was divided into small cells, each with two windows, one facing the tower and the other letting light pass through the cell. By placing a single supervisor, principal or inspector at the centre of the tower, the building could serve as a mental asylum, a school or a prison. Michel Foucault, in his work ‘Discipline and Punish: The Birth of the Prison’, uses this Panopticon model to explain the genesis of power. The entire system is a visual trap: the person in the cell never knows whether somebody is watching. Power in this design becomes ‘unverifiable’. Anyone can gaze from the tower, so a capillary action of power is created rather than a single unit of power. Bentham devised the idea for disciplinary purposes in several institutions.</p>



<p>Today, panopticism serves as a metaphor for technological surveillance. Since the power exercised over us and our decision-making is invisible and unverifiable, we do not explicitly feel violated. When downloading an app or granting it certain permissions on our phone, we do not analyse the consequences. Because the human mind is conditioned to focus on results and maximise desires, we tend to ignore the threats that are constantly looming over us. George Orwell’s ‘Big Brother’ has transformed into an invisible power: our choices and rights are not limited, and we are not living in an authoritarian state, but rather in a state of illusion in which data controls what we search for and see.</p>



<p>After whistleblower Edward Snowden’s infamous revelation of the surveillance system of United States investigative agencies, people and scholars began to identify the ethical issues surrounding privacy, big data and governance. This concern was heightened by a controversy following the 2016 US presidential election. Scholars have termed this kind of technology persuasive technology: digital panopticism is controlling and changing our behavioural patterns.</p>



<p>Many countries have now adopted digital media codes or rules and regulations to restrict misuse of the data collected by online platforms. India’s recent Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 are laid down on similar lines. The government has described these rules as a soft-touch self-regulatory mechanism. All media platforms will have to set up a grievance redressal and compliance mechanism, comprising a resident grievance officer, a chief compliance officer and a nodal contact person. The Ministry of Electronics &amp; Information Technology has further ordered platforms to submit monthly reports on complaints received from users and the actions taken. Finally, instant messaging apps will have to make provisions for tracing the first originator of a message when asked by legitimate authorities. The apprehensions raised by companies relate to this last requirement.</p>



<p>Media platforms will have to accept the rules for the greater good. Both sides, however, will have to reconcile and find a middle way that ensures the safety of citizens. Specific rules will also have to be laid down stating the purpose of tracing messages and how the resulting data will be used.</p>



<p>The government’s initiative is timely, as technology is outgrowing the legal-justice system. The new era will belong to the digital world, but the ethical themes enshrined in international treaties and our constitution must always be upheld. Human dignity and the right to privacy, protected under the Fundamental Rights and the Universal Declaration of Human Rights (UDHR), must guide the policies and actions of all entities. Values such as autonomy, equal power relationships and control over technology are not explicitly named in these treaties, but can be seen as part of upholding these fundamental and human rights.</p>



<p>The post <a href="https://www.aiuniverse.xyz/neo-panopticism-big-data-and-code-of-ethics/">Neo-Panopticism, Big Data, and Code of Ethics</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/neo-panopticism-big-data-and-code-of-ethics/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What Is Meta-Learning via Learned Losses (with Python Code)</title>
		<link>https://www.aiuniverse.xyz/what-is-meta-learning-via-learned-losses-with-python-code/</link>
					<comments>https://www.aiuniverse.xyz/what-is-meta-learning-via-learned-losses-with-python-code/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 01 Mar 2021 07:07:28 +0000</pubDate>
				<category><![CDATA[Python]]></category>
		<category><![CDATA[Code]]></category>
		<category><![CDATA[Learned]]></category>
		<category><![CDATA[Losses]]></category>
		<category><![CDATA[meta-learning]]></category>
		<category><![CDATA[What]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13145</guid>

					<description><![CDATA[<p>Source &#8211; https://analyticsindiamag.com/ Facebook AI Research (FAIR) research on meta-learning has majorly classified into two types:  First, methods that can learn representation for generalization. Second, methods that <a class="read-more-link" href="https://www.aiuniverse.xyz/what-is-meta-learning-via-learned-losses-with-python-code/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-meta-learning-via-learned-losses-with-python-code/">What Is Meta-Learning via Learned Losses (with Python Code)</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://analyticsindiamag.com/</p>



<p>Facebook AI Research (FAIR) work on meta-learning falls into two broad types: first, methods that learn representations for generalization; second, methods that optimize models. We discussed the first type thoroughly in our previous article on MBIRL. In this post, we give a brief introduction to the second type. At the International Conference on Pattern Recognition (ICPR), Italy, January 10-15, 2021, a group of researchers (<em>S. Bechtle, A. Molchanov, Y. Chebotar, E. Grefenstette, L. Righetti, G. S. Sukhatme and F. Meier</em>) presented a research paper focusing on automating the &#8220;meta-training&#8221; process: <strong>Meta Learning via Learned Loss</strong>.</p>



<p><strong>Motivation Behind ML</strong><strong><sup>3</sup></strong></p>



<p>In meta-learning, the goal is to efficiently optimize a function <em>f<sub>θ</sub></em>, which can be a regressor or classifier, by finding the optimal value of <em>θ</em>. Here <em>L</em> is the loss function and <em>h</em> is the gradient transform. Most work in deep learning learns the function <em>f</em> directly from data, while some meta-learning work focuses on the parameter update instead. In the <strong>ML<sup>3</sup></strong> approach, the authors target loss learning. Loss functions are architecture-independent and widely used across learning problems, so a learned loss function requires no per-problem engineering or optimization and allows extra information to be incorporated during meta-training.</p>



<p>The key idea of the proposed framework is a meta-training pipeline that not only optimizes the performance of the model but also generalizes across different tasks and model architectures. The learned loss functions efficiently optimize models for new tasks. The main contributions of the ML<sup>3</sup>&nbsp;framework are:</p>



<p>i) It is capable of learning adaptive, high-dimensional loss functions via backpropagation and gradient descent.</p>



<p>ii) The framework is flexible: it can incorporate additional information at meta-train time, and it generalizes across regression, classification, model-based reinforcement learning and model-free reinforcement learning problems.</p>



<p><strong>The Model Architecture of ML</strong><strong><sup>3</sup></strong></p>



<p>Learning a loss function is framed as a bi-level optimization problem, i.e., it contains two optimization loops, inner and outer. The inner loop trains the model, or <em>optimizee</em>, with gradient descent using the learned meta-loss function; the outer loop optimizes the meta-loss function itself by minimizing the task loss, i.e., the regression, classification or reinforcement learning loss.</p>



<p>The process involves a function <em>f</em>, parameterized by <em>θ</em>, that takes an input <em>x</em> and outputs <em>y</em>. The framework also learns a meta-loss network <em>M</em>, parameterized by <em>Φ</em>, which takes the input and output of <em>f</em> together with task-specific information <em>g</em> (for example, the ground-truth label in regression or classification, the final position in MBIRL, or the sampled reward in model-free reinforcement learning) and outputs the meta-loss <em>L</em>, parameterized by both <em>Φ</em> and <em>θ</em>.</p>



<p>So, to update the function <em>f</em>, compute the gradient of the meta-loss <em>L</em> with respect to <em>θ</em> and update <em>θ</em> by gradient descent on the learned loss function.</p>



<p>Now, to update the loss network <em>M</em>, formulate a task-specific loss that compares the output of the currently optimized <em>f</em> with the target information. Since <em>f</em> is updated with <em>L</em>, the task loss is also a function of <em>Φ</em>, so performing a gradient update on <em>Φ</em> optimizes <em>M</em>. This finally forms a fully differentiable loss-learning framework for training.</p>



<p>To use the learned loss at test time, directly update <em>f</em> by taking the gradient of the learned loss <em>L</em> with respect to the parameters of <em>f</em>.</p>
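<p>The loop described above can be sketched in a few lines of plain Python. This is an illustrative toy, not the authors' code: the optimizee is a one-parameter model <em>f(x) = θx</em>, the learned loss is a single weighted squared-error term with meta-parameter <em>φ</em>, and the outer gradient is approximated by finite differences rather than backpropagation.</p>

```python
# Minimal 1-D sketch of the ML^3 bi-level loop (illustrative toy, not the
# authors' implementation). Optimizee: f(x) = theta * x. Learned loss:
# L(yhat, y) = phi * (yhat - y)^2, with phi the meta-parameter.

def inner_step(theta, phi, x, y, alpha=0.1):
    """Inner loop: one gradient step on theta using the learned loss."""
    grad = phi * 2.0 * (theta * x - y) * x   # dL/dtheta
    return theta - alpha * grad

def task_loss(theta, x, y):
    """Outer (task) loss: plain squared error on the target."""
    return (theta * x - y) ** 2

def outer_step(theta, phi, x, y, beta=0.05, eps=1e-4):
    """Outer loop: nudge phi so that one inner step lowers the task loss.
    The real framework backpropagates here; we use finite differences."""
    hi = task_loss(inner_step(theta, phi + eps, x, y), x, y)
    lo = task_loss(inner_step(theta, phi - eps, x, y), x, y)
    return phi - beta * (hi - lo) / (2 * eps)

theta, phi = 0.0, 0.5      # optimizee and meta-loss parameters
x, y = 1.0, 2.0            # one regression datum; the target theta is 2
for _ in range(200):
    phi = outer_step(theta, phi, x, y)
    theta = inner_step(theta, phi, x, y)
print(round(theta, 2))     # theta converges to 2.0
```

<p>Even in this toy, the outer loop learns to increase <em>φ</em>, making the inner updates more effective, which mirrors the division of labour between the two loops described above.</p>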



<p><strong>Applications of ML</strong><strong><sup>3</sup></strong></p>



<ol class="wp-block-list"><li>Regression problems.</li><li>Classification problems.</li><li>Shaping the loss during training, e.g., convexifying the loss or adding an exploration signal; ML<sup>3</sup>&nbsp;makes it possible to add such extra information during meta-training.</li><li>Model-based reinforcement learning.</li><li>Model-free reinforcement learning.</li></ol>



<p><strong>Requirements &amp; Installation</strong></p>



<ol class="wp-block-list"><li>Python 3.7</li><li>Clone the GitHub repository via <em>git</em>.</li><li>Install all the dependencies of ML<sup>3</sup>.</li></ol>



<p><strong>Paper Experiment Demos</strong></p>



<p>This section contains different experiments mentioned in the research paper.</p>



<p><strong>A. Loss Learning for Regression</strong></p>



<ol class="wp-block-list"><li>Run the sine-function regression experiment.</li></ol>



<ol class="wp-block-list" start="2"><li>Now, visualize the results with the following steps:</li></ol>



<p>2.1 Import the required libraries, packages and modules, and specify the path to the data saved during meta-training.</p>



<p>2.2 Load the data saved during the experiment.</p>



<p>2.3 Visualize the performance of the meta loss when used to optimize the meta training tasks, as a function of (outer) meta training iterations.</p>



<p>2.4 Evaluate the learned meta-loss networks on test tasks: plot the performance of the final meta-loss network when used to optimize new test tasks at meta-test time. Here the x-axis represents the number of gradient descent steps.</p>



<p><strong>C. Learning with extra information at the meta-train time</strong></p>



<p>This demo shows how extra information can be added during meta-training in order to shape the loss function. For the experiment we again take the sine-function example. The script requires two arguments: the first is train\test, and the second indicates whether to use extra information via True\False (with\without extra info).</p>



<ol class="wp-block-list"><li>For training, run the script with the first argument set to <em>train</em>.</li><li>To test the loss learned with extra information, run the script with <em>test</em> and <em>True</em>.</li></ol>



<ol class="wp-block-list" start="3"><li>For comparison, repeat the above two steps with the second argument set to <em>False</em>.</li><li>Compare the results via visualization.</li></ol>
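<p>Concretely, extra information here means giving the learned loss an additional input at meta-train time, for example a shaping or goal signal <em>g</em>. A minimal illustrative sketch (the function, fields and weights are invented, not the paper's actual loss network):</p>

```python
# Illustrative only: a learned loss that, during meta-training, also sees
# extra task information g (e.g. a shaping/goal signal) alongside the
# prediction yhat and target y. phi holds the learnable loss parameters.
def learned_loss(yhat, y, g, phi):
    data_term = phi[0] * (yhat - y) ** 2      # ordinary fit to the target
    shaping_term = phi[1] * (yhat - g) ** 2   # shaped by the extra signal g
    return data_term + shaping_term

# 1*(1-2)^2 + 0.5*(1-1.5)^2 = 1.125
print(learned_loss(1.0, 2.0, 1.5, (1.0, 0.5)))
```

<p>At meta-test time the extra signal is no longer needed: the optimizee is trained with the learned loss alone, which retains whatever structure the extra term imposed on <em>φ</em> during meta-training.</p>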



<p>Similarly, the experiment for meta-learning the loss with an additional goal in the mountain-car environment can be run.</p>



<p><strong>EndNotes</strong></p>



<p>In this write-up we have given an overview of Meta Learning via Learned Loss (ML<sup>3</sup>), a gradient-based bi-level optimization algorithm capable of learning any parametric loss function, as long as its output is differentiable with respect to its parameters. These learned loss functions can be used to efficiently optimize models for new tasks.</p>



<p><strong>Note :</strong>&nbsp;All the figures/images except the output of the code are taken from official sources of ML<sup>3</sup>.</p>



<ul class="wp-block-list"><li><strong>Colab Notebook ML<sup>3</sup> Demo</strong></li></ul>



<p>Official Code, Documentation &amp; Tutorial are available at:</p>



<ul class="wp-block-list"><li>Github </li><li>Website </li><li>Research Paper</li></ul>



<p>The post <a href="https://www.aiuniverse.xyz/what-is-meta-learning-via-learned-losses-with-python-code/">What Is Meta-Learning via Learned Losses (with Python Code)</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-is-meta-learning-via-learned-losses-with-python-code/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>6 Advantages of Microservices</title>
		<link>https://www.aiuniverse.xyz/6-advantages-of-microservices/</link>
					<comments>https://www.aiuniverse.xyz/6-advantages-of-microservices/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 16 Oct 2020 07:08:46 +0000</pubDate>
				<category><![CDATA[Microservices]]></category>
		<category><![CDATA[Code]]></category>
		<category><![CDATA[DISTRIBUTED APPLICATIONS]]></category>
		<category><![CDATA[modular]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12272</guid>

					<description><![CDATA[<p>Source: devops.com Microservices have recently gained in popularity, but you may be unsure whether this architecture is right for your environment. What’s great is microservices are not <a class="read-more-link" href="https://www.aiuniverse.xyz/6-advantages-of-microservices/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/6-advantages-of-microservices/">6 Advantages of Microservices</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: devops.com</p>



<p>Microservices have recently gained in popularity, but you may be unsure whether this architecture is right for your environment. What’s great is microservices are not necessarily a new beast, as the concepts behind them have been a solid part of software development for decades. Topics such as modular programming, separation of concerns and service-oriented architecture (SOA) all align with the objectives of a microservices architecture. In fact, many development teams have adopted microservices without necessarily calling them that—event-driven architectures are one example.</p>



<p>Before discussing the advantages of microservices, let’s make sure we’re aligned on a definition and the key principles:</p>



<ul class="wp-block-list"><li>Microservices are a set of software applications that work together, each designed with a limited functional scope.</li><li>They work with each other to form a larger solution.</li><li>Each microservice has minimal capabilities for the sake of creating a highly modularized overall architecture.</li></ul>



<p>Does this sound like an architecture your development team has been utilizing? If not, there are several advantages for use within the enterprise, especially as businesses build more complex solutions for customers.</p>



<h3 class="wp-block-heading">Easier to Build, Easier to Enhance</h3>



<p>Microservices are “micro,” requiring much less code than their monolithic application counterparts. The total amount of code may be comparable to a monolithic application, but it’s the physical separation of code that draws cleaner, more distinct lines between different functions in microservices. They also excel in experimentation and testing because the focus is on one small feature or capability, making incremental code updates simpler. As a result, companies increase agility, delivering higher-quality systems with less complex code, lower testing effort, easier unit testing and reduced risk of problems.</p>



<p>Microservices also free users from concerns of how one task will affect the next. When coupled with a messaging system such as Apache Kafka, they can actually simplify the development process. Each microservice can write to the messaging system in a standardized format the next can understand—no strict messaging format required. By opening the architecture to a system of smaller applications you can also base each microservice on almost any programming language. This grants you freedom in a few ways because:</p>



<ul class="wp-block-list"><li>A variety of development teams can work together on a single microservice architecture without commonality on technologies.</li><li>Each team can use a software stack they choose for their specific skills.</li><li>You can incrementally incorporate or experiment with new technologies.</li></ul>
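<p>A toy sketch of this pattern, with an in-memory queue standing in for a broker such as Kafka and JSON as the agreed message format (the service names and message fields are invented for illustration):</p>

```python
import json
from queue import Queue

# Toy stand-in for a message broker topic such as one in Kafka.
topic = Queue()

def billing_service(raw_orders):
    """Microservice 1: publishes orders in the agreed JSON format."""
    for order in raw_orders:
        topic.put(json.dumps({"order_id": order["id"], "amount": order["amount"]}))

def reporting_service():
    """Microservice 2: consumes the agreed JSON format and totals amounts.
    It never needs to know what language or stack produced the messages."""
    total = 0
    while not topic.empty():
        total += json.loads(topic.get())["amount"]
    return total

billing_service([{"id": 1, "amount": 10}, {"id": 2, "amount": 5}])
print(reporting_service())  # prints 15
```

<p>Because the only contract between the two services is the message format, either side could be rewritten in a different language or updated independently without the other noticing.</p>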



<p>Gone are the days of worrying about updates impacting your system or waiting to update all your microservices at once, too. As long as you adhere to the simple input and output messaging formats, you can update a single microservice at your leisure without having to shut down the entire system.</p>



<h3 class="wp-block-heading">Deploy With No Hesitation</h3>



<p>Microservices are easier to deploy than monolithic applications for the same reason they’re easier to build and enhance: they’re small and modular. Consider the various dependencies between development and production environments like OS or the amount of random-access memory (RAM) and you begin to understand how complex deployment within a tech stack can be.</p>



<p>By reducing dependencies to a smaller scope, thus decreasing the potential for dependency conflicts, microservices become advantageous for deploying in a container with other virtualized technologies. Further, there are no limits to a single deployment option. Whether on-premises, in the cloud, in serverless environments or at the edge—which will continue to gain in significance and popularity—the possibilities are nearly endless.</p>



<h3 class="wp-block-heading">Maintain, Troubleshoot and Extend With Ease</h3>



<p>Maintenance is key to making our most valuable investments last. This is especially true for large-scale software environments where monolithic architectures can prove challenging. Microservices offer a way to seamlessly enable ongoing maintenance and fault tolerance, as well as troubleshooting, as individual outputs make isolating the source of problems easier.</p>



<p>Moreover, single points of failure are not an issue because the distributed nature of microservices means any given one can be deployed redundantly, in parallel, and on a continuous basis. If any single microservice for a given task fails, the others associated will pick up the slack.</p>



<p>Another benefit is they help extend your systems to provide more outputs. Making incremental updates is relatively low-risk because microservices are highly modular and can be plugged in easily with new or updated code. Should an unforeseen error occur, you needn’t fret about shutting down the entire system and stopping the flow of data. Instead, you can shut down a given task and temporarily stop data flow, with the messages queued in the messaging layer to be read once it is restored.</p>



<h3 class="wp-block-heading">Simplify Cross-Team Coordination</h3>



<p>Many companies overcomplicate large projects, and software development is no different, with integration points often proving cumbersome headaches. In a microservice, internal workflows are as straightforward as:</p>



<ul class="wp-block-list"><li>Reading data from a source.</li><li>Performing an action on the data.</li><li>Sending output to a destination.</li></ul>



<p>Since microservices are designed to navigate small bits of data at a time from an otherwise large data set, managing and sharing output is straightforward. Teams simply coordinate what their respective microservices do and how the exchanged messages are formatted. Lightweight messaging systems are recommended for allowing communication, as they further support simplifying coordination across development teams by providing a fast and reliable means for exchanging data.</p>



<h3 class="wp-block-heading">Speed and Scalability</h3>



<p>It may seem with the many independent, moving parts in a microservices architecture that performance will take a severe hit, whereas a monolith runs as a single process. However, keep in mind there is a clearer opportunity for parallelism with microservices, so any task can be spread across multiple CPUs/cores and be run with much higher overall throughput. For example, machine learning models that require complex calculations in addition to lookups in external databases can be parallelized to process more data at once.</p>
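<p>As a minimal illustration of that parallelism (the scoring function and data here are invented placeholders, not a real model):</p>

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch: fan a stateless scoring step out across workers,
# the way mirrored microservice instances spread load across cores.
def score(record):
    return record * 2   # stand-in for a model calculation or lookup

records = list(range(8))
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(score, records))   # input order is preserved
print(results)  # [0, 2, 4, 6, 8, 10, 12, 14]
```
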



<p>The scalability of microservices is tied to this capability. As the workload grows with more and faster data, additional microservice instances can be deployed in parallel to spread the load across further hardware resources. In contrast, refactoring a monolithic application to handle more load requires significant changes and potentially creates greater risk of introducing errors.</p>



<h3 class="wp-block-heading">Streamline Real-Time Processing</h3>



<p>Demands for immediacy are only continuing to grow. As a result, businesses must increase performance and function in real-time to stay ahead. Microservices provide that competitive advantage by enabling efficient, instantaneous processing and streaming architectures.</p>



<p>Real-time processing is centered on fast data, often in the form of streaming. As data continues to flow in from a source such as an internet of things (IoT) device, applications must keep up with the input rate and operate reliably. A microservices architecture provides the parallelism and redundancy to maintain the load and respond quickly in a consistently reliable way. Additionally, as the flow increases, more microservices can be deployed to accommodate the growth. This structure also allows easy identification of bottlenecks that can be addressed to maintain immediate responsiveness.</p>



<p>The swath of consumer data available to enterprise business today is at a scale never seen before. Thinking strategically about how to handle and process it for the benefit of the business—and the consumer—is an important priority today. Microservices are just one strategy to leverage on this journey and could serve as the advantageous choice for increasing performance, speed and ease on the workloads of many.</p>
<p>The post <a href="https://www.aiuniverse.xyz/6-advantages-of-microservices/">6 Advantages of Microservices</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/6-advantages-of-microservices/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine learning tool trains on old code to spot bugs in new code</title>
		<link>https://www.aiuniverse.xyz/machine-learning-tool-trains-on-old-code-to-spot-bugs-in-new-code/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-tool-trains-on-old-code-to-spot-bugs-in-new-code/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 20 May 2020 07:12:16 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[bugs]]></category>
		<category><![CDATA[Code]]></category>
		<category><![CDATA[GitHub]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Tools]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8906</guid>

					<description><![CDATA[<p>Source: techrepublic.com Altran has released a new tool that uses artificial intelligence (AI) to help software engineers spot bugs during the coding process instead of at the end. Available <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-tool-trains-on-old-code-to-spot-bugs-in-new-code/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-tool-trains-on-old-code-to-spot-bugs-in-new-code/">Machine learning tool trains on old code to spot bugs in new code</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: techrepublic.com</p>



<p>Altran has released a new tool that uses artificial intelligence (AI) to help software engineers spot bugs during the coding process instead of at the end.</p>



<p>Available on GitHub, Code Defect AI uses machine learning (ML) to analyze existing code, spot potential problems in new code, and suggest tests to diagnose and fix the errors.</p>



<p>Walid Negm, group chief innovation officer at Altran, said that this new tool will help developers release quality code quickly.</p>



<p>&#8220;The software release cycle needs algorithms that can help make strategic judgments, especially as code gets more complex,&#8221; he said in a press release.</p>



<p>Code Defect AI uses several ML techniques including random decision forests, support vector machines, multilayer perceptron (MLP) and logistic regression. The platform extracts, processes and labels historical data to train the algorithm and build a reliable decision model. Developers can use a confidence score from Code Defect AI that predicts whether the code is compliant or buggy.</p>
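<p>The idea of a confidence score can be sketched with a tiny hand-rolled logistic regression. This is purely illustrative; the real platform combines several trained models and far richer features, and the data below is made up:</p>

```python
import math

# Illustrative features per commit: [lines changed (hundreds), files touched];
# label: 1 = commit later found buggy, 0 = clean. Data is invented.
X = [[2.0, 5], [1.5, 4], [0.1, 1], [0.05, 1]]
y = [1, 1, 0, 0]

w, b = [0.0, 0.0], 0.0

def confidence(x):
    """Probability that a commit is buggy (the 'confidence score')."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Plain gradient descent on the logistic (cross-entropy) loss.
for _ in range(2000):
    for xi, yi in zip(X, y):
        p = confidence(xi)
        for j in range(2):
            w[j] -= 0.1 * (p - yi) * xi[j]
        b -= 0.1 * (p - yi)

# A large new commit scores high, a small one low.
print(confidence([1.8, 4]) > 0.5, confidence([0.08, 1]) < 0.5)
```
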



<p>Here is how Code Defect AI works:</p>



<ol class="wp-block-list"><li>For an open source GitHub project, historical data is collected using RESTful interfaces and the Git CLI. This data includes the complete commit history and the complete bug history.</li><li>Preprocessing techniques such as feature identification, label encoding, one-hot encoding, data scaling and normalization are applied to the collected historical commit data.</li><li>Labelling is performed on the preprocessed data. The labelling process involves understanding the pattern in which fix commits (where a bug has been closed) are tagged for each of the closed issues. After the fix commits are collected, the commits that introduced the bugs are identified by backtracking through the historical changes for each file in a fix commit.</li><li>If a data set contains very little bug data compared with clean records, synthetic data is also generated to avoid bias toward the majority class.</li><li>Multiple modelling algorithms are trained on the prepared data.</li><li>Once a model achieves acceptable precision and recall, it is deployed for prediction on new commits.</li></ol>
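<p>Step 3, the labelling, can be sketched as a heavily simplified SZZ-style walk over a made-up commit history (illustrative only, not Altran's actual implementation):</p>

```python
# Toy commit history, oldest first: (sha, message, files_changed).
commits = [
    ("a1", "add parser", ["parser.py"]),
    ("b2", "add cli", ["cli.py"]),
    ("c3", "fix #42 crash", ["parser.py"]),
]

def label_bug_introducing(history):
    """For each fix commit, backtrack per changed file to the last earlier
    commit that touched it and label that commit as bug-introducing."""
    buggy = set()
    for i, (_, message, files) in enumerate(history):
        if message.startswith("fix"):
            for f in files:
                for sha, _, earlier_files in reversed(history[:i]):
                    if f in earlier_files:
                        buggy.add(sha)
                        break
    return buggy

print(sorted(label_bug_introducing(commits)))  # ['a1']
```

<p>Here the fix commit <em>c3</em> touches <em>parser.py</em>, so the walk backtracks past <em>b2</em> (which changed a different file) and labels <em>a1</em> as the bug-introducing commit, producing one training label for the model.</p>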



<p>Code Defect AI supports integration with third-party analysis tools and can help identify bugs in a given program code. Also, the Code Defect AI tool allows developers to assess which features in the code should take higher priority in terms of bug fixes.</p>



<p>&#8220;Microsoft and Altran have been working together to improve the software development cycle, and Code Defect AI, powered by Microsoft Azure, is an innovative tool that can help software developers through the use of machine learning,&#8221; said David Carmona, general manager of AI marketing at Microsoft, in a press release.</p>



<p>Code Defect AI can be hosted on premises as well as on cloud computing platforms such as Microsoft Azure. The solution can be integrated with other source-code management tools as needed.</p>



<h3 class="wp-block-heading">AI employee joins the dev team</h3>



<p>In a new report about artificial intelligence and software development, Deloitte predicts that more and more companies will use AI-assisted coding tools. From January 2018 to September 2019, software vendors launched dozens of AI-powered software development tools, and startups working in this space raised $704 million over a similar timeframe.</p>



<p>The biggest benefit from these platforms is efficiency, according to Deloitte analysts David Schatsky and Sourabh Bumb, the authors of &#8220;AI is helping to make better software&#8221;:<br>&#8220;The benefits of AI-assisted coding are numerous. However, the principal benefit for companies is efficiency. Many of the new AI-powered tools work in a similar way to spell- and grammar-checkers, enabling coders to reduce the number of keystrokes they need to type by around 50%. They can also spot bugs while code is being written, and they can automate as many as half of the tests needed to confirm the quality of software.&#8221;<br>This capability is even more important as companies continue to rely on open-source code.<br>According to the Deloitte report, these tools can speed up the coding process significantly by &#8220;reducing the number of keystrokes developers need to type by half, catching bugs even prior to code review or testing, and automatically generating half of the tests needed for quality assurance.&#8221;</p>



<p>According to the report, these tools are best suited for these elements of the software development process:</p>



<ol class="wp-block-list"><li>Project requirements</li><li>Coding, review and bug detection, and resolution</li><li>More thorough testing</li><li>Deployment</li><li>Project management</li></ol>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-tool-trains-on-old-code-to-spot-bugs-in-new-code/">Machine learning tool trains on old code to spot bugs in new code</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-tool-trains-on-old-code-to-spot-bugs-in-new-code/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>When Computers Create Code, Who Owns It Is a Question Worth Billions</title>
		<link>https://www.aiuniverse.xyz/when-computers-create-code-who-owns-it-is-a-question-worth-billions/</link>
					<comments>https://www.aiuniverse.xyz/when-computers-create-code-who-owns-it-is-a-question-worth-billions/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 25 Jul 2019 13:24:19 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Automation]]></category>
		<category><![CDATA[Code]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[Create]]></category>
		<category><![CDATA[DeepDream]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[machine]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4147</guid>

					<description><![CDATA[<p>Source: huffingtonpost.in NEW YORK—Google’s DeepDream has generated artwork; the What-If Machine created the characters and story for a West End Musical; music composed by programs was performed in the London <a class="read-more-link" href="https://www.aiuniverse.xyz/when-computers-create-code-who-owns-it-is-a-question-worth-billions/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/when-computers-create-code-who-owns-it-is-a-question-worth-billions/">When Computers Create Code, Who Owns It Is a Question Worth Billions</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: huffingtonpost.in</p>



<p>NEW YORK—Google’s DeepDream has generated artwork; the What-If Machine created the characters and story for a West End musical; music composed by programs was performed by the London Symphony Orchestra in 2012. We talk about jobs that may be lost to automation, but scant attention is paid to who owns the intellectual property created by machines.</p>



<p>Artificial intelligence technology no longer allows us the luxury to vacillate. With the advent of AI, software and computers will be creating a number of programs and original works. </p>



<p>But in many parts of the world, IP laws have not kept pace with the technology. US copyright and patent statutes, for instance, have traditionally required human authors or inventors in order for works to be protected under copyright or patent law.</p>



<p>Part of the problem may be historical: copyright laws, for instance, were crafted to address printing-press technology (which enabled human beings to copy at little to no cost), not artificial intelligence and the ability of software programs to generate more programs.</p>



<p>The latter point has a real-world impact: if the IP for programs generated by code belongs to the owner of the generating code, it could affect the Indian IT industry and cost millions their jobs.</p>



<p>Other countries, such as the UK and the member states of the European Union, have modified their laws, or are considering proposals to do so, in order to address the advances and breakthroughs in robotics technology. The UK, for instance, has done away with the requirement of a human author and is conceivably open to awarding copyright protection to bots created by bots.</p>



<p>The UK, when introducing software programs into the Copyright, Designs and Patents Act in 1988, specifically did away with the requirement of a “human author” (when recognizing which works would be eligible for copyright).</p>



<p>In 2017, the Committee on Legal Affairs of the European Parliament released a study on AI and asked for the elaboration of criteria for “own intellectual” creation for copyrightable works produced by computers or robots; thus, paving the way for copyright protection for AI. </p>



<p><strong>Who owns the AI?</strong></p>



<p>The problem of who will be the “owner” of the programs created by programs poses a thorny issue. Some folks point to programs such as Google’s DeepDream art, and argue that the program was a “tool” or paintbrush that enabled human authors’ vision to be manifested and so, the human author would own the IP. Others argue that the programmer who wrote the software or algorithm should be the author and copyright owner.</p>



<p>The law thus far has been silent on who owns the IP when the machine output cannot be predicted by the humans involved. There is a third group which includes technology savants like Elon Musk, who argue that AI created work should be in the public domain, owned by no one. </p>



<p><strong>Will contracts supersede the law?</strong></p>



<p>While we wait for the laws on IP ownership of AI to be developed, people may, by contract, have unwittingly given up that right. Right now, most employment contracts include a “work for hire” concept. As a result, most employers that engage software programmers, writers and others as employees or contractors include a wide provision stating that all work produced during the term of employment or the contract will be owned by the employer. The salary or fees paid to create the work are considered adequate compensation, including for the transfer of ownership of IP rights.</p>



<p>On the face of it, the above may not seem worrisome or particularly important. However, it becomes significant because of the additional complication presented by the following contracting practice.&nbsp;</p>



<p><strong>Why should India’s technology industry worry</strong></p>



<p>As a corollary to the above ‘work for hire’ employment contracts, large corporate customers, particularly US clients, have consistently insisted that Indian technology corporations providing services to them transfer all the IP produced during, and in connection with, those services. Indian technology corporations, by and large, have not objected to this.</p>



<p>Indeed, for decades Indian technology corporations have transferred all IP to the client, believing the secret sauce was the know-how: the knowledge of how to implement the software programs. Hence, by inserting contractual locks or restricting US clients from hiring their software programmers, the Indian technology industry was content.</p>



<p>This is about to change. Once copyright protection is recognized for AI code, no one else will be allowed to copy that code for a long time (the life of the author plus 50 or 70 years, depending upon the jurisdiction). Similarly, if a patent is granted for code, no one else will be allowed to use that code for a long time (20 years from the date of the patent application).</p>



<p>Hence, if AI is granted formal IP protection, and the US clients own all the formal IP, they could take advantage of the protection offered by copyright and patent laws and be able to prevent Indian companies from performing the same services for other clients or corporations. In other words, if the Indian companies used the same code for other clients they could be sued for copyright or patent infringement. Especially so, because the U.S. has been comfortable recognizing patents for algorithms in connection with AI.&nbsp;</p>



<p><strong>Steps now needed</strong></p>



<p>To continue to ride the technology wave in the AI era, Indian industry will have to adapt quickly and dramatically modify its contracting and negotiating practices. Companies must negotiate hard to retain the formal IP in order to be able to continue operating their service lines in the future. The risk presented by AI is therefore twofold: the loss of US and EU jobs to computerization (estimated by an Oxford University study at 47% and 54% of US and EU workers’ jobs, respectively), with its trickle-down effect on India, and the loss of the ability to write programs or perform work for other corporations because of AI IP that Indian firms have written and handed over to their clients.</p>



<p>It is an existential moment for the industry. At risk is not only half of the 3.9 million Indians’ jobs, or 70% of the Indian IT workforce (as a result of automation), but the $155 billion industry that has been the engine of India’s economy and global image. It is not all dark: if Indian firms were to change strategy and negotiate to own the IP they create, India may well succeed in riding the technology 2.0, or AI, wave and epitomize the latter part of Hawking’s prediction:</p>



<p>“Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” – Stephen Hawking</p>



<p>Aarthi Anand is a leading technology attorney, Vice President at J.P. Morgan, New York, and was a Rhodes Scholar. The views expressed here are those of the author and do not reflect the opinion of the Bank. </p>
<p>The post <a href="https://www.aiuniverse.xyz/when-computers-create-code-who-owns-it-is-a-question-worth-billions/">When Computers Create Code, Who Owns It Is a Question Worth Billions</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/when-computers-create-code-who-owns-it-is-a-question-worth-billions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
