<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>discovery Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/discovery/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/discovery/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 03 Apr 2021 06:38:09 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>AI in Drug Discovery Starts to Live Up to the Hype</title>
		<link>https://www.aiuniverse.xyz/ai-in-drug-discovery-starts-to-live-up-to-the-hype/</link>
					<comments>https://www.aiuniverse.xyz/ai-in-drug-discovery-starts-to-live-up-to-the-hype/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 03 Apr 2021 06:38:07 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[discovery]]></category>
		<category><![CDATA[DRUG]]></category>
		<category><![CDATA[Hype]]></category>
		<category><![CDATA[Starts]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13908</guid>

					<description><![CDATA[<p>Source &#8211; https://www.genengnews.com/ The past few years have seen several flashy demonstrations of how artificial intelligence (AI) algorithms may transform biomedical research, particularly with respect to drug <a class="read-more-link" href="https://www.aiuniverse.xyz/ai-in-drug-discovery-starts-to-live-up-to-the-hype/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-in-drug-discovery-starts-to-live-up-to-the-hype/">AI in Drug Discovery Starts to Live Up to the Hype</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.genengnews.com/</p>



<p>The past few years have seen several flashy demonstrations of how artificial intelligence (AI) algorithms may transform biomedical research, particularly with respect to drug discovery. This past November, for example, Google’s AI subsidiary, DeepMind, announced that its AlphaFold program could deliver computational predictions of protein structure that approach the quality of those provided by gold-standard experimental techniques such as X-ray crystallography.<sup>1</sup></p>



<p>Such high-profile announcements have elicited justifiable excitement about the future of algorithmically guided drug development, but AI’s champions in the industry remain wary about overselling the technology’s current capabilities. “I still feel that there’s a lot of hype around it,” says Paul Nioi, PhD, senior director of research at Alnylam Pharmaceuticals. “Companies are springing up that claim to solve all the issues of drug discovery, target discovery, and development using AI. I think that’s yet to be proven.”</p>



<p>Nevertheless, a growing number of companies now recognize the value that AI—and more specifically, the subset of algorithmic techniques known as “machine learning”—can deliver at various stages in the drug discovery process. “There’s an increase in investment across all of the companies that I’ve talked to,” relates Peter Henstock, PhD, machine learning and AI lead at Pfizer.</p>



<p>“The capabilities are still being sorted out,” Henstock adds. “In most cases, we’re still negotiating how to go about using it effectively.” But the opportunities are clear, and machine learning–based techniques are already finding a place in early-stage target discovery and drug development workflows—and offering a glimpse of the gains in efficiency and success rates that the future could bring.</p>



<h4 class="wp-block-heading"><strong>A deep dive into the literature</strong></h4>



<p>The vast majority of biomedical data are imprisoned in unstructured formats that are, in their raw form, inaccessible to computational analysis. Data are trapped within publications, patents, clinical records, and other documents that are exclusively targeted at human readers. Natural language processing (NLP) algorithms offer a powerful solution to this problem. These employ a machine learning technique known as deep learning to analyze documents and other datasets and identify biologically relevant text elements such as the names of genes, proteins, drugs, or clinical manifestations of disease.</p>



<p>NLP algorithms can rapidly comb through vast collections of data and identify previously overlooked patterns and relationships that might be relevant to a disease’s etiology and pathology. Henstock’s team used such an approach to scrutinize PubMed’s tens of millions of abstracts. “Just by text mining,” he points out, “we could basically take some genes and figure out what diseases might be related to them.”</p>
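<p>The co-occurrence idea behind such text mining can be sketched in a few lines. The snippet below is purely illustrative (toy abstracts and gene/disease lists, not Pfizer's pipeline; real systems use trained NLP entity recognizers rather than keyword matching):</p>

```python
from collections import Counter
from itertools import product

# Toy abstracts standing in for PubMed records (illustrative only).
abstracts = [
    "Mutations in PCSK9 are associated with familial hypercholesterolemia.",
    "PCSK9 inhibition lowers LDL in patients with hypercholesterolemia.",
    "TTR amyloidosis is driven by transthyretin misfolding.",
]

genes = {"PCSK9", "TTR"}
diseases = {"hypercholesterolemia", "amyloidosis"}

# Count gene-disease co-mentions within the same abstract.
pairs = Counter()
for text in abstracts:
    tokens = {t.strip(".,").lower() for t in text.split()}
    hit_genes = {g for g in genes if g.lower() in tokens}
    hit_diseases = {d for d in diseases if d in tokens}
    for g, d in product(hit_genes, hit_diseases):
        pairs[(g, d)] += 1

print(pairs.most_common())
```

<p>Ranking pairs by co-mention frequency is the simplest version of "figuring out what diseases might be related to" a gene; production systems add entity normalization, sentence-level context, and statistical significance tests on top.</p>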



<p>The Pfizer group subsequently incorporated other dimensions into its analysis, looking at publication patterns to identify “trending” areas of disease research where rapid scientific progress might offer a solid foundation for rapidly shepherding a drug development program into the clinic. According to Henstock, this approach achieved a greater than 70% success rate in identifying patterns of heightened disease research activity that ultimately gave rise to clinical trials.</p>



<p>The value of NLP-mined data can be greatly amplified by threading together structured data from multiple sources into a densely interconnected “knowledge graph.” “We have huge amounts of data in different areas, like omics—chemical-related, drug-related, and disease-related data,” says Natnael Hamda, an AI specialist at Astellas Pharma. “Ingesting that data and integrating those complex networks of biological or chemical information into one collection is tricky.”</p>
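<p>The "threading together" step can be illustrated with a toy graph. The triples, entity names, and relation labels below are hypothetical; production knowledge graphs hold millions of typed edges and live in dedicated graph stores:</p>

```python
# Facts from three hypothetical sources, each as (subject, relation, object).
omics_facts = [("GeneX", "implicated_in", "DiseaseY")]
chem_facts = [("DrugA", "inhibits", "GeneX")]
clinical_facts = [("DiseaseY", "treated_off_label_by", "DrugB")]

graph = {}  # subject -> list of (relation, object) edges
for source in (omics_facts, chem_facts, clinical_facts):
    for subj, rel, obj in source:
        graph.setdefault(subj, []).append((rel, obj))
        graph.setdefault(obj, []).append((f"inverse_{rel}", subj))

def two_hop(start):
    """Walk two edges out from a node, surfacing cross-source paths."""
    paths = []
    for rel1, mid in graph.get(start, []):
        for rel2, end in graph.get(mid, []):
            if end != start:
                paths.append((start, rel1, mid, rel2, end))
    return paths

# DrugA inhibits GeneX, and GeneX is implicated in DiseaseY:
# a hypothesis no single source states outright.
print(two_hop("DrugA"))
```

<p>The payoff is exactly the "richer biological story" described above: a two-hop walk links a chemistry fact to an omics fact, suggesting a drug-disease connection that neither dataset contains on its own.</p>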



<p>The rewards, however, can be considerable, as the resulting interconnections tell a much richer biological story than any one dataset on its own. For example, these interconnections can enable more sophisticated predictions of how to develop therapeutic agents that safely and effectively target a particular medical condition.</p>



<p>AI-focused startup Healx relies heavily on knowledge graphs assembled from a diverse range of public and proprietary sources to gain new insights into rare genetic diseases. The company’s chief scientific officer, Neil Thompson, PhD, notes that this category encompasses roughly 7,000 diseases affecting on the order of 400 million patients in total—but treatments are available for only 5% of these disorders.</p>



<p>“Our main focus is on finding new uses for old drugs,” says Thompson. “We are working with all the data on the 4,000 FDA-registered drugs, and we are building data on drugs registered elsewhere in the world.” This information is complemented by data provided by the various patient organizations with which Healx collaborates.</p>



<p>According to Thompson, this approach has yielded an excellent success rate, with multiple disease programs yielding candidates that demonstrated efficacy in animal models. One of these programs, a treatment for the intellectual developmental disorder fragile X syndrome, is on track to enter clinical trials later this year.</p>



<h4 class="wp-block-heading"><strong>Getting a clearer picture</strong></h4>



<p>Machine learning is also effective for extracting interesting and informative features from image data. Alnylam has been using computer vision algorithms to profile vast repositories of magnetic resonance imaging (MRI) data collected from various parts of the body in tens of thousands of patients with various medical conditions. “We’re training a model based on people that we know have a certain disease or don’t have a certain disease,” says Nioi, “and we’re asking the model to differentiate those two categories based on features it can pick up.”</p>



<p>One of the lead indications for this approach is nonalcoholic steatohepatitis (NASH), a hard-to-treat condition in which fat accumulation in the liver contributes to inflammation and scarring—and ultimately, cirrhosis. NASH is a chronic condition that gradually worsens over time, and MRI analysis could reveal early indicators of onset and progression as well as biomarkers that demonstrate the extent to which a therapy is preventing the disease from worsening.</p>



<p>“The idea is to put the disease on a spectrum, so that you can look for these different points of intervention,” explains Nioi. He notes that this approach has already led to some promising drug targets, and that his company is now looking to apply a similar approach to neurological disease.</p>



<p>Several other companies are using machine learning–based image analysis to go deeper into their analyses of disease pathology. For example, cancer immunotherapies can vary widely in their efficacy because of differences in the structure and cellular composition of the “microenvironment” within a tumor, including the strength of the local immune response.</p>



<p>“We can now apply computer vision to identify the spatial condition of the tumor microenvironment,” asserts Hamda. “We are getting huge amounts of omics data at the cellular level, and we can characterize the microenvironment of a tumor cell by applying deep neural networks.” This kind of approach is proving valuable in analyses of the extensive (and publicly available) datasets that are being generated by the Human Cell Atlas Project, The Cancer Genome Atlas Project, and other initiatives.</p>



<p>Progress has been slower in terms of applying AI for the actual design of drugs themselves, but machine learning continues to be explored as a means of improving the performance of existing drug candidates. “Small-molecule drugs are a multivariate optimization problem, and humans are not very good at doing that,” observes Henstock. His team is working with self-training algorithms that can tweak such compounds based on a variety of physicochemical criteria, and he believes a similar approach should also be suitable for antibody drugs—a class of proteins for which the structural and biochemical features are particularly well defined.</p>
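<p>The multivariate-optimization framing can be made concrete with a deliberately simplified sketch: a random search over three assumed physicochemical descriptors, scored by a hand-made desirability function. The target values and weights are illustrative only, loosely inspired by Lipinski-style heuristics, and not any company's actual criteria or algorithm:</p>

```python
import random

random.seed(0)

def desirability(logp, mw, psa):
    """Toy multivariate score over (logP, molecular weight, polar surface area).

    Penalizes distance from an assumed desirable window; higher is better.
    """
    score = 0.0
    score -= abs(logp - 2.5)          # aim near logP ~ 2.5
    score -= abs(mw - 350.0) / 100.0  # aim near 350 Da
    score -= abs(psa - 80.0) / 40.0   # aim near 80 A^2
    return score

# Random search stands in for the self-training optimizers described above:
# sample candidate property profiles and keep the best-scoring one.
best = max(
    ((random.uniform(-1, 6), random.uniform(150, 600), random.uniform(20, 180))
     for _ in range(5000)),
    key=lambda c: desirability(*c),
)
print(best, desirability(*best))
```

<p>Balancing several competing penalties at once is what makes this hard for humans and tractable for algorithms; real systems replace the random sampler with learned generative or gradient-based proposals.</p>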



<p>Alnylam is also using machine learning to fine-tune its therapeutics, which are based on chemically modified RNA sequences that directly interfere with the expression of disease-related genes. “We’re thinking about designing molecules that optimally knock down a target—[and] only the target that you’re interested in—without broad effects on the transcriptome,” says Nioi.</p>



<h4 class="wp-block-heading"><strong>AI on the rise</strong></h4>



<p>After 20 years at Pfizer, Henstock is seeing unprecedented enthusiasm around making AI a core part of the company’s processes. “I ran an AI course three years ago … that seemed to change a lot of the conversations,” he recalls. “We had executive meetings that actually put themselves on hold so they could attend this session.”</p>



<p>A similar transition is playing out elsewhere. At Astellas, for example, investments in AI are extensive. “There are so many big AI or machine learning initiatives here,” says Hamda, who began working with Astellas as a research fellow in 2020.</p>



<p>It takes skilled experts who know their way around the dense forest of available algorithms to make the most of these capabilities. Hamda notes that choosing the wrong computational approach for a given research question can be disastrous in terms of wasted time and resources.</p>



<p>“I fear that some companies are developing ‘black box’ software,” he continues. “[It’s possible that] people are just entering input and collecting output without knowing what’s going on inside.” He emphasizes the importance of planning and building explainable and reproducible computational workflows that allow the users to trust the quality of the resulting models.</p>



<p>Although many machine learning techniques are generalizable across fields, effective implementation in the context of drug discovery also requires deep expertise in the relevant scientific disciplines. Henstock recalls starting at Pfizer as an engineer with a doctorate in AI.</p>



<p>“I couldn’t understand the chemists or the biologists—they speak their own language and have their own concepts,” he says. “And if you can’t understand what they’re trying to get at, you can’t really do your job very well.” This disconnect motivated him to return to school for a biology degree.</p>



<p>Building out such capacity is costly and labor intensive, and some companies are opting for a hybrid model in which some AI-oriented projects are contracted out to smaller startups for which computational biology is the primary focus. For example, Alnylam has partnered with a company called Paradigm4 for key aspects of its machine learning–guided drug development efforts. “It’s really down to resources,” declares Nioi. “There are people that do this for a living and spend their entire time focused on one problem, whereas we’re juggling many things at the same time.”</p>



<p>But in the long run, the gains from bringing AI on board could be huge. In a 2020 article, Henstock cited projections indicating that the pharmaceutical industry could boost earnings by more than 45% by making strong investments in AI.<sup>2</sup>&nbsp;“This means making some interesting tradeoffs in how we do science, how we approach problems, and how we approach our processes,” he explains. “But it’s kind of critical because you can do better experiments with greater richness.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-in-drug-discovery-starts-to-live-up-to-the-hype/">AI in Drug Discovery Starts to Live Up to the Hype</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ai-in-drug-discovery-starts-to-live-up-to-the-hype/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>HOW CAN MACHINE LEARNING ACCELERATE THE PACE OF DRUG DISCOVERY?</title>
		<link>https://www.aiuniverse.xyz/how-can-machine-learning-accelerate-the-pace-of-drug-discovery/</link>
					<comments>https://www.aiuniverse.xyz/how-can-machine-learning-accelerate-the-pace-of-drug-discovery/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 17 Mar 2021 06:16:43 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[accelerate]]></category>
		<category><![CDATA[discovery]]></category>
		<category><![CDATA[DRUG]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[technique]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13553</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ The new ML technique quickly calculates the binding affinities between drug candidates and their targets. Artificial intelligence and machine learning techniques are already proving effective in <a class="read-more-link" href="https://www.aiuniverse.xyz/how-can-machine-learning-accelerate-the-pace-of-drug-discovery/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-can-machine-learning-accelerate-the-pace-of-drug-discovery/">HOW CAN MACHINE LEARNING ACCELERATE THE PACE OF DRUG DISCOVERY?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h2 class="wp-block-heading"><strong>The new ML technique quickly calculates the binding affinities between drug candidates and their targets.</strong></h2>



<p>Artificial intelligence and machine learning techniques are already proving effective in pharmaceutical procedures. Drug discovery is one of the crucial procedures to find new candidate medications in the field of medicine, biotechnology and pharmacology. According to the U.S. FDA, there are five steps for the development of a new drug. These include discovery and development, preclinical research, clinical research, FDA review, and FDA post-market safety monitoring. Since drug discovery requires huge amounts of data and research, many pharmaceutical companies are embracing AI and machine learning to accelerate the pace of drug discovery.</p>



<p>AI and ML techniques can also lower the costs of drug development. Drug discovery is a data-driven process. It involves a voluminous amount of data such as high-resolution medical images, genomic profiles, metabolites, molecular structures, and biological information. Machine learning and deep learning-fuelled artificial intelligence can correlate, integrate, and connect existing data more rapidly to help discover patterns in the data pools.</p>



<p>As drugs can only work based on their stickiness to their target proteins in the body, analyzing that stickiness is a key hurdle in the drug discovery and screening process. New research combining chemistry and machine learning could lower that hurdle. The new technique, called DeepBAR, can quickly calculate the binding affinities between drug candidates and their targets. DeepBAR combines traditional chemistry calculations with recent advances in machine learning. It computes binding free energy exactly, but it requires just a fraction of the calculations demanded by previous methods.</p>



<p>The “BAR” in DeepBAR stands for “Bennett acceptance ratio”. It is a decades-old algorithm used in exact calculations of binding free energy. According to the researchers, DeepBAR could one day quicken the pace of drug discovery and protein engineering.</p>
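<p>The classic (non-deep) Bennett acceptance ratio can be sketched compactly. The code below is a minimal illustration of the BAR self-consistency condition, assuming equal numbers of forward and reverse work samples and β = 1; it is not DeepBAR itself:</p>

```python
import math
import random

def bar_delta_f(w_forward, w_reverse, beta=1.0, tol=1e-6):
    """Solve the Bennett acceptance ratio self-consistency by bisection.

    With equal sample sizes, the free-energy difference dF satisfies
        sum_i fermi(beta * (wF_i - dF)) = sum_j fermi(beta * (wR_j + dF)),
    where fermi(x) = 1 / (1 + exp(x)).
    """
    fermi = lambda x: 1.0 / (1.0 + math.exp(x))

    def residual(df):
        lhs = sum(fermi(beta * (w - df)) for w in w_forward)
        rhs = sum(fermi(beta * (w + df)) for w in w_reverse)
        return lhs - rhs  # increases monotonically with df

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Synthetic Gaussian work values consistent with the Crooks relation
# (true free-energy difference 2.0, beta = 1).
random.seed(42)
d_f, sigma = 2.0, 1.0
wf = [random.gauss(d_f + 0.5 * sigma**2, sigma) for _ in range(5000)]
wr = [random.gauss(-d_f + 0.5 * sigma**2, sigma) for _ in range(5000)]
print(round(bar_delta_f(wf, wr), 2))  # recovers a value near 2.0
```

<p>On synthetic work distributions that satisfy the Crooks fluctuation theorem, the estimator recovers the free-energy difference used to generate them; DeepBAR's contribution is replacing the many intermediate sampling states this approach normally needs with deep generative reference states.</p>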



<p>The research appeared in the Journal of Physical Chemistry Letters and was led by Xinqiang Ding, a postdoc in MIT’s Department of Chemistry.</p>



<p>As per the study, using the Bennett acceptance ratio typically requires knowledge of two “endpoint” states: a drug molecule bound to a protein and a drug molecule completely dissociated from it. It also requires knowledge of many intermediate states (e.g., varying levels of partial binding), all of which bog down calculation speed.</p>



<p>The new machine learning technique slashes those in-between states by implementing the Bennett acceptance ratio in machine learning frameworks called deep generative models. These models create a reference state for each endpoint, the bound state and the unbound state, according to Bin Zhang, the Pfizer-Laubach Career Development Professor in Chemistry at MIT, and a co-author of a new paper describing the technique.</p>



<p>In using deep generative models, the researchers were borrowing from the field of computer vision. Though adapting a computer vision approach to chemistry was DeepBAR’s key innovation, the crossover also raised some challenges. “These models were originally developed for 2D images,” says Xinqiang Ding. “But here we have proteins and molecules—it’s really a 3D structure. So, adapting those methods in our case was the biggest technical challenge we had to overcome.”</p>



<p>In tests using small protein-like molecules, DeepBAR calculated binding free energy nearly 50 times faster than previous methods. The researchers are now considering using it for drug screening, particularly in the context of COVID-19. “DeepBAR has the exact same accuracy as the gold standard, but it’s much faster,” says Zhang. The researchers also believe that, in addition to drug screening, DeepBAR could aid protein design and engineering, since the method can model interactions between multiple proteins. In the future, they plan to extend the technique to run calculations for large proteins, a task made feasible by recent advances in computer science.</p>



<p>The post <a href="https://www.aiuniverse.xyz/how-can-machine-learning-accelerate-the-pace-of-drug-discovery/">HOW CAN MACHINE LEARNING ACCELERATE THE PACE OF DRUG DISCOVERY?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-can-machine-learning-accelerate-the-pace-of-drug-discovery/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>IS ARTIFICIAL INTELLIGENCE CLOSE ENOUGH IN UNDERSTANDING OUR BRAIN?</title>
		<link>https://www.aiuniverse.xyz/is-artificial-intelligence-close-enough-in-understanding-our-brain/</link>
					<comments>https://www.aiuniverse.xyz/is-artificial-intelligence-close-enough-in-understanding-our-brain/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 10 Mar 2021 09:46:00 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[brain]]></category>
		<category><![CDATA[discovery]]></category>
		<category><![CDATA[ENOUGH]]></category>
		<category><![CDATA[Understanding]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13370</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ The discovery by a group of researchers reveals how AI can now read and interpret our personal choices Artificial Intelligence has been disrupting many <a class="read-more-link" href="https://www.aiuniverse.xyz/is-artificial-intelligence-close-enough-in-understanding-our-brain/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/is-artificial-intelligence-close-enough-in-understanding-our-brain/">IS ARTIFICIAL INTELLIGENCE CLOSE ENOUGH IN UNDERSTANDING OUR BRAIN?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h2 class="wp-block-heading">The discovery by a group of researchers reveals how AI can now read and interpret our personal choices</h2>



<p>Artificial Intelligence has been disrupting many industries, business processes, and our lifestyle. With artificial intelligence technology, it is now possible to augment human intelligence and use it in decision-making and customer interactions. The ongoing digital transformation has brought many cutting-edge technologies to the mainstream and stressed the significance of AI and Big Data in revolutionizing industries. The role of artificial intelligence in business has been proved to be positively redefining operations and encouraging cost-efficiency.</p>



<p>But there are still areas connected to AI that researchers are studying to enhance the simulation of human intelligence, including sentiment analysis. Researchers at the University of Helsinki and the University of Copenhagen have now made an interesting discovery: AI can read brainwaves to understand and define subjective notions. In a paper published by these universities, they show that AI can interpret the data generated from a brain-computer interface to build facial images that appeal to or attract different individuals.</p>



<p>A brain-computer interface (BCI), also known as a brain-machine interface, is a communication system that connects the brain with an external machine or device. A brain-computer interface is capable of measuring activity in the central nervous system (CNS). This measured brain activity is converted into electronic and software signals that can be interpreted by AI.</p>
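<p>As a toy illustration of how raw CNS signals become machine-readable features, the sketch below computes relative alpha-band power from a synthetic EEG-like trace. The sampling rate and band edges are assumed for the example; real BCI pipelines use multiple electrodes and far richer feature sets:</p>

```python
import numpy as np

fs = 256                       # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)  # two seconds of signal

rng = np.random.default_rng(0)
# Synthetic trace: a 10 Hz alpha rhythm buried in noise.
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Power spectrum via the real FFT.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Relative power in the alpha band (8-12 Hz) -- one simple feature an
# AI model could consume.
alpha = spectrum[(freqs >= 8) & (freqs <= 12)].sum() / spectrum.sum()
print(f"relative alpha power: {alpha:.2f}")
```

<p>A classifier fed features like this per electrode and time window is the usual entry point for interpreting BCI output in software.</p>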



<p>Electroencephalography (EEG) and electromyography (EMG) are already in use by doctors to understand the neural activities of our brain and muscles, respectively.</p>



<p>BCI is extensively used in the healthcare and medical fields to treat broken neural connections between our brain and other body parts.</p>



<p>How interesting that this technique gives literal meaning to a twist on the old proverb: ‘beauty is in the brain’ of the beholder. Beauty is in fact inside our brains, and it can now be interpreted by machines, enabled by a wide range of AI applications.</p>



<p>But jokes apart, this study opens up new avenues for artificial intelligence, machine learning, and data analytics. According to a Daily Mail report, “The team strapped 30 volunteers to an electroencephalography (EEG) monitor that tracks brain waves, then showed them images of ‘fake’ faces generated from 200,000 real images of celebrities stitched together in different ways.”</p>



<p>A machine learning model known as a generative adversarial neural network (GAN) was trained on each individual’s facial preferences so that it could generate new face images tailored to their brainwaves.</p>



<p>A report by Technology Networks revealed that the researchers generated new portraits for each participant to test the validity of their model, predicting that the participants would personally find these faces attractive. The researchers then tested the images in a double-blind procedure against matched controls and found that the new images matched the subjects’ preferences with an accuracy of over 80%.</p>



<p>Connecting artificial neural networks to our brains can now produce results based on our personal preferences through a non-verbal communication process. This development is new: until now, neural networks and BCIs could only establish patterns of activity, not peek into our personal choices.</p>



<p>If it is possible to understand something this unique and personal, AI is not very far from augmenting and understanding the human brain to a more satisfying extent. However, such an invasion of artificial intelligence and technology into the internal structures of our brain will raise concerns about privacy and ethics. This new development will enable the understanding of individual and subjective biases that are internalized deep in our brains. Well, these innovations and developments in the field of AI will aid AI companies in expanding their business avenues and services.</p>
<p>The post <a href="https://www.aiuniverse.xyz/is-artificial-intelligence-close-enough-in-understanding-our-brain/">IS ARTIFICIAL INTELLIGENCE CLOSE ENOUGH IN UNDERSTANDING OUR BRAIN?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/is-artificial-intelligence-close-enough-in-understanding-our-brain/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>WHAT IS THE REAL DIFFERENCE BETWEEN DATA SCIENCE AND SOFTWARE ENGINEERING TEAMS?</title>
		<link>https://www.aiuniverse.xyz/what-is-the-real-difference-between-data-science-and-software-engineering-teams/</link>
					<comments>https://www.aiuniverse.xyz/what-is-the-real-difference-between-data-science-and-software-engineering-teams/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 17 May 2019 05:33:18 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[Behavior]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[discovery]]></category>
		<category><![CDATA[ENGINEERING]]></category>
		<category><![CDATA[projects]]></category>
		<category><![CDATA[software]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3501</guid>

					<description><![CDATA[<p>Source:- dataconomy.com Why Understanding the Key Differences Between Data Science and Software Development Matters As Data Science becomes a critical value driver for organizations of all sizes, business <a class="read-more-link" href="https://www.aiuniverse.xyz/what-is-the-real-difference-between-data-science-and-software-engineering-teams/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-the-real-difference-between-data-science-and-software-engineering-teams/">WHAT IS THE REAL DIFFERENCE BETWEEN DATA SCIENCE AND SOFTWARE ENGINEERING TEAMS?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source:- dataconomy.com</p>
<p><strong>Why Understanding the Key Differences Between Data Science and Software Development Matters</strong></p>
<p>As Data Science becomes a critical value driver for organizations of all sizes, business leaders who depend on both Data Science and Software Development teams need to know how the two differ and how they should work together. Although there are lots of similarities across Software Development and Data Science, they also have three main differences: processes, tooling and behavior. In practice, IT teams are typically responsible for enabling Data Science teams with infrastructure and tools. Because Data Science looks similar to Software Development (they both involve writing code, right?), many IT leaders with the best intentions approach this problem with misguided assumptions, and ultimately undermine the Data Science teams they are trying to support.</p>
<p><strong>Data Science != Software Engineering</strong></p>
<p><strong>I. Process</strong></p>
<p>Software engineering has well-established methodologies for tracking progress, such as agile points and burndown charts. Thus, managers can predict and control the process by using clearly defined metrics. Data Science is different, as research is more exploratory in nature. Data Science projects have goals such as building a model that predicts something, but like a research process, the desired end state isn’t known up front. This means Data Science projects do not progress linearly through a lifecycle. There isn’t an agreed-upon lifecycle definition for Data Science work, and each organization uses its own. It would be hard for a research lab to predict the timing of a breakthrough drug discovery. In the same way, the inherent uncertainty of research makes it hard to track progress and predict the completion of Data Science projects.</p>
<p>The second unique aspect of the Data Science work process is the concept of hit rate, which is the percentage of models actually being deployed and used by the business. Models created by Data Scientists are similar to leads in a sales funnel in the sense that only a portion of them will materialize. A team with a 100 percent hit rate is probably being too conservative and not taking on enough audacious projects. Alternatively, an unreliable team will rarely have meaningful impact from its projects. Even when a model doesn’t get used by the business, it doesn’t mean it’s a waste of work or the model is bad. Like a good research team, Data Science teams learn from their mistakes and document insights in searchable knowledge management systems. This is very different from Software Development, where the intention is to put all the development to use in specific projects.</p>
<p>The third key difference in the model development process is the level of integration with other parts of the organization. Engineering is usually able to operate somewhat independently from other parts of the business. Engineering’s priorities are certainly aligned with other departments, but they generally don’t need to interact with marketing, finance, or HR on a daily basis. In fact, the entire discipline of product management exists to help facilitate these conversations and translate needs and requirements. In contrast, a Data Science team is most effective when it works closely with the business units who will use its models or analyses. Thus, Data Science teams need to organize themselves effectively to enable seamless, frequent cross-organization communication and to iterate on model effectiveness. For example, to help business stakeholders collaborate on in-flight Data Science projects, it’s critical that Data Scientists have easy ways of sharing results with business users.</p>
<p><strong>II. Tools and Infrastructure</strong></p>
<p>There is a tremendous amount of innovation in the Data Science open source ecosystem, including vibrant communities around R and Python, commercial packages like H2O and SAS, and rapidly advancing deep learning tools like TensorFlow that leverage powerful GPUs. Data Scientists should be able to easily test new packages and techniques without IT bottlenecks and without risking destabilizing the systems their colleagues rely on. They need easy access to different languages so they can choose the right tool for the job, and they shouldn&#8217;t have to use different environments or silos when they switch languages. Although greater tool flexibility is preferable at the experimentation stage, once a project enters the deployment stage, higher technical validation bars and joint efforts with IT become key to success.</p>
<p>On the infrastructure front, Data Scientists should be able to access large machines and specialized hardware for running experiments or doing exploratory analysis. They need to be able to easily use burst/elastic compute on demand, with minimal DevOps help. The infrastructure demands of Data Science teams are also very different from those of engineering teams. For a Data Scientist, memory and CPU can be bottlenecks on progress because much of the work involves computationally intensive experiments. For example, it can take 30 minutes to write the code for an experiment that would then take 8 hours to run on a laptop. Furthermore, compute capacity needs aren&#8217;t constant over the course of a Data Science project; burst compute consumption is the norm rather than the exception. Many Data Science techniques utilize large machines by parallelizing work across cores or loading more data into memory.</p>
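Parallelizing work across cores, as mentioned above, can be sketched with Python&#8217;s standard library. This is a minimal illustration, assuming a hypothetical CPU-bound scoring function standing in for a real experiment:

```python
# Minimal sketch of spreading an embarrassingly parallel workload
# across cores; `score` is a stand-in for a compute-heavy step.
from multiprocessing import Pool

def score(x):
    # Simulated computationally intensive work.
    return sum(i * i for i in range(x)) % 97

if __name__ == "__main__":
    inputs = list(range(10_000, 10_016))
    # Pool defaults to one worker process per available core.
    with Pool() as pool:
        results = pool.map(score, inputs)
    print(results[:4])
```

On a machine with many cores, this pattern can cut hours of sequential runtime substantially, which is exactly why large machines and burst compute matter so much to a Data Science team.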
<p><strong>III. Behavior </strong></p>
<p>With software, there is a notion of a correct answer and prescribed functionality, which means it&#8217;s possible to write tests that verify the intended behavior. This doesn&#8217;t hold for Data Science work, because there is no &#8220;right&#8221; answer, only better or worse ones. Oftentimes, we&#8217;ll hear Data Scientists discuss how they are responsible for building a model as a product, or for a slew of models that build on each other and impact business strategy. A machine learning model is trained on a snapshot of data, but real-world data distributions are probabilistic and shift over time. As a result, models drift and need constant feedback from end users. Data Science managers often act as a bridge to the business lines and focus on the quality and pace of the output. Evaluating the model and detecting distribution drift lets teams identify when to retrain. Rather than writing unit tests like software engineers, Data Scientists inspect outputs and obtain feedback from business stakeholders to gauge the performance of their models. Effective models need to be constantly retrained to stay relevant, as opposed to a &#8220;set it and forget it&#8221; workflow.</p>
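Drift detection in practice ranges from simple summary-statistic checks to full distributional tests. As a hedged sketch (the threshold and the mean-shift heuristic here are illustrative assumptions, not a standard), a monitor might compare a live feature sample against the training sample and flag the model for retraining when the shift is large:

```python
# Sketch of distribution-drift monitoring: flag retraining when a
# feature's live mean shifts far from its training mean, measured
# in units of the training standard deviation.
import statistics

def drift_score(train, live):
    """Absolute mean shift, scaled by the training stdev."""
    spread = statistics.stdev(train) or 1.0
    return abs(statistics.mean(live) - statistics.mean(train)) / spread

def needs_retraining(train, live, threshold=0.5):
    return drift_score(train, live) > threshold

train_sample = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]
live_sample = [13.0, 12.5, 13.4, 12.8, 13.1, 12.9]  # shifted upward
print(needs_retraining(train_sample, live_sample))  # True
```

Real monitoring systems typically use richer tests (e.g., population stability index or Kolmogorov&#8211;Smirnov statistics) over many features, but the workflow is the same: measure, compare against a threshold, and trigger retraining, which is the feedback loop that replaces unit tests in this setting.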
<p><strong>Final Thoughts</strong></p>
<p>In general, there are several good practices for Data Scientists to learn from Software Development, but there are also some key differences to keep top of mind. The rigor and discipline of modern Software Development is worth emulating where appropriate, but we must also recognize that what Data Scientists build is fundamentally different from what software engineers build. Software Development and Data Science processes often intersect, as software captures much of the data used by Data Scientists and serves as the &#8220;delivery vehicle&#8221; for many models. So the two disciplines, while distinct, should work alongside each other to ultimately drive business value. Understanding the fundamental nature of Data Science work can set a solid foundation for companies to build value-added Data Science teams with the support of senior leadership and IT.</p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-the-real-difference-between-data-science-and-software-engineering-teams/">WHAT IS THE REAL DIFFERENCE BETWEEN DATA SCIENCE AND SOFTWARE ENGINEERING TEAMS?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-is-the-real-difference-between-data-science-and-software-engineering-teams/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
	</channel>
</rss>
