<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Hype Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/hype/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/hype/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 03 Apr 2021 06:38:09 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>AI in Drug Discovery Starts to Live Up to the Hype</title>
		<link>https://www.aiuniverse.xyz/ai-in-drug-discovery-starts-to-live-up-to-the-hype/</link>
					<comments>https://www.aiuniverse.xyz/ai-in-drug-discovery-starts-to-live-up-to-the-hype/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 03 Apr 2021 06:38:07 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[discovery]]></category>
		<category><![CDATA[DRUG]]></category>
		<category><![CDATA[Hype]]></category>
		<category><![CDATA[Starts]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13908</guid>

					<description><![CDATA[<p>Source &#8211; https://www.genengnews.com/ The past few years have seen several flashy demonstrations of how artificial intelligence (AI) algorithms may transform biomedical research, particularly with respect to drug <a class="read-more-link" href="https://www.aiuniverse.xyz/ai-in-drug-discovery-starts-to-live-up-to-the-hype/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-in-drug-discovery-starts-to-live-up-to-the-hype/">AI in Drug Discovery Starts to Live Up to the Hype</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.genengnews.com/</p>



<p>The past few years have seen several flashy demonstrations of how artificial intelligence (AI) algorithms may transform biomedical research, particularly with respect to drug discovery. This past November, for example, Alphabet’s AI subsidiary, DeepMind, announced that its AlphaFold program could deliver computational predictions of protein structure that approach the quality of those provided by gold-standard experimental techniques such as X-ray crystallography.<sup>1</sup></p>



<p>Such high-profile announcements have elicited justifiable excitement about the future of algorithmically guided drug development, but AI’s champions in the industry remain wary about overselling the technology’s current capabilities. “I still feel that there’s a lot of hype around it,” says Paul Nioi, PhD, senior director of research at Alnylam Pharmaceuticals. “Companies are springing up that claim to solve all the issues of drug discovery, target discovery, and development using AI. I think that’s yet to be proven.”</p>



<p>Nevertheless, a growing number of companies now recognize the value that AI—and more specifically, the subset of algorithmic techniques known as “machine learning”—can deliver at various stages in the drug discovery process. “There’s an increase in investment across all of the companies that I’ve talked to,” relates Peter Henstock, PhD, machine learning and AI lead at Pfizer.</p>



<p>“The capabilities are still being sorted out,” Henstock adds. “In most cases, we’re still negotiating how to go about using it effectively.” But the opportunities are clear, and machine learning–based techniques are already finding a place in early-stage target discovery and drug development workflows—and offering a glimpse of the gains in efficiency and success rates that the future could bring.</p>



<h4 class="wp-block-heading"><strong>A deep dive into the literature</strong></h4>



<p>The vast majority of biomedical data are imprisoned in unstructured formats that are, in their raw form, inaccessible to computational analysis. Data are trapped within publications, patents, clinical records, and other documents that are exclusively targeted at human readers. Natural language processing (NLP) algorithms offer a powerful solution to this problem. These employ a machine learning technique known as deep learning to analyze documents and other datasets and identify biologically relevant text elements such as the names of genes, proteins, drugs, or clinical manifestations of disease.</p>
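


<p>To make the idea concrete, here is a minimal, illustrative sketch of deep learning–based biomedical entity recognition in Python. It assumes the open-source spaCy library together with a scispaCy biomedical NER model; the model name, install step, and sample sentence are assumptions for the sketch, not part of any pipeline described in this article.</p>



<pre class="wp-block-code"><code># A hedged sketch: biomedical named-entity recognition with spaCy.
# Assumes scispaCy's "en_ner_bc5cdr_md" model (which tags CHEMICAL and
# DISEASE mentions) has been installed, e.g. via:
#   pip install spacy scispacy
#   plus the model package from the scispaCy releases page
import spacy

nlp = spacy.load("en_ner_bc5cdr_md")

abstract = ("Patisiran reduced serum transthyretin levels in patients "
            "with hereditary transthyretin-mediated amyloidosis.")

for ent in nlp(abstract).ents:
    # prints, e.g., "Patisiran CHEMICAL" and "amyloidosis DISEASE"
    print(ent.text, ent.label_)
</code></pre>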



<p>NLP algorithms can rapidly comb through vast collections of data and identify previously overlooked patterns and relationships that might be relevant to a disease’s etiology and pathology. Henstock’s team used such an approach to scrutinize PubMed’s tens of millions of abstracts. “Just by text mining,” he points out, “we could basically take some genes and figure out what diseases might be related to them.”</p>



<p>The Pfizer group subsequently incorporated other dimensions into its analysis, looking at publication patterns to identify “trending” areas of disease research where rapid scientific progress might offer a solid foundation for rapidly shepherding a drug development program into the clinic. According to Henstock, this approach achieved a greater than 70% success rate in identifying patterns of heightened disease research activity that ultimately gave rise to clinical trials.</p>



<p>The value of NLP-mined data can be greatly amplified by threading together structured data from multiple sources into a densely interconnected “knowledge graph.” “We have huge amounts of data in different areas, like omics—chemical-related, drug-related, and disease-related data,” says Natnael Hamda, an AI specialist at Astellas Pharma. “Ingesting that data and integrating those complex networks of biological or chemical information into one collection is tricky.”</p>
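


<p>As a toy illustration of the “threading together” Hamda describes, the sketch below builds a small knowledge graph in Python with the open-source networkx library and walks its interconnections. The triples are invented placeholders, not real mined data.</p>



<pre class="wp-block-code"><code># A toy knowledge graph assembled from (head, relation, tail) triples.
# Requires: pip install networkx. The triples below are illustrative only.
import networkx as nx

triples = [
    ("DRUG:patisiran", "targets", "GENE:TTR"),
    ("GENE:TTR", "associated_with", "DISEASE:amyloidosis"),
    ("DISEASE:amyloidosis", "manifests_as", "PHENOTYPE:neuropathy"),
]

kg = nx.DiGraph()
for head, relation, tail in triples:
    kg.add_edge(head, tail, relation=relation)

# The payoff of integration: multi-hop questions, e.g. which drugs
# connect, through any gene, to a disease node?
for drug in (n for n in kg if n.startswith("DRUG:")):
    for gene in kg.successors(drug):
        for disease in kg.successors(gene):
            print(drug, "->", gene, "->", disease)
</code></pre>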



<p>The rewards, however, can be considerable, as the resulting interconnections tell a much richer biological story than any one dataset on its own. For example, these interconnections can enable more sophisticated predictions of how to develop therapeutic agents that safely and effectively target a particular medical condition.</p>



<p>AI-focused startup Healx relies heavily on knowledge graphs assembled from a diverse range of public and proprietary sources to gain new insights into rare genetic diseases. The company’s chief scientific officer, Neil Thompson, PhD, notes that this category encompasses roughly 7,000 diseases affecting on the order of 400 million patients in total—but treatments are available for only 5% of these disorders.</p>



<p>“Our main focus is on finding new uses for old drugs,” says Thompson. “We are working with all the data on the 4,000 FDA-registered drugs, and we are building data on drugs registered elsewhere in the world.” This information is complemented by data provided by the various patient organizations with which Healx collaborates.</p>



<p>According to Thompson, this approach has yielded an excellent success rate, with multiple disease programs yielding candidates that demonstrated efficacy in animal models. One of these programs, a treatment for the intellectual developmental disorder fragile X syndrome, is on track to enter clinical trials later this year.</p>



<h4 class="wp-block-heading"><strong>Getting a clearer picture</strong></h4>



<p>Machine learning is also effective for extracting interesting and informative features from image data. Alnylam has been using computer vision algorithms to profile vast repositories of magnetic resonance imaging (MRI) data collected from various parts of the body in tens of thousands of patients with various medical conditions. “We’re training a model based on people that we know have a certain disease or don’t have a certain disease,” says Nioi, “and we’re asking the model to differentiate those two categories based on features it can pick up.”</p>
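


<p>The two-category setup Nioi describes maps onto a standard supervised image-classification loop. The sketch below is a minimal PyTorch illustration, not Alnylam’s actual pipeline: random tensors stand in where labeled MRI scans would go, and the dataset, model choice, and hyperparameters are all assumptions.</p>



<pre class="wp-block-code"><code># A minimal "disease vs. no disease" image classifier in PyTorch.
# Requires: pip install torch torchvision. Random tensors stand in
# for real labeled MRI data so the sketch runs end to end.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18

# Stand-in dataset: 64 fake 3-channel "scans" with binary labels.
mri_train = TensorDataset(torch.randn(64, 3, 224, 224),
                          torch.randint(0, 2, (64,)))
loader = DataLoader(mri_train, batch_size=16, shuffle=True)

model = resnet18(num_classes=2)      # two outputs: disease / no disease
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:        # one epoch over the toy data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"batch loss: {loss.item():.3f}")
</code></pre>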



<p>One of the lead indications for this approach is nonalcoholic steatohepatitis (NASH), a hard-to-treat condition in which fat accumulation in the liver contributes to inflammation and scarring—and ultimately, cirrhosis. NASH is a chronic condition that gradually worsens over time, and MRI analysis could reveal early indicators of onset and progression as well as biomarkers that demonstrate the extent to which a therapy is preventing the disease from worsening.</p>



<p>“The idea is to put the disease on a spectrum, so that you can look for these different points of intervention,” explains Nioi. He notes that this approach has already led to some promising drug targets, and that his company is now looking to apply a similar approach to neurological disease.</p>



<p>Several other companies are using machine learning–based image analysis to go deeper into their analyses of disease pathology. For example, cancer immunotherapies can vary widely in their efficacy because of differences in the structure and cellular composition of the “microenvironment” within a tumor, including the strength of the local immune response.</p>



<p>“We can now apply computer vision to identify the spatial condition of the tumor microenvironment,” asserts Hamda. “We are getting huge amounts of omics data at the cellular level, and we can characterize the microenvironment of a tumor cell by applying deep neural networks.” This kind of approach is proving valuable in analyses of the extensive (and publicly available) datasets that are being generated by the Human Cell Atlas Project, The Cancer Genome Atlas Project, and other initiatives.</p>



<p>Progress has been slower in terms of applying AI for the actual design of drugs themselves, but machine learning continues to be explored as a means of improving the performance of existing drug candidates. “Small-molecule drugs are a multivariate optimization problem, and humans are not very good at doing that,” observes Henstock. His team is working with self-training algorithms that can tweak such compounds based on a variety of physicochemical criteria, and he believes a similar approach should also be suitable for antibody drugs—a class of proteins for which the structural and biochemical features are particularly well defined.</p>
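


<p>A stripped-down way to see the “multivariate optimization problem” Henstock refers to: score each candidate compound against several properties at once and search for the best trade-off. The sketch below uses invented property values and weights purely for illustration.</p>



<pre class="wp-block-code"><code># A toy multivariate scoring of drug candidates. All values invented.
candidates = {
    "compound_A": {"potency": 0.9, "solubility": 0.3, "toxicity_risk": 0.6},
    "compound_B": {"potency": 0.7, "solubility": 0.8, "toxicity_risk": 0.1},
}

# Higher potency and solubility are better; higher toxicity risk is worse,
# hence its negative weight in the combined objective.
weights = {"potency": 0.5, "solubility": 0.3, "toxicity_risk": -0.2}

def score(properties):
    return sum(weights[name] * value for name, value in properties.items())

ranked = sorted(candidates, key=lambda c: score(candidates[c]), reverse=True)
for name in ranked:
    print(name, round(score(candidates[name]), 3))
</code></pre>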



<p>Alnylam is also using machine learning to fine-tune its therapeutics, which are based on chemically modified RNA sequences that directly interfere with the expression of disease-related genes. “We’re thinking about designing molecules that optimally knock down a target—[and] only the target that you’re interested in—without broad effects on the transcriptome,” says Nioi.</p>



<h4 class="wp-block-heading"><strong>AI on the rise</strong></h4>



<p>After 20 years at Pfizer, Henstock is seeing unprecedented enthusiasm around making AI a core part of the company’s processes. “I ran an AI course three years ago … that seemed to change a lot of the conversations,” he recalls. “We had executive meetings that actually put themselves on hold so they could attend this session.”</p>



<p>A similar transition is playing out elsewhere. At Astellas, for example, investments in AI are extensive. “There are so many big AI or machine learning initiatives here,” says Hamda, who began working with Astellas as a research fellow in 2020.</p>



<p>It takes skilled experts who know their way around the dense forest of available algorithms to make the most of these capabilities. Hamda notes that choosing the wrong computational approach for a given research question can be disastrous in terms of wasted time and resources.</p>



<p>“I fear that some companies are developing ‘black box’ software,” he continues. “[It’s possible that] people are just entering input and collecting output without knowing what’s going on inside.” He emphasizes the importance of planning and building explainable and reproducible computational workflows that allow the users to trust the quality of the resulting models.</p>



<p>Although many machine learning techniques are generalizable across fields, effective implementation in the context of drug discovery also requires deep expertise in the relevant scientific disciplines. Henstock recalls starting at Pfizer as an engineer with a doctorate in AI.</p>



<p>“I couldn’t understand the chemists or the biologists—they speak their own language and have their own concepts,” he says. “And if you can’t understand what they’re trying to get at, you can’t really do your job very well.” This disconnect motivated him to return to school for a biology degree.</p>



<p>Building out such capacity is costly and labor intensive, and some companies are opting for a hybrid model in which some AI-oriented projects are contracted out to smaller startups for which computational biology is the primary focus. For example, Alnylam has partnered with a company called Paradigm4 for key aspects of its machine learning–guided drug development efforts. “It’s really down to resources,” declares Nioi. “There are people that do this for a living and spend their entire time focused on one problem, whereas we’re juggling many things at the same time.”</p>



<p>But in the long run, the gains from bringing AI on board could be huge. In a 2020 article, Henstock cited projections indicating that the pharmaceutical industry could boost earnings by more than 45% by making strong investments in AI.<sup>2</sup>&nbsp;“This means making some interesting tradeoffs in how we do science, how we approach problems, and how we approach our processes,” he explains. “But it’s kind of critical because you can do better experiments with greater richness.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-in-drug-discovery-starts-to-live-up-to-the-hype/">AI in Drug Discovery Starts to Live Up to the Hype</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ai-in-drug-discovery-starts-to-live-up-to-the-hype/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The Antidote To The Hype, Noise, And Spin Of Artificial Intelligence</title>
		<link>https://www.aiuniverse.xyz/the-antidote-to-the-hype-noise-and-spin-of-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/the-antidote-to-the-hype-noise-and-spin-of-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 26 Mar 2021 06:24:27 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Antidote]]></category>
		<category><![CDATA[Hype]]></category>
		<category><![CDATA[Noise]]></category>
		<category><![CDATA[Spin]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13804</guid>

					<description><![CDATA[<p>Source &#8211; https://www.forbes.com/ What went wrong with artificial intelligence? This transformative technology was supposed to change everything. I’ve seen first-hand the incredible potential it has—both as a <a class="read-more-link" href="https://www.aiuniverse.xyz/the-antidote-to-the-hype-noise-and-spin-of-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-antidote-to-the-hype-noise-and-spin-of-artificial-intelligence/">The Antidote To The Hype, Noise, And Spin Of Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.forbes.com/</p>



<p>What went wrong with artificial intelligence? This transformative technology was supposed to change everything. I’ve seen first-hand the incredible potential it has—both as a professor of computer science at the University of Michigan and as the founder of Clinc, ZeroShotBot, Myca.ai, a non-profit called ImpactfulAI, and several other AI-focused companies.</p>



<p>So, why has it devolved into overhyped solutions, marketing noise, and an endless spin of the same, tired ideas? Into poor user experiences, embarrassing bugs, and countless other misfires?</p>



<p>The answer is pretty clear when you consider how every business has been told it needs artificial intelligence to stay competitive. This mad dash resembles a gold rush, as companies push and pull to be early adopters—to scrape every last dollar out of their ROI. Add to that the misconceptions about what AI can do, the ebb and flow of innovation versus standard techniques, the grandiose promises, and the marketability of AI, and it becomes clear how we got here.</p>



<p>It makes me sad to see AI reduced to a gimmick. To be clear, I’m not saying AI doesn’t have an important role to play. It will define the future of technology in many ways. The challenge is looking beyond the noise.</p>



<p>That’s why I’m here to discuss the antidote. The four mental models I believe any business, decision-maker, or tech enthusiast interested in AI must take to see past all the hype, noise, and spin.</p>



<h2 class="wp-block-heading">Antidote 1 &#8211; You know it when you see it. Less talk, more show.</h2>



<p>What’s the most important rule of AI? Don’t believe it unless you can see and feel it.</p>



<p>Why do I think this is the most important mental model? The magic of AI still exists: there are places where innovation still occurs, and when it does, the results are undeniable. Having said that, you can’t escape the noise, the hype, and the big promises.</p>



<p>Simple, purpose-built AI solutions have transformed many industries. AI is being used in healthcare to detect breast cancer, in agriculture for crop yield forecasting, and in autonomous driving to improve safety. These solutions use deep learning and reasoning to draw conclusions from billions of analyzed pixels. There’s no denying these use cases: you can actually see them in action and see them working well.</p>



<p>This type of intuition must be applied in all realms.</p>



<p>Through my experience creating novel conversational AI technologies, I’ve learned the power of an unforgettable experience. When it’s real, you know it. It only takes a few minutes of interaction to tell whether another human is intelligent, and similarly, you know right away whether a conversational AI is intelligent from actually interacting with it. You have to look past the canned experiences and the lofty promises and see what AI looks like in practice—within your industry or use case.</p>



<p>And if something sounds fake or unbelievable? It probably is. Trust your senses; they will guide you through the noise.</p>



<h2 class="wp-block-heading">Antidote 2 &#8211; You will have trouble with certain solutions: the training dilemma.</h2>



<p>Maybe you beat the odds and found that perfect AI solution. It can happen, right? Take a step back and think about the bigger picture. How will you apply that solution to your needs?</p>



<p>A promising demo isn’t everything. You still have to adapt that AI for your use case, train it, deploy it, and improve it. The more niche and customized your use case is, the harder it will be to reproduce the quality of that demo in your own environment. When your AI’s quality depends on training specific to your use case, production-grade AI becomes extremely complex, often requiring a dedicated team of experts in machine learning, computer and data science, and training. Each layer adds more complexity, making your solution more expensive, more brittle, and more likely to fail.</p>



<p>As chronicled through my journey as CEO of Clinc, I saw countless companies spend millions trying to create, configure, and train virtual assistants, only to fail. The learning curve is steeper than ever, and the stakes are even higher.</p>



<p>So, how can you successfully navigate the world of AI? It starts with asking the right questions, things like:</p>



<ul class="wp-block-list"><li>Ok, this AI is good, but can I wield it?</li><li>How much customization does it require to solve my problems?</li><li>Will I have to actually train the inner models in the process of tuning this solution?</li></ul>



<p>And even if you know the answers to these questions, the demo experience you saw may be unattainable if you have to train the AI yourself.</p>



<p>You must be realistic about the logistics of making AI work. Be ready for these costs: engineers to operate it, support to keep it running, and training specialists (data scientists and ML experts) to improve it.</p>



<p>Next, ask yourself how it ties into mission criticality. Can you afford for it to fail? What’s at stake if your AI fails spectacularly? What will happen if you change the model’s task?</p>



<p>AI is some of the most complex technology on the planet. Getting it right means defining your expectations and knowing your limitations.</p>



<h2 class="wp-block-heading">Antidote 3 &#8211; The revolution only applies to certain types of problems.</h2>



<p>Let me start by saying we are already in an AI revolution, thanks to advancements in deep learning, which uses data to model the way the brain’s neural networks work. The catalysts for this initial success include the availability of data, advancements in deep learning models, and innovations in computing.</p>



<p>Despite this, not all AI problems can be solved by advancements in neural networks. Many companies may claim to use next-generation AI, but more often than not, it’s just noise in the AI hype cycle.</p>



<p>Here’s what I can tell you. The biggest advancements are occurring in areas where they use deep learning techniques and data to train a system, such as in Natural Language Processing (NLP) or computer vision.</p>



<p>Think about it like this. If we see large amounts of data being used to extract patterns, that’s a direct representation of the AI revolution. This type of approach, which is the basis of new products like Myca.ai, is where AI is leveraged in a transformational way.</p>



<p>So, where are things going wrong? Most companies are using old techniques to latch onto the AI hype cycle. Think about early chatbots and the frustrating user experiences they offered. These solutions used the old Stanford NLP library and similar classical computational-linguistics approaches that leveraged grammar, nouns, synonyms, dictionaries, and other linguistic mechanics to derive patterns.</p>



<p>The problem? This is the wrong approach in modern times. You can’t expect to innovate if you rely on antiquated techniques.</p>



<p>Now for the big question: how can you see through the noise and tell whether an AI solution is legitimate? I recommend you learn a selection of the latest buzzwords and check whether they apply to a given technology.</p>



<p>If they use computational linguistics, regression models, or decision trees, it’s antiquated.</p>



<p>If they use neural networks, transfer learning, adversarial networks, or attention models, it’s current.</p>



<p>You don’t need to understand how they work theoretically. Your focus is knowing what buzzwords to spot and inform yourself of trends through projects like ImpactfulAI. Look for things like convolutional neural networks, transformers, attention models, GANs to quickly identify if the underlying technology is part of the AI revolution.</p>



<h2 class="wp-block-heading">The Future &#8211; Zero Shot / One Shot / Few Shot Artificial Intelligence.</h2>



<p>2020 was a milestone year for the scientific community. A little something called Generative Pre-trained Transformer 3 (GPT-3) was developed by the OpenAI project. The language model it represents is based on 175 billion parameters and is more accurate than anything we’ve ever seen. For context, the older GPT-2 used 1.5 billion parameters.</p>



<p>This model was inspired by recent work on transfer learning, which was popularized in NLP by the Bidirectional Encoder Representations from Transformers (BERT) model, and is built on the belief that you can train an AI model really well once with massive amounts of data (say, the entire internet) and then use significantly less or no training data for a new task.</p>



<p>This transformational work popularized a new philosophy of deep learning models as “few-shot,” “one-shot,” or “zero-shot” learners, meaning only a few, a single, or no training examples are needed for the model to perform a completely new task.</p>
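


<p>To ground the idea, here is one way zero-shot classification looks in practice today, using the open-source Hugging Face transformers library. This is a generic public example, not Zero Shot Bot itself; the utterance and candidate labels are invented for the sketch.</p>



<pre class="wp-block-code"><code># Zero-shot intent classification: the model has never been trained on
# these labels, yet it can rank them for a new utterance.
# Requires: pip install transformers (plus a backend such as PyTorch).
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

result = classifier(
    "I'd like to move $200 from checking to savings.",
    candidate_labels=["transfer funds", "check balance", "report fraud"],
)

# Labels come back sorted by score; the top one is the predicted intent.
print(result["labels"][0], round(result["scores"][0], 3))
</code></pre>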



<p>Let that sink in for a moment. For the first time, we may be able to create a conversational AI for new types of problems without any training. With the AI philosophy introduced by GPT-3, you can ask any question and receive an incredibly accurate answer without the need for training. One of my next major endeavors is to introduce the first commercialization of such an approach through the development of Zero Shot Bot to revolutionize the conversational AI chatbot space, and the performance of this technology is breathtaking.</p>



<p>I firmly believe that GPT-3, and now Zero Shot Bot, serve as a bellwether of the next big game-changer for the next decade. It’s not a matter of if, but when. This zero-shot-to-few-shot approach is the answer to the user experience problems created by other antiquated AI technologies. In the context of the Internet, it can solve a host of interesting problems and doesn’t require any training.</p>



<p>And Microsoft agrees. It invested a staggering $1 billion in OpenAI and went on to license GPT-3 exclusively, largely because of how impressive this model is.</p>



<p>This philosophy has only been in the air for about a year. Beyond GPT-3 and Zero Shot Bot, no products exist yet to my knowledge. But mark my words: industry-changing technology and commercializations of this zero-shot approach are coming.</p>



<p>Zero Shot Bot and other platforms that use zero-shot learning remove the hardest part of deep learning AI—the training.</p>



<p>If you ask me, that’s the magic spark of innovation that commercial AI has been missing for years.</p>



<p>The post <a href="https://www.aiuniverse.xyz/the-antidote-to-the-hype-noise-and-spin-of-artificial-intelligence/">The Antidote To The Hype, Noise, And Spin Of Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-antidote-to-the-hype-noise-and-spin-of-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
