<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Pandemic Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/pandemic/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/pandemic/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Thu, 10 Jun 2021 05:21:05 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>The Pandemic And Its Implications On Industrial Machine Learning</title>
		<link>https://www.aiuniverse.xyz/the-pandemic-and-its-implications-on-industrial-machine-learning/</link>
					<comments>https://www.aiuniverse.xyz/the-pandemic-and-its-implications-on-industrial-machine-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 10 Jun 2021 05:21:03 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Implications]]></category>
		<category><![CDATA[Industrial]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[moment]]></category>
		<category><![CDATA[Pandemic]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14148</guid>

					<description><![CDATA[<p>Source &#8211; https://www.forbes.com/ For a moment, let’s set aside the abject tragedy of the Covid-19 pandemic and the demoralizing conditions through which the world continues to persevere. <a class="read-more-link" href="https://www.aiuniverse.xyz/the-pandemic-and-its-implications-on-industrial-machine-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-pandemic-and-its-implications-on-industrial-machine-learning/">The Pandemic And Its Implications On Industrial Machine Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.forbes.com/</p>



<p>For a moment, let’s set aside the abject tragedy of the Covid-19 pandemic and the demoralizing conditions through which the world continues to persevere. Instead, let’s examine the state of affairs from a dispassionate and scientific position. Seismic changes in behavior are erupting as the burden of the pandemic forces transformation. Crippling inefficiencies in industry and volatile projections of markets have led to unprecedented uncertainty.</p>



<p>To fully address any one of the challenges the world now faces would exhaust this medium. However, the role machine learning (ML) will play in people’s lives is fascinating and worthy of discussion, as the field will undoubtedly contribute to the impending metamorphosis. There are three periods of time to consider: the time before the pandemic (BP), the time during the pandemic (DP) and the time after the pandemic (AP). Central to these three epochs are two approaches to modeling human behavior through machine learning.</p>



<p>Canonical ML (CML), the first class of interest, represents traditional approaches in pattern recognition, derived from highly structured and labeled data through computational statistics. This approach is used to explain the state of a system or to predict behaviors. Its success often depends on engaged scientists to explain and interpret the model’s results. CML could be used to predict the weather in a particular region by analyzing the features of the region over time, where each of those features in isolation would fail to fully predict a future state. When CML models are contextualized, you’re able to explain how predictions are made. For example, you can predict rain in Austin tomorrow given the regression and ensemble models you’ve developed for various weather features in aggregate.</p>
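<p>As a purely illustrative sketch (not from the article), the "aggregate per-feature models" idea above can be expressed in a few lines of Python; the features, coefficients and thresholds are invented, not trained:</p>

```python
# Hypothetical sketch of a canonical-ML ensemble: simple per-feature
# regressors averaged into a rain probability. The coefficients are invented
# for illustration; a real CML pipeline would fit them to historical data.

def humidity_model(humidity_pct):
    # Toy linear model: higher humidity raises rain probability.
    return min(1.0, max(0.0, 0.012 * humidity_pct - 0.2))

def pressure_model(pressure_hpa):
    # Toy linear model: pressure falling below ~1013 hPa raises it.
    return min(1.0, max(0.0, (1013.0 - pressure_hpa) * 0.02 + 0.3))

def ensemble_rain_probability(humidity_pct, pressure_hpa):
    # Aggregate the per-feature members; because each member is a named,
    # interpretable model, the final prediction can be explained.
    members = [humidity_model(humidity_pct), pressure_model(pressure_hpa)]
    return sum(members) / len(members)

print(f"P(rain tomorrow) = {ensemble_rain_probability(85, 1005):.2f}")
```

<p>Because each member model is tied to a named weather feature, a scientist can trace which conditions drove the forecast, which is the explainability the paragraph describes.</p>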



<p>I’ll call the second class of techniques reinforcement ML (RML) as it deploys a fundamentally different modeling paradigm: In RML, models self-adjust their individual actions to optimize a collective outcome. These models operate much more autonomously than CML and fully embrace early failures in favor of long-term gains through repeated environment exploration and self-learning. Examples of RML include applications in autonomous driving, gaming, computer vision and even natural language processing. Only recently has RML become tenable for industrial deployments and is still met with much trepidation because of its unexplainable methods and unclear accountability. In other words, when RML models are correct, it is hard to trace what led to that specific output. CML outputs, on the other hand, often are easier to explain. Throughout my career, the applications of CML have represented the overwhelming majority of successful solutions, while RML, in its fledgling state, has only begun to transform the industrial world.</p>
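<p>As a minimal, hypothetical illustration of the RML paradigm (tabular Q-learning, one reinforcement-learning technique among many, chosen here for brevity rather than taken from the article), the "early failures in favor of long-term gains" dynamic looks like this:</p>

```python
import random

# Toy sketch: a tabular Q-learning agent learns, by trial and error, to walk
# right along a 5-cell corridor to a reward at the far end. Early episodes
# wander and fail; repeated exploration optimizes the long-term outcome.

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left, step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(GOAL, max(0, s + a))
        reward = 1.0 if s_next == GOAL else 0.0
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# After training, the greedy policy should step right in every state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)}
print(policy)
```

<p>Note the contrast with the CML case: the learned Q-table optimizes the outcome, but nothing in it explains <em>why</em> a given action is preferred, which is the opacity the paragraph describes.</p>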



<p>In 2019 (BP), CML reached its zenith. Nearly every industry was disrupted to some degree by machine learning. From financial services to healthcare to defense, leaders embraced the capabilities of a robust data science solution capitalizing on CML, often materialized into what is commonly known as “deep learning.” Scores of historical data and behavioral modeling contributed to measurable ROI on data science initiatives and applications. Many companies had deployed CML, and some had begun to experiment with RML. Companies around the world were transforming their industries through CML on their own terms. On the surface, the union between science and industry was thriving.</p>



<p>Then the pandemic enveloped the planet. Overnight, the pandemic obliterated the utility of millions of models. Every sector that had benefited from CML was in a difficult position: Companies had to either trust that the ML models their businesses depended on would correct over time or reverse course and manually drive mission-critical insights for their business. I believe CML has failed many businesses across many industries, and the business world has yet to realize the full effects of these failures.</p>



<p>The companies that adopted RML before the pandemic, however, may have an advantage over their peers, as RML models are not as dependent on finely tuned conditions from a scientist, but rather seek to optimize for success as defined by scientists. While RML requires exorbitant amounts of data for training, the increase in the frequency of data collection has eased that challenge in some cases.</p>



<p>On the surface, the post-pandemic era will likely resemble the pre-pandemic era, but with a heavier slant toward digital behavior, as well as customer behavior shaped by the new habits and efficiencies identified during the pandemic. Once industry realizes the advantages gained by the firms that adopted RML before the pandemic, I expect an algorithmic arms race. For example, an RML approach to product recommendations for an online retailer will likely adapt to the wildly new engagement model of the post-pandemic epoch, and its advantages over competitors still relying on pre-pandemic CML models will be decisive. Simply put, I believe RML techniques are far more robust at predicting post-pandemic behavior than CML techniques. Those that adopt them successfully will have a better chance to survive and differentiate themselves from the competition.</p>



<p>But what of the explainability of RML? The pressures of the pandemic will greatly shift industry’s willingness to deploy unexplainable or opaque models, or “black box models” as they’re often referred to. As the advantage for the few through RML becomes clear, many firms will likely forgo the accountability of CML in favor of RML’s adaptability. It is the AI equivalent of the adoption of telehealth or remote work during the pandemic, and is arguably much more impactful. Scientists must now work to ensure RML techniques that are deployed can be responsible and accountable or they might compromise the integrity of their operations.</p>



<p>There are many reasons to be excited for the next frontier of commerce. Industries have evolved their priorities, shifted relationships and in many ways removed tedious operations, such as pattern recognition based on outdated labeled data sets that are relics of former industrial epochs. ML will continue to play an integral role in industrial transformation, and as it adapts to the changes people have made in their own lives, I trust my colleagues and peers across industry to ensure we develop this capability in a way that is inspirational, dynamic and responsible.</p>



<p>The post <a href="https://www.aiuniverse.xyz/the-pandemic-and-its-implications-on-industrial-machine-learning/">The Pandemic And Its Implications On Industrial Machine Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-pandemic-and-its-implications-on-industrial-machine-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine Learning for COVID Diagnosis Falls Short</title>
		<link>https://www.aiuniverse.xyz/machine-learning-for-covid-diagnosis-falls-short/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-for-covid-diagnosis-falls-short/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 23 Mar 2021 08:59:07 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Covid]]></category>
		<category><![CDATA[Diagnosis]]></category>
		<category><![CDATA[Falls]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Pandemic]]></category>
		<category><![CDATA[Short]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13708</guid>

					<description><![CDATA[<p>Source &#8211; https://www.datanami.com/ In the earliest days of the pandemic, machine learning showed exceptional promise for COVID-19 diagnosis. Reliably, early machine learning models outperformed doctors in recognizing <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-for-covid-diagnosis-falls-short/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-for-covid-diagnosis-falls-short/">Machine Learning for COVID Diagnosis Falls Short</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.datanami.com/</p>



<p>In the earliest days of the pandemic, machine learning showed exceptional promise for COVID-19 diagnosis. Reliably, early machine learning models outperformed doctors in recognizing the telltale COVID-induced pneumonia on CT scans from hospitalized patients. However, more conventional testing methods quickly lapped machine learning-based methods, detecting the onset of COVID well before hospitalization and with greater accuracy. Now, a year later, a team of researchers led by the University of Cambridge has concluded a review of COVID diagnosis ML models, finding that even in 2021, none of the proposed models are suitable for clinical use.</p>



<p>The researchers whittled down 2,212 studies, eventually focusing on 62 studies – most of which were not peer-reviewed – published between January 1st and October 3rd of 2020, all of which presented machine learning models for diagnosing or predicting COVID-19 infection based on X-rays and/or CT scans. These 62 studies collectively described more than 300 such models – and the researchers found all of them substantially lacking.</p>



<p>“The international machine learning community went to enormous efforts to tackle the COVID-19 pandemic using machine learning,” said James Rudd, one of the senior authors of the review and a member of Cambridge’s Department of Medicine. “These early studies show promise, but they suffer from a high prevalence of deficiencies in methodology and reporting, with none of the literature we reviewed reaching the threshold of robustness and reproducibility essential to support use in clinical practice.”</p>



<p>The issues were wide-ranging: some studies suffered from poor data quality, while others were not reproducible and yet more exhibited biases in their design. By way of example, the authors pointed out that some of the datasets used to train some of the machine learning models included scans from children. “Since children are far less likely to get COVID-19 than adults, all the machine learning model could usefully do was to tell the difference between children and adults, since including images from children made the model highly biased,” explained Michael Roberts, a member of Cambridge’s Department of Applied Mathematics and Theoretical Physics.&nbsp;</p>



<p>Other datasets were too small; some were poorly labeled. Some models used the same data for training and testing. And, overwhelmingly, the designers of the models failed to meaningfully incorporate input from radiologists and clinicians who might have insight into the real-world implications of the data and diagnoses at hand. “Whether you’re using machine learning to predict the weather or how a disease might progress,” Roberts said, “it’s so important to make sure that different specialists are working together and speaking the same language.”</p>
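<p>One flagged deficiency, evaluating on the same data used for training, is mechanically easy to guard against. A minimal sketch (mine, not the reviewers'): split by patient identifier so that no patient's scans land in both sets, then assert the sets are disjoint:</p>

```python
import random

def split_by_patient(patient_ids, test_fraction=0.25, seed=42):
    """Split patient IDs so no patient contributes scans to both sets."""
    ids = sorted(set(patient_ids))
    random.Random(seed).shuffle(ids)
    cut = int(len(ids) * (1 - test_fraction))
    return set(ids[:cut]), set(ids[cut:])

# Illustrative IDs; a real study would use its de-identified patient codes.
patients = [f"patient_{i:03d}" for i in range(100)]
train, test = split_by_patient(patients)

# The leakage check: a patient in both sets would invalidate the evaluation.
assert not (train & test), "train/test leakage: same patients in both sets"
print(len(train), len(test))
```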



<p>Better late than never, though, and to that end, the reviewers have some recommendations for machine learning model developers working on COVID diagnosis: know the data you’re working with, especially when it comes to public datasets; work with diverse, large datasets; and, crucially, include better documentation to allow for reproducibility.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-for-covid-diagnosis-falls-short/">Machine Learning for COVID Diagnosis Falls Short</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-for-covid-diagnosis-falls-short/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Houston data science company expands pandemic-inspired research tool</title>
		<link>https://www.aiuniverse.xyz/houston-data-science-company-expands-pandemic-inspired-research-tool/</link>
					<comments>https://www.aiuniverse.xyz/houston-data-science-company-expands-pandemic-inspired-research-tool/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 16 Mar 2021 07:26:16 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[expands]]></category>
		<category><![CDATA[Houston]]></category>
		<category><![CDATA[inspired]]></category>
		<category><![CDATA[Pandemic]]></category>
		<category><![CDATA[Research]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13536</guid>

					<description><![CDATA[<p>Source &#8211; https://houston.innovationmap.com/ Last fall, Houston-based Mercury Data Science released an AI-driven app designed to help researchers unlock COVID-19-related information tucked into biomedical literature. The app simplified <a class="read-more-link" href="https://www.aiuniverse.xyz/houston-data-science-company-expands-pandemic-inspired-research-tool/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/houston-data-science-company-expands-pandemic-inspired-research-tool/">Houston data science company expands pandemic-inspired research tool</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://houston.innovationmap.com/</p>



<p>Last fall, Houston-based Mercury Data Science released an AI-driven app designed to help researchers unlock COVID-19-related information tucked into biomedical literature. The app simplified access to data about subjects like genes, proteins, drugs, and diseases.</p>



<p>Now, a year into the coronavirus pandemic, Mercury Data Science is applying this technology to areas like agricultural biotech, cancer therapeutics, and neuroscience. It&#8217;s an innovation that arose from the pandemic but that promises broader, long-lasting benefits.</p>



<p>Angela Holmes, chief operating officer of Mercury Data Science, says the platform relies on an AI technique known as natural language processing (NLP) to mine scientific literature and deliver real-time results to researchers.</p>



<p>&#8220;We developed this NLP platform as a publicly available app to enable scientists to efficiently discover biological relationships contained in COVID research publications,&#8221; Holmes says.</p>



<p>The platform:</p>



<ul class="wp-block-list"><li>Contains dictionaries with synonyms to identify things like genes and proteins that may go by various names in scientific literature.</li><li>Produces data visualizations of relationships among various biological functions.</li><li>Summarizes the most important data points on a given topic from an array of publications.</li><li>Depends on data architecture to automate how data is retrieved and processed.</li></ul>
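<p>As a hypothetical sketch of the first capability above, a synonym dictionary can normalize the many names a gene goes by to one canonical symbol before counting mentions; the entries below are illustrative examples, not the platform's actual dictionaries:</p>

```python
# Map each synonym found in the literature to one canonical gene symbol.
# Example entries only; a real dictionary would hold thousands of genes.
GENE_SYNONYMS = {
    "ACE2": ["ACE2", "angiotensin-converting enzyme 2", "ACEH"],
    "TMPRSS2": ["TMPRSS2", "transmembrane protease serine 2", "PRSS10"],
}

# Invert to a lookup table: lowercase synonym -> canonical symbol.
LOOKUP = {syn.lower(): gene for gene, syns in GENE_SYNONYMS.items() for syn in syns}

def count_gene_mentions(text):
    """Count mentions per canonical gene, matching longest synonyms first."""
    counts = {}
    lowered = text.lower()
    for syn in sorted(LOOKUP, key=len, reverse=True):
        hits = lowered.count(syn)
        if hits:
            counts[LOOKUP[syn]] = counts.get(LOOKUP[syn], 0) + hits
            lowered = lowered.replace(syn, " ")  # avoid double-counting substrings
    return counts

abstract = ("Angiotensin-converting enzyme 2 (ACE2) is the entry receptor; "
            "TMPRSS2 primes the spike protein.")
print(count_gene_mentions(abstract))
```

<p>Matching longest synonyms first is the key design choice: it keeps a long phrase like "angiotensin-converting enzyme 2" from being miscounted via its shorter pieces.</p>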



<p>In agricultural biotech, the platform enables researchers to sift through literature to dig up data about plant genetics, Holmes says. The lack of gene-naming standards in the world of plants complicates efforts to search data about plant genetics, she says.</p>



<p>The platform&#8217;s ability to easily ferret out information about plant genetics &#8220;allows companies seeking gene-editing targets to make crops more nutritious and more sustainable as the climate changes to have a rapid way to de-risk their genomic analyses by quickly assessing what is already known versus what is unknown,&#8221; Holmes says.</p>



<p>The platform allowed one of Mercury Data Science&#8217;s agricultural biotech customers to comb through scientific literature about plant genetics to support targeted gene editing in a bid to improve crop yields.</p>



<p>In the field of cancer therapeutics and other areas of pharmaceuticals, the platform helps prioritize drug candidates, Holmes says. One of Mercury Data Science&#8217;s customers used the platform to extract data from about 2 terabytes (or 2 trillion bytes) of information to evaluate drug candidates. The information included drug studies, clinical trials, and patents. Armed with that data, Mercury Data Science&#8217;s cancer therapy client signed agreements with new pharmaceutical partners.</p>



<p>The platform also applies to the hunt for biomarkers in neuroscience, including disorders such as depression, anxiety, autism and multiple sclerosis. Data delivered through the platform helps bring new neurobehavioral therapeutics to market, Holmes says.</p>



<p>&#8220;An NLP platform to automatically process newly published literature for more insight on the search for digital biomarkers represents a great opportunity to accelerate research in this area,&#8221; she says.</p>



<p>One of Mercury Data Science&#8217;s customers adopted the platform to improve insights into patients with depression and anxiety in order to improve treatment of those conditions.</p>



<p>The new platform — initially developed as a tool to combat COVID-19 — falls under the startup&#8217;s vast umbrella of artificial intelligence and data science. Founded in 2017, Mercury Data Science emerged because portfolio companies of the Houston-based Mercury Fund were seeking to get a better handle on AI and data science.</p>



<p>Last April, Angela Wilkins, founder, co-CEO and chief technology officer of Mercury Data Science, left the company to lead Rice University&#8217;s Ken Kennedy Institute. Dan Watkins, co-founder and managing director of the Mercury Fund, remains at Mercury Data Science as CEO.</p>



<p>The Ken Kennedy Institute fosters collaborations in computing and data. Wilkins replaced Jan Odegard as executive director of the institute. Odegard now is senior director of industry and academic partnerships at The Ion, the Rice-led innovation hub.</p>



<p>Wilkins &#8220;is an academic at heart with considerable experience working with faculty and students, and an entrepreneur who has helped build a successful technology company,&#8221; Lydia Kavraki, director of the Ken Kennedy Institute, said in a news release announcing Wilkins&#8217; new role. &#8220;Over her career, Angela has worked on data and computing problems in a number of disciplines, including engineering, life sciences, health care, agriculture, policy, technology, and energy.&#8221;</p>



<p>According to a recently released report, a few key industries in Houston have attracted the bulk of the city&#8217;s venture capital investment dollars.</p>



<p>The Houston Tech Report by the Greater Houston Partnership and Houston Exponential has revealed that the city is home to 8,800 tech-related firms, including over 700 venture-backed startups that have attracted over $2.6 billion in VC funding over the past five years. Annual VC investment has nearly tripled in that same timeframe — from $284 million in 2016 to $753 million in 2020.</p>



<p>&#8220;Houston is a city that has been leading the way for decades, with breakthrough innovations that have truly changed the world,&#8221; says Bob Harvey, president and CEO of the Greater Houston Partnership, in a news release. &#8220;Over the past few years, we have been working to transform an already incredible economy into one that competes as a leading digital tech city.&#8221;</p>



<p>Zooming in on the industries attracting the most capital in Houston, life sciences and oil and gas technology continue to reign supreme. Of the VC dollars going into Houston companies, 17 percent goes into life science companies and 17 percent goes into oil and gas, according to the report. Cleantech and oncology are both niches in Houston that have seen growth in VC investment.</p>



<p>Software as a service has seen significant growth since 2011 and represents the third-most-invested industry, attracting 14 percent of VC dollars.</p>



<p>Contributing to the innovation ecosystem&#8217;s growth is an increase in startup development organizations — the city has added over 30 SDOs, including non-profits, incubators/accelerators, coworking spaces and makerspaces, since 2017 — and access to tech talent. According to the report, Houston has the 12th largest tech sector in the U.S. with 235,000 tech workers, and this sector contributes $28.1 billion to the region&#8217;s GDP.</p>



<p>&#8220;Houston in 2020 had not one but two unicorns (private tech companies exceeding a $1 billion valuation), our first ever,&#8221; says Harvin Moore, president of HX. &#8220;That&#8217;s a reflection of both the rate of growth and early stage of our ecosystem. We will see an increasing number of startups as these companies continue to grow and others follow.&#8221;</p>



<p>The post <a href="https://www.aiuniverse.xyz/houston-data-science-company-expands-pandemic-inspired-research-tool/">Houston data science company expands pandemic-inspired research tool</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/houston-data-science-company-expands-pandemic-inspired-research-tool/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>ASF Keynotes Showcase How HPC and Big Data Have Pervaded the Pandemic</title>
		<link>https://www.aiuniverse.xyz/asf-keynotes-showcase-how-hpc-and-big-data-have-pervaded-the-pandemic/</link>
					<comments>https://www.aiuniverse.xyz/asf-keynotes-showcase-how-hpc-and-big-data-have-pervaded-the-pandemic/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 25 Feb 2021 06:07:43 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[ASF]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[Keynotes]]></category>
		<category><![CDATA[Pandemic]]></category>
		<category><![CDATA[Pervaded]]></category>
		<category><![CDATA[Showcase]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13097</guid>

					<description><![CDATA[<p>Source &#8211; https://www.hpcwire.com/ Last Thursday, a range of experts joined the Advanced Scale Forum (ASF) in a rapid-fire roundtable to discuss how advanced technologies have transformed the <a class="read-more-link" href="https://www.aiuniverse.xyz/asf-keynotes-showcase-how-hpc-and-big-data-have-pervaded-the-pandemic/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/asf-keynotes-showcase-how-hpc-and-big-data-have-pervaded-the-pandemic/">ASF Keynotes Showcase How HPC and Big Data Have Pervaded the Pandemic</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.hpcwire.com/</p>



<p>Last Thursday, a range of experts joined the Advanced Scale Forum (ASF) in a rapid-fire roundtable to discuss how advanced technologies have indelibly transformed the way humanity responded to the COVID-19 pandemic. The roundtable, held near the one-year mark of the first lockdowns in North America, opened with a session from Ari Berman, CEO of BioTeam.</p>



<p>“It’s so easy to focus on the bad things we hear about the remarkable and really unfortunate numbers of people who have died from this, the huge numbers of people who’ve been infected from it, we talk about these new more infectious variants, et cetera,” Berman said – but, he added, there were major success stories in the pandemic, too: collaborations and technology deployments that will save “millions of lives.”</p>



<p><strong>NIH Keynote: Creating a Coordinated Data Approach to Help Address COVID-19</strong></p>



<p>With that, the roundtable launched into its first keynote, delivered by the National Institutes of Health’s Susan Gregurick, who serves as associate director for data science and director of the NIH’s Office of Data Science Strategy.&nbsp;</p>



<p>“We’ve been working for almost a year now to sprint ahead to collect and enhance SARS-CoV-2 data – clinical data, structural data, genomics data – to address the pandemic,” Gregurick said. “The first thing that we tried to do – and we did successfully – was to get different types of at-home, point-of-care clinical testing technologies out into the hands of our citizens.”</p>



<p>This program – called RADx – ranges from preparing for high-throughput COVID-19 testing to engaging underserved populations through community-engaged implementation projects, and it’s one of several data-driven projects run inside the NIH. The NIH has also, for instance, been working on its Collaboration to Assess Risk and Identify Long-Term Outcomes (CARING) for Children with COVID Program.&nbsp;</p>



<p>Still, the NIH needed to develop a longer reach. They worked with the National COVID Cohort Collaborative (N3C), which integrates electronic healthcare record data on COVID-19, augmenting it with “an incredibly rich set of data from vulnerable populations.” As of a few months ago, the N3C had millions of participants contributing data to hundreds of ongoing projects and collaborators. (The data is accessible in a cloud archive.)</p>



<p>The NIH also worked with the All of Us Research program – which collects longitudinal COVID-19 health outcome data alongside phenotypic and serological data – and with the BioData Catalyst, which provides data from clinical trials and observational studies such as those that evaluated hydroxychloroquine early in the pandemic.</p>



<p>Soon enough, the NIH found itself serving as an aggregator of a wide range of data from various sources – and having to grapple with the logistical implications of coordinating both the data and access to the data across a wide range of interested parties.</p>



<p>“Making all this work together across many different projects really does require some efforts in data harmonization,” Gregurick said. “We’ve been tackling this in two different ways: … common data elements and mapping to data models. In some cases it’s a development of curation strategies within the data hub, … in other cases it’s at the point of collection and really collaborating with our data coordination centers.”&nbsp;</p>



<p>The different stages of the RADx program, for instance, shared around 16 common data elements (CDEs) that could be more easily integrated, but each program also contained its own unique elements. “We’re using those common data elements to help construct data models and data search strategies for ontology,” Gregurick said. “We’re also mapping these to a common data model.”&nbsp;</p>
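<p>As an invented illustration of what such CDE mapping can look like in practice (the field names below are made up, not RADx's actual elements), each study's raw field names are renamed onto the shared elements of a common data model so records from different projects can be combined:</p>

```python
# Per-study field name -> common data element (CDE) name. Invented examples.
CDE_MAP = {
    "study_a": {"pt_age": "age_years", "sex_mf": "sex", "cov_result": "test_result"},
    "study_b": {"AgeAtVisit": "age_years", "Gender": "sex", "PCR": "test_result"},
}

def to_common_model(study, record):
    """Rename a raw record's fields to the common data model's element names."""
    mapping = CDE_MAP[study]
    # Keep only fields with a CDE mapping; unmapped fields stay study-specific.
    return {mapping[k]: v for k, v in record.items() if k in mapping}

raw_a = {"pt_age": 54, "sex_mf": "F", "cov_result": "positive"}
raw_b = {"AgeAtVisit": 41, "Gender": "M", "PCR": "negative", "SiteCode": 7}

harmonized = [to_common_model("study_a", raw_a), to_common_model("study_b", raw_b)]
print(harmonized)
```

<p>Once both records share the same element names, they can be pooled, searched and modeled together, which is the integration problem the harmonization effort addresses.</p>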



<p>The NIH has also been working on unifying other supporting technology, such as the researcher authentication services that allow access to various data, tools and hubs, across platforms. More ambitiously, they’re piloting a program to allow the linking of records from a given individual across platforms without compromising that individual’s identity.</p>
<p>The post <a href="https://www.aiuniverse.xyz/asf-keynotes-showcase-how-hpc-and-big-data-have-pervaded-the-pandemic/">ASF Keynotes Showcase How HPC and Big Data Have Pervaded the Pandemic</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/asf-keynotes-showcase-how-hpc-and-big-data-have-pervaded-the-pandemic/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial intelligence presents a moral dilemma</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-presents-a-moral-dilemma/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-presents-a-moral-dilemma/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 22 Feb 2021 05:52:30 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[dilemma]]></category>
		<category><![CDATA[moral]]></category>
		<category><![CDATA[Outbreak]]></category>
		<category><![CDATA[Pandemic]]></category>
		<category><![CDATA[presents]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12982</guid>

					<description><![CDATA[<p>Source &#8211; https://mg.co.za/ Since the outbreak of the pandemic, the world has grown increasingly reliant on artificial intelligence (AI) technologies. Thousands of new innovations — from contact-tracing <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-presents-a-moral-dilemma/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-presents-a-moral-dilemma/">Artificial intelligence presents a moral dilemma</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://mg.co.za/</p>



<p>Since the outbreak of the pandemic, the world has grown increasingly reliant on artificial intelligence (AI) technologies. Thousands of new innovations — from contact-tracing apps to the drones delivering medical equipment — sprang up to help us meet the challenges of Covid-19 and life under lockdown.&nbsp;</p>



<p>The unprecedented speed with which a vaccine for Covid-19 was discovered can partly be attributed to the use of AI algorithms which rapidly crunched the data from thousands of clinical trials, allowing researchers around the world to compare notes in real time.&nbsp;</p>



<p>As Satya Nadella, the chief executive of Microsoft, observed, in just two months the world witnessed a rate of digital transition we’d usually only see in two years. </p>



<p>In 2017, PwC published a study showing that the adoption of AI technologies could increase global GDP by 14% by 2030. In addition to creating jobs and boosting economies, AI technologies have the potential to drive sustainable development and even out inequalities, democratising access to healthcare and education, mitigating the effects of climate change and making food production and distribution more efficient. </p>



<p>But, unfortunately, the potential of “AI for good” is not currently being realised. As research published by the International Monetary Fund last year shows, today AI technologies are more likely to exacerbate existing global inequalities than to address them. Or, in the words of the speculative fiction writer William Gibson: “The future is already here – it’s just not evenly distributed.”</p>



<p>I am a professor of philosophy of science and the leader of a group concentrating on ethics at the Centre for AI Research. I focus on ensuring that these technologies are developed in a human-centred way for the benefit of all. To achieve this, we need equal education, actionable regulation and true inclusion. These objectives are very far from being met on a global scale, and certainly are not met everywhere in Africa.&nbsp;</p>



<p>This presents a serious moral dilemma to a country such as South Africa. Do we throw caution to the wind and focus exclusively on becoming a global player in AI technology advancement as fast as possible, or do we pause and consider what measures are needed to ensure our actions will not sacrifice or imperil already vulnerable sectors of our society?&nbsp;</p>



<p>The scramble to develop technologies in the hubs of San Francisco, Austin, London and Beijing took place in a more or less unregulated Wild West until very recently. Now, the world is waking up. In June 2020, United Nations secretary general António Guterres laid out a roadmap for digital co-operation, acknowledging that the responsibility for reaching a global agreement on the ethical development of AI rested on the shoulders of the UN’s Educational, Scientific and Cultural Organisation (Unesco).</p>



<p>Unesco is working to build a global consensus on how governments can harness AI to benefit everyone. A diverse group of 24 specialists from six regions of the world met in 2020 and collaborated to produce a Global Recommendation on the Ethics of AI. If adopted by Unesco’s 193 member states, this agreement on technology development will be groundbreaking: instead of competing with one another to corner the market on bigger and faster technology, countries all over the world will be united by a new common vision: to develop human-centred, ethical artificial intelligence. </p>



<p>One of the biggest obstacles to realising the hope of AI for social good, however, is the silencing of some voices in a debate that should be a universal one. Africa’s best and brightest have been excluded from contributing to the conversation in many ways, ranging from difficulties in accessing visas to not being included in international networks. There is serious and important work being done on the subject in Africa – Data Science Africa and the Deep Learning Indaba, to name two examples. </p>



<p>This work is often overlooked by the international community when, in fact, the world should seize the opportunity to learn from research in Africa. As Moustapha Cisse, director of Google Ghana, says: “Being in an environment where the challenges are unique in many ways gives us an opportunity to explore problems that maybe other researchers in other places would not be able to explore.”</p>



<p>In addition, in December last year, following a high-profile parting of ways with Google, Timnit Gebru, the highly regarded ethics researcher, expressed deep concern about the possibility of racial discrimination being amplified by AI technologies: “Unless there is some sort of shift of power, where people who are most affected by these technologies are allowed to shape them as well and be able to imagine what these technologies should look like from the ground up and build them according to that, unless we move towards that kind of future, I am really worried that these tools are going to be used more for harm than good.”</p>



<p>Gebru’s fears are borne out by a plethora of examples, from racist facial recognition technology to racist predictive policing tools and financial risk analysis. Gebru calls for technical communities to become more diverse and inclusive, because inherent structural bias in training data would then stand a better chance of being picked up.&nbsp;</p>



<p>It is also becoming very clear that every person has a role in ensuring that innovation in the field upholds human rights, such as the right to privacy, or the right not to be racially discriminated against. Every person should have access to education, should be sensitised to the ethics of AI and be information literate; every person should have access to positions in tech companies and be able to participate in technological invention, and every person should be protected against possible harm from technologies in an effective and actionable way.&nbsp;</p>



<p>Furthermore, regulations need to be actionable, legally enforceable and as dynamic as the ethics underpinning them. First, we must guard against lofty ideals that are alien to the world of mathematics and algorithms that computer engineers inhabit; we must acknowledge the multi- and interdisciplinary nature of AI as a discipline, in full, in our classrooms, places of work and governmental settings. Second, regulation should be armed with legal force. It is too easy to shirk regulations by citing in-house policies, or by shifting some development to countries with weaker legislation in some areas. Third, AI ethics regulation should be supple enough to absorb future technological advances as well as changes in the AI readiness of different countries, which ranges along a continuum of scientific, technological, educational, societal, cultural, infrastructural, economic, legal and regulatory dimensions.&nbsp;</p>



<p>Since any new AI application can be bought or sold anywhere in the world, and since “ethics dumping” – a term coined by the well-known ethics of information expert Luciano Floridi, referring to big companies simply taking their business where regulation is weaker – is a real risk in Africa, the new rule book on how AI technologies are developed must be a global rule book. As Teki Akuetteh Falconer, Ghanaian lawyer and executive director of Africa Digital Rights Hub, said: “I’m a data protection regulator but unable to call big tech companies to order because they’re not even registered in my country!”</p>



<p>If Unesco’s member states adopt the ethics recommendations, it could pave the way for realising the potential of AI technologies that benefit us all.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-presents-a-moral-dilemma/">Artificial intelligence presents a moral dilemma</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-presents-a-moral-dilemma/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Liverpool scientists deploy Artificial Intelligence to develop model that predicts the next pandemic</title>
		<link>https://www.aiuniverse.xyz/liverpool-scientists-deploy-artificial-intelligence-to-develop-model-that-predicts-the-next-pandemic/</link>
					<comments>https://www.aiuniverse.xyz/liverpool-scientists-deploy-artificial-intelligence-to-develop-model-that-predicts-the-next-pandemic/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 18 Feb 2021 04:42:28 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Develop]]></category>
		<category><![CDATA[Liverpool]]></category>
		<category><![CDATA[Pandemic]]></category>
		<category><![CDATA[Predicts]]></category>
		<category><![CDATA[scientists]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12888</guid>

					<description><![CDATA[<p>Source &#8211; https://www.timesnownews.com/ A team of scientists at the UK&#8217;s Liverpool University has used artificial intelligence (AI) to work out where the next novel coronavirus could emerge. <a class="read-more-link" href="https://www.aiuniverse.xyz/liverpool-scientists-deploy-artificial-intelligence-to-develop-model-that-predicts-the-next-pandemic/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/liverpool-scientists-deploy-artificial-intelligence-to-develop-model-that-predicts-the-next-pandemic/">Liverpool scientists deploy Artificial Intelligence to develop model that predicts the next pandemic</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.timesnownews.com/</p>



<p>A team of scientists at the UK&#8217;s Liverpool University has used artificial intelligence (AI) to work out where the next novel coronavirus could emerge.</p>



<h2 class="wp-block-heading">KEY HIGHLIGHTS</h2>



<ul class="wp-block-list"><li>The COVID-19 pandemic was the first natural calamity of such magnitude to strike mankind in almost a century.</li><li>Mankind had simply not provided for such an eventuality and was caught off guard on almost all counts of preparedness.</li><li>With climate change being real and the threat of pandemics looming large, it would certainly help to know whether a disease is going to acquire pandemic proportions.</li></ul>



<p>Rapidly advancing globalisation has turned the entire Earth into one huge village, and the speedy connectivity and communication that come with it also ensured the rapid advance of the COVID-19 pandemic, which began with a strain of the novel coronavirus that first emerged in Wuhan, China in late 2019. Now, according to a paper published in Nature Communications, &#8220;The spread of influenza can be modelled and forecast using a machine-learning-based analysis of anonymized mobile phone data. The mobility map, presented in Nature Communications this week, is shown to accurately forecast the spread of influenza in New York City and Australia.&#8221;</p>
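<p>As a hedged illustration only &#8211; not the model from the Nature Communications paper &#8211; the idea of forecasting disease spread from aggregated movement data can be sketched as a toy metapopulation model in which infections flow between regions in proportion to a mobility matrix. All numbers, rates and couplings below are invented.</p>

```python
# A toy metapopulation model: infections spread between regions in
# proportion to a mobility matrix, loosely illustrating the idea of
# forecasting influenza spread from aggregated movement data.

def step(infected, susceptible, mobility, beta=0.3, gamma=0.1):
    """Advance the epidemic one time step across all regions."""
    n = len(infected)
    new_infected = []
    new_susceptible = []
    for i in range(n):
        # Force of infection on region i: cases arriving from every
        # region j, weighted by the travel volume from j to i.
        pressure = sum(mobility[j][i] * infected[j] for j in range(n))
        infections = min(beta * susceptible[i] * pressure, susceptible[i])
        recoveries = gamma * infected[i]
        new_susceptible.append(susceptible[i] - infections)
        new_infected.append(infected[i] + infections - recoveries)
    return new_infected, new_susceptible

# Two regions; travel is mostly local, with 5% cross-traffic.
mobility = [[0.95, 0.05],
            [0.05, 0.95]]
infected = [0.01, 0.0]       # outbreak seeded in region 0 only
susceptible = [0.99, 1.0]

for _ in range(50):
    infected, susceptible = step(infected, susceptible, mobility)

# The outbreak reaches region 1 purely through the mobility coupling.
print(infected[1] > 0)  # True
```

Real forecasting systems replace the invented mobility matrix with anonymised movement data and fit the transmission parameters to observed case counts; the structure of the update step, however, is the same.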



<p>The year 2020 dawned with the world bracing to handle a possible crisis, and by the end of the year global deaths had reached nearly 2 million.</p>



<p>To cut a long story short, mankind has now been through so much in terms of mental agony, pain, loss, death, long-lasting illness and economic decline on account of this pandemic, despite rapid advances in science, that it has begun to dread the prediction by environmentalists and scientists that we have just entered a pandemic era and that more such pandemics are likely to come.<br><br><strong>Predicting the onset of a pandemic:</strong><br>According to a report in the&nbsp;<em>BBC</em>, a team of scientists has used artificial intelligence (AI) to work out where the next novel coronavirus could emerge.</p>



<p>The researchers are reportedly combining insights from fundamental biology with machine-learning tools.</p>



<p>This is not mere conjecture: the scientists are building on what they gained from similar experiments in the past. Their computer algorithm predicted many more potential hosts of new virus strains than had previously been detected.&nbsp;The findings have been published in the journal&nbsp;<em>Nature Communications.&nbsp;</em></p>



<p>According to this report in&nbsp;<em>Nature Communications</em>, the spread of viral diseases through a population depends on interactions between infected and uninfected people. Models that predict how a disease will spread across a city or country currently make use of data that are sparse and imprecise, such as commuter surveys or internet search data.</p>



<p>Dr Marcus Blagrove, a virologist from the University of Liverpool, UK, who was involved in the study, emphasises the need to know where the next coronavirus might come from.</p>



<p>&#8220;One way they&#8217;re generated is through recombination between two existing coronaviruses &#8211; so two viruses infect the same cell and they recombine into a &#8216;daughter&#8217; virus that would be an entirely new strain.&#8221;</p>



<p>Scientists say that to get the prediction algorithm right, the first step was to look for species that were able to harbour several viruses at once. Lead researcher Dr Maya Wardeh, who is also from the University of Liverpool, successfully deployed existing biological knowledge to teach the algorithm to search for patterns that made this more likely to happen.</p>



<p>This step concluded that many more mammals were potential hosts for new coronaviruses than previous surveillance work &#8211; screening animals for viruses &#8211; had shown.</p>
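<p>The first step described above can be sketched in a few lines: from a table of known host&#8211;virus associations, flag species linked to more than one coronavirus, since co-infection of a single host is the precondition for recombination. The associations below are invented placeholders, not data from the Liverpool study.</p>

```python
# Flag species that harbour several coronaviruses at once, i.e. the
# candidate hosts in which recombination into a new strain could occur.
from collections import defaultdict

# Invented host-virus association records for illustration.
associations = [
    ("horseshoe bat", "SARS-CoV"),
    ("horseshoe bat", "SARS-CoV-2"),
    ("camel", "MERS-CoV"),
    ("pangolin", "pangolin-CoV"),
    ("civet", "SARS-CoV"),
    ("civet", "civet-CoV"),
]

viruses_by_host = defaultdict(set)
for host, virus in associations:
    viruses_by_host[host].add(virus)

# Candidate recombination hosts: species linked to >= 2 coronaviruses.
candidates = sorted(h for h, vs in viruses_by_host.items() if len(vs) >= 2)
print(candidates)  # ['civet', 'horseshoe bat']
```

The study's actual algorithm goes much further, using biological features to predict associations that have not yet been observed, which is why it surfaced many more potential hosts than surveillance alone.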
<p>The post <a href="https://www.aiuniverse.xyz/liverpool-scientists-deploy-artificial-intelligence-to-develop-model-that-predicts-the-next-pandemic/">Liverpool scientists deploy Artificial Intelligence to develop model that predicts the next pandemic</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/liverpool-scientists-deploy-artificial-intelligence-to-develop-model-that-predicts-the-next-pandemic/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial Intelligence: The new star of remote recruiting?</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-the-new-star-of-remote-recruiting/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-the-new-star-of-remote-recruiting/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 05 Feb 2021 11:20:12 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Pandemic]]></category>
		<category><![CDATA[recruiting]]></category>
		<category><![CDATA[remote]]></category>
		<category><![CDATA[star]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12712</guid>

					<description><![CDATA[<p>Source &#8211; https://www.dqindia.com/ With the pandemic, companies are accelerating their digital transformation, in particular, to adapt to telework. But how to recruit in total confinement? This movement is also impacting human resources departments, which are increasingly using <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-the-new-star-of-remote-recruiting/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-the-new-star-of-remote-recruiting/">Artificial Intelligence: The new star of remote recruiting?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.dqindia.com/</p>



<p>With the pandemic, companies are accelerating their digital transformation, in particular to adapt to telework. But how to recruit in total confinement? This movement is also impacting human resources departments, which are increasingly using artificial intelligence (AI) to recruit.</p>



<p>Conversational robots, or “chatbots”, perform an initial screening of resumes; some tools go further, with interviews evaluated by a robot or detection of emotions by video. At a time when many candidates are confined, these tools can facilitate recruitment.</p>



<h4 class="wp-block-heading">A growing market</h4>



<p>Some companies use, for example, a pre-recruitment chatbot that validates a certain number of prerequisites before transmitting the most relevant profiles to the company’s top management. The volume of resumes received is very large, which makes it difficult for companies to respond satisfactorily without the application of AI. Vendors offer such chatbots alongside resume-analysis tools that identify suitable candidates for a given role.</p>
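<p>A minimal sketch of such a pre-recruitment screening flow, with invented questions and thresholds (no real vendor tool is implied): a scripted check validates hard prerequisites before a human recruiter ever sees the application.</p>

```python
# Toy pre-screening: each prerequisite is a named check applied to
# the candidate's answers; only candidates passing every check move on.

PREREQUISITES = {
    "years_experience": lambda v: isinstance(v, int) and v >= 3,
    "willing_to_relocate": lambda v: v is True,
    "has_work_permit": lambda v: v is True,
}

def prescreen(answers):
    """Return (passed, failed_checks) for one candidate's answers."""
    failed = [name for name, check in PREREQUISITES.items()
              if not check(answers.get(name))]
    return (len(failed) == 0, failed)

ok, failed = prescreen({"years_experience": 5,
                        "willing_to_relocate": True,
                        "has_work_permit": True})
print(ok)      # True

ok, failed = prescreen({"years_experience": 1,
                        "willing_to_relocate": True,
                        "has_work_permit": False})
print(failed)  # ['years_experience', 'has_work_permit']
```

Production systems wrap this kind of rule check in a conversational interface and add ranking models on top, but the gatekeeping step reduces to exactly this pattern.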



<p>People can say “it sucks, it’s a machine that reads my resume”, but the machine has access to much more information than the recruiter would have. Alone in front of his webcam, the candidate answers questions prepared by the recruiter within a limited time. The solution then automates analysis of the video. Human resources management is, therefore, evolving to integrate digital technologies and improve performance in the development of human capital.</p>



<p>However, although this competition for talent is intensifying, the number of recruiting teams remains stable.</p>



<h4 class="wp-block-heading">A technological gap to catch up</h4>



<p>Recruiters are still held back by organizational, cultural, technological and financial barriers that prevent them from accessing more modern tools. Job seekers, however, are increasingly relying on social media and dedicated apps to find a position.</p>



<p>Some candidates request remote interviews, digital contract signings and electronic pay stubs. Young talents are hyperconnected and accessible in virtual spaces in which recruiters find it difficult to invest. This incompatibility between the technologies used by recruiters and those preferred by candidates may explain why supply and demand find it difficult to meet.</p>



<h4 class="wp-block-heading">Gamification</h4>



<p>Several digital technologies are used in the field of e-recruitment. Social networks provide privileged access to a large number of talents worldwide and allow direct communication with them in a friendly and informal way. These networks provide additional and decisive information about potential candidates. Companies can also sponsor courses, promote their employer brands, and identify the top performers.</p>



<p>Gamification is also of increasing interest to companies. This technique places candidates in entertaining virtual universes, which allows companies to assess their skills and take a more qualitative approach to recruitment.</p>



<h4 class="wp-block-heading">Objective truth fantasy</h4>



<p>Despite saving time and money, AI brings a risk of standardizing profiles, so the expertise of recruiters remains necessary during the final selection of candidates. Some tools, such as detection of emotions in the voice, are not yet reliable; until we have a clear assessment of them, we should not use amateur or pseudo-amateur tools.</p>



<p>Currently, human resources departments are under strong pressure from the top management of companies to use these tools, which at first sight are attractive. There is a fantasy of objective truth in algorithmic processing, but the reality is more complex, in particular because these tools are designed by humans. Even after continuous validation of solutions for analyzing the emotions of a job candidate, it is still quite questionable to draw conclusions from facial expressions. This is also true for the analyses carried out by robots in interviews.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-the-new-star-of-remote-recruiting/">Artificial Intelligence: The new star of remote recruiting?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-the-new-star-of-remote-recruiting/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Why should anyone trust AI?</title>
		<link>https://www.aiuniverse.xyz/why-should-anyone-trust-ai/</link>
					<comments>https://www.aiuniverse.xyz/why-should-anyone-trust-ai/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 05 Oct 2020 10:27:09 +0000</pubDate>
				<category><![CDATA[AI-ONE]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI benefits]]></category>
		<category><![CDATA[AI delivers]]></category>
		<category><![CDATA[Businesses]]></category>
		<category><![CDATA[Digital Transformation]]></category>
		<category><![CDATA[Pandemic]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11945</guid>

					<description><![CDATA[<p>Source: ciodive.com The pandemic has changed the world, accelerating the pace of digital transformation and forcing businesses to find new efficiencies — not only in cost cutting <a class="read-more-link" href="https://www.aiuniverse.xyz/why-should-anyone-trust-ai/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/why-should-anyone-trust-ai/">Why should anyone trust AI?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: ciodive.com</p>



<p>The pandemic has changed the world, accelerating the pace of digital transformation and forcing businesses to find new efficiencies — not only in cost cutting but in decision making.</p>



<p>That&#8217;s why business leaders love AI: it can help them make better and faster decisions by providing quicker and more accurate insights. It can tell them what customers really want — and what they want right now.</p>



<p>There is a problem though: Consumers don&#8217;t love AI. In one recent study, only 25% said they would trust an AI decision more than one made by a human, and less than 30% were comfortable with businesses using AI to interact with them.</p>



<p>Consumers are skeptical about AI because they don&#8217;t understand it. Many are scared by what they read and hear, and not all their fears are misplaced. </p>



<p>Unintentional built-in bias leads algorithms to make decisions, like credit denials, that unfairly discriminate against some minority consumers. It&#8217;s also probable that jobs will be lost to AI. What industry does to mitigate both of these problems will be key to maintaining consumer trust.&nbsp;</p>



<p>Many business leaders attempt to impress customers merely by devising an &#8220;AI strategy&#8221; or building an &#8220;AI system&#8221; — and trumpeting it loudly. But customers are put off by this approach.&nbsp;</p>



<p>What consumers value is the convenience, speed and choice delivered by AI working behind the scenes when they&#8217;re doing things like shopping or seeking health advice.&nbsp;</p>



<p>They&#8217;re impressed by better products and services, greater convenience and better value for money, all of which AI delivers. The disconnect means business leaders need to think about their AI efforts differently, and spend more time educating consumers on how AI benefits them.</p>



<h3 class="wp-block-heading">Not to be denied: AI is doing great things</h3>



<p>Few shoppers realize the overnight arrival of their online purchases depends on AI algorithms getting items packed and shipped expeditiously. How many grasp that AI enables a customer service agent or chatbot to make quick-fire responses to their queries and requests? Few probably understand AI is locating the nearby Lyft driver within seconds of a tap on a smartphone.</p>



<p>The workings of AI are mostly unseen in the financial industry, where they very much benefit both institutions and customers. Take fraud detection. Some earlier protections were simple, blunt instruments.&nbsp;</p>



<p>Sure, they stopped fraudulent transactions, but they also blocked many legitimate ones (&#8220;false positives&#8221;), often resulting in terrible customer experiences. Today, AI-based platforms pinpoint fraud faster and better, reducing false positives by more than half. That equals more legitimate transactions and fewer unhappy customers.</p>
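<p>A toy illustration of this trade-off, with invented transactions and scores: a blunt amount-only rule blocks every large purchase, while a score that requires several signals to agree blocks far fewer legitimate ones.</p>

```python
# Compare a blunt fraud rule against a simple multi-signal score on
# a handful of invented transactions: (amount, is_foreign, is_fraud).

transactions = [
    (2500, False, False),   # legitimate big purchase
    (3000, True,  True),    # actual fraud
    (2800, False, False),   # legitimate big purchase
    (40,   True,  False),   # legitimate small foreign purchase
]

def blunt_rule(amount, is_foreign):
    # Blocks every large transaction, fraudulent or not.
    return amount > 2000

def scored_rule(amount, is_foreign):
    # Needs several independent signals before blocking.
    score = 0.0
    score += 0.5 if amount > 2000 else 0.0
    score += 0.5 if is_foreign else 0.0
    return score >= 1.0

def false_positives(rule):
    # Legitimate transactions the rule would wrongly block.
    return sum(1 for amt, foreign, fraud in transactions
               if rule(amt, foreign) and not fraud)

print(false_positives(blunt_rule))   # 2
print(false_positives(scored_rule))  # 0
```

Both rules catch the fraudulent transaction here, but only the blunt one punishes legitimate customers along the way; modern platforms push this idea much further by learning the scoring from historical data.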



<p>Within organizations, AI is eliminating repetitive tasks and enabling creativity. Business leaders should be asking, how much drudgery can AI push out of the system so my employees can do more creative work?</p>



<h3 class="wp-block-heading">Diversity and discretion</h3>



<p>What lessons can business leaders draw from all this? One is to tackle the bias issue — the danger that flawed algorithms make decisions unfairly, disadvantaging consumers.&nbsp;</p>



<p>The first step is to understand that the bias isn&#8217;t in the AI but in the data, whether it&#8217;s data used initially to train the algorithm or data entered to generate insights. The best way to minimize bias is through diversity — finding and using the widest possible range of datasets. Not just quantity but variety.&nbsp;</p>
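<p>A small, hypothetical sketch of such a data audit: before training, measure how each group is represented in the sample, since a skewed sample is one common source of the bias described above. Records, group labels and the threshold are invented.</p>

```python
# Audit group representation in a training set before fitting a model.
from collections import Counter

# Invented training records with a demographic group label.
training_records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "B", "label": 0},
]

counts = Counter(r["group"] for r in training_records)
total = len(training_records)
shares = {g: n / total for g, n in counts.items()}
print(shares)  # {'A': 0.8, 'B': 0.2}

# Flag any group making up under a quarter of the sample.
underrepresented = [g for g, s in shares.items() if s < 0.25]
print(underrepresented)  # ['B']
```

A real audit would also compare label rates and model error rates per group, but even this crude count surfaces the kind of imbalance that diverse datasets are meant to fix.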



<p>In human interaction, we fight bias by talking to people with different perspectives, by working with people that challenge and question the status quo. The same approach needs to be taken to the data we use to inform our algorithms.</p>



<p>Different groups of business stakeholders need different talking points when it comes to AI. Analysts and investors may be impressed with tales of tech capabilities, but customers just want to hear about better services or products, and that their data is being strongly protected and ethically used.&nbsp;</p>



<p>We&#8217;re more likely to build trust by flat out saying, &#8220;We&#8217;re not going to compromise your data.&#8221;&nbsp;</p>



<p>Business leaders need to understand AI&#8217;s limitations. AI is a tool, not a product, a strategy or a business model. Powerful it may be, but AI is only assisting us in being what we&#8217;ve always strived to be: smarter, faster, more efficient and, hopefully, more trustworthy.</p>
<p>The post <a href="https://www.aiuniverse.xyz/why-should-anyone-trust-ai/">Why should anyone trust AI?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/why-should-anyone-trust-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The impact of Covid-19 on Big Data</title>
		<link>https://www.aiuniverse.xyz/the-impact-of-covid-19-on-big-data/</link>
					<comments>https://www.aiuniverse.xyz/the-impact-of-covid-19-on-big-data/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 05 Jun 2020 07:30:21 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[COVID-19]]></category>
		<category><![CDATA[EVOLUTION]]></category>
		<category><![CDATA[Pandemic]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9295</guid>

					<description><![CDATA[<p>Source: itproportal.com Big Data has been touted as a potential panacea to the global pandemic, Covid-19. But the technology needs to evolve to meet the demands of <a class="read-more-link" href="https://www.aiuniverse.xyz/the-impact-of-covid-19-on-big-data/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-impact-of-covid-19-on-big-data/">The impact of Covid-19 on Big Data</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: itproportal.com</p>



<p>Big Data has been touted as a potential panacea for the global pandemic, Covid-19. But the technology needs to evolve to meet the demands of this crisis.</p>



<p>Big data is unstructured, arriving with tremendous volume, variety and velocity from heterogeneous and inconsistent sources. And while extract-transform-load (ETL) processes are used for structuring and warehousing the data in a way that enables meaningful modelling and analysis, tools such as Spark and Hadoop require specialist engineers to manually tune various aspects of the pipeline &#8211; a slow and costly process.</p>
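<p>A minimal, self-contained sketch of the ETL pattern described above &#8211; plain Python rather than Spark or Hadoop, with invented field names and values: records from two differently-shaped sources are normalised into one schema before loading.</p>

```python
# Extract-transform-load in miniature: two inconsistent sources are
# mapped onto one unified schema before analysis.
import csv
import io
import json

# Extract: the same kind of event, shaped differently per source.
source_a = '[{"user": "alice", "amountCents": 1250}]'   # JSON API
source_b = "user_name,amount_eur\nbob,7.50\n"           # CSV export

def transform_a(raw):
    for rec in json.loads(raw):
        yield {"user": rec["user"], "amount": rec["amountCents"] / 100}

def transform_b(raw):
    for row in csv.DictReader(io.StringIO(raw)):
        yield {"user": row["user_name"], "amount": float(row["amount_eur"])}

# Load: a real pipeline would write to a warehouse table;
# here we just collect the unified records.
warehouse = list(transform_a(source_a)) + list(transform_b(source_b))
print(warehouse)
# [{'user': 'alice', 'amount': 12.5}, {'user': 'bob', 'amount': 7.5}]
```

At big data scale, each transform runs in parallel across many machines and the schema mapping itself is the part that demands the specialist tuning the article mentions.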



<p>Moreover, solving the problem of modelling and analysis through ETL pipelines requires the use of data science, machine learning and scientific computing, which are extremely performance intensive. The solutions typically revolve around supercomputing or high-performance computing (HPC) approaches.</p>



<h4 class="wp-block-heading" id="big-data-and-hpc-in-the-age-of-cloud-computing">Big data and HPC in the age of cloud computing</h4>



<p>The first generation of cloud computing, where big data began, was about throwing cheap commodity hardware at a large data problem. The applications tended not to be very computationally intensive (processor-bound), but rather data-intensive (disk/memory-bound). Interest in making optimal use of processor and interconnect was at best a second-class concern.</p>



<p>Although the big data ecosystem has since made inroads into performance-based computing, limitations remain in the technological approach. The tools tend to be Java-based and lack bare-metal performance &#8211; as well as the predictable execution that is required to make performance guarantees in a large system.</p>



<p>Approaches such as MPI were built in an era where the resources of a given supercomputer were known ahead of time, and were time-shared. The supercomputer was in-demand for a pipeline of highly tuned and specialised problems to be serviced over its lifetime. Algorithms were carefully tuned to make optimal use of the available hardware.</p>



<p>Big data technologies are designed to take a more genericised approach, not requiring careful optimisation on the hardware, but they still remain complex and require teams with specialist skills to build a specific set of algorithms at a specific scale. Scaling beyond a given implementation, or adding additional algorithmic capability, requires further reengineering and projects can take several years. The infrastructure costs become massive.</p>



<h4 class="wp-block-heading" id="rethinking-the-computing-model">Rethinking the computing model</h4>



<p>The inexorable future of computing is the cloud, and its evolutionary manifestations: edge computing, high-speed interconnect and low-latency/high-bandwidth communications. Powerful and capable hardware will be available on demand, and applications will run the gamut from big data/small compute to small data/big compute and, inevitably, big data/big compute.</p>



<p>Therefore, a more effective approach to building large-scale systems is through an accessible HPC-like technology that is designed from first principles and capable of harnessing the cloud. The cloud offers the benefit of on-demand availability and ever-improving processors and interconnects.</p>



<p>However, such a landscape requires a radical rethink in order to unlock and exploit the true power of computing. Truly harnessing the power of the cloud requires a scale-invariant model for computing, which can build algorithms and run them at an arbitrary scale, whether on the process axis (compute) or the memory axis (data).</p>



<p>The opportunity lies in building a model that allows programs to be distribution- and location-agnostic: applications that dynamically scale based on runtime demand, whether to handle a vast influx of data in real time or to crunch enormous matrices and tensors to unlock some critical insight.</p>



<p>Such a model ensures a developer can write algorithms without worrying about scaling, infrastructure or devops concerns. Just as a programmer, scientist or machine-learning expert can today build a small data/small compute model on a laptop, they will equally be able to run that model at arbitrary scale in a data centre without the impediments of team size, manual effort and time. The net result is that users ship faster, in smaller teams and at lower cost. Moreover, the need for a national supercomputer is diminished further, as engineers are able to deal with massive datasets and crunch them with the most compute-intensive algorithms on the democratised hardware of the cloud.</p>



<h4 class="wp-block-heading" id="applying-big-data-applications-to-covid-19">Applying big data applications to Covid-19</h4>



<p>The impact of Covid-19 has drawn international attention to the role technology can play in understanding the virus&#8217;s spread and impact, and the mitigating steps we can take.</p>



<p>There are currently a number of models and simulations being used to address the impact of virus transmission, whether it is the spread from person to person, how the virus transmits within an individual, or a combination of the two. However, real-time simulation, and even non-real-time but massive simulation, is an incredibly complicated compute problem. The big data ecosystem is not remotely equal to the task. The solution requires not just a supercomputing approach, but one that also solves the dynamic scalability problem &#8211; which is not the province of supercomputers.</p>
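<p>To see why these simulations get expensive, consider the simplest compartmental model of person-to-person spread, the classic SIR model, sketched here as a toy discrete-time version. Even this minimal model must be run across huge populations, long time horizons and wide parameter sweeps before it says anything useful, and richer within-host models multiply the cost:</p>

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    # One discrete-time step of the SIR compartmental model.
    # s, i, r are population fractions; beta is the transmission rate,
    # gamma the recovery rate (values here are illustrative).
    new_infections = beta * s * i
    new_recoveries = gamma * i
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def simulate(days, s=0.99, i=0.01, r=0.0):
    for _ in range(days):
        s, i, r = sir_step(s, i, r)
    return s, i, r

s, i, r = simulate(160)
# The three compartments always sum to 1: people only move between them.
assert abs(s + i + r - 1.0) < 1e-9
```

<p>Scaling this from three aggregate compartments to agent-level simulation of millions of individuals is exactly the big data/big compute combination the article describes.</p>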



<p>It requires a platform that is both big data capable and big compute capable. It must leverage the cloud to scale dynamically, using only the resources it needs at any given instant, yet able to marshal all the resources it needs when demand spikes. The development of these technologies is now being expedited, as building the infrastructure for accurate models that combine vast data sets with the physiology and genomics of individuals has become a global priority.</p>



<p>In turn, the technology will usher in an era where drug therapies are optimised for the individual. A personalised approach to healthcare will enable a rigorously scientific approach not just to the eradication of illness but to the optimisation of our wellbeing and happiness. Although we should wait to see the impact of these developments before racing to conclusions, as we track our lives and health with richer data than ever before, we will discover things about health, wellbeing and longevity that seem inconceivable today.</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-impact-of-covid-19-on-big-data/">The impact of Covid-19 on Big Data</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-impact-of-covid-19-on-big-data/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Ways to get around being data mined</title>
		<link>https://www.aiuniverse.xyz/ways-to-get-around-being-data-mined/</link>
					<comments>https://www.aiuniverse.xyz/ways-to-get-around-being-data-mined/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 16 May 2020 05:48:59 +0000</pubDate>
				<category><![CDATA[Data Mining]]></category>
		<category><![CDATA[COVID-19]]></category>
		<category><![CDATA[data mining]]></category>
		<category><![CDATA[Pandemic]]></category>
		<category><![CDATA[software]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8807</guid>

					<description><![CDATA[<p>Source: In the wake of the Covid-19 pandemic, consumers and businesses are relying more on online services than ever before. The UAE has lifted its ban on <a class="read-more-link" href="https://www.aiuniverse.xyz/ways-to-get-around-being-data-mined/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ways-to-get-around-being-data-mined/">Ways to get around being data mined</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: </p>



<p>In the wake of the Covid-19 pandemic, consumers and businesses are relying more on online services than ever before.</p>



<p>The UAE has lifted its ban on certain VoIP apps to enable students to attend online classes and help residents work remotely from their homes.</p>



<p>Be it official, academic or for entertainment, an unprecedented number of people are relying on the internet and various apps throughout the day. Given this volume and the emergence of heavy users, software companies are gathering and selling private data at an alarming rate.</p>



<p>Advertisers seek to target consumers with specific traits and interests. Surreptitious software — riding on the deception of free products — relentlessly monitors your actions, clicks and conversations, with the primary motive of uncovering your personal habits and interests so that it can drop you into a “marketable and actionable segment” that is then packaged up and sold to advertisers.</p>



<h4 class="wp-block-heading">Have to live with it for now</h4>



<p>Data mining is inescapable, whether we are wary of it or not. The Covid-19 situation has caused a sudden surge in the usage of online video and audio conferencing software. Their privacy practices have gained attention since they are now being used on a daily basis. Some like Zoom claim to be end-to-end encrypted, but a closer look may tell us otherwise.</p>



<p>Running out of choices, we are forced to use them to share our business and personal details, which are then mined and used for ad targeting.</p>



<p>Businesses are also adept at pulling in data from nearly every nook and cranny. The most obvious place is from consumer activity on their websites.</p>



<h4 class="wp-block-heading">But an invasion it is</h4>



<p>Now, we do need to concede that software does get better when it can observe its user, thereby doing something smarter for the user within a given context. Spellcheckers work this way. Their sole intention is to uncover errors and offer you the chance to correct them.</p>
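<p>The spellchecker pattern the article describes — observe the text, flag errors, suggest corrections, and do nothing else with what it saw — can be sketched minimally (a hypothetical example, not any real product's implementation):</p>

```python
from difflib import get_close_matches

DICTIONARY = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def check(text, dictionary=DICTIONARY):
    # Flag words not in the dictionary and suggest the closest match.
    # Everything stays local: nothing about the user's text leaves this
    # function -- the "implicit contract" discussed in the article.
    report = {}
    for word in text.lower().split():
        if word not in dictionary:
            matches = get_close_matches(word, dictionary, n=1)
            report[word] = matches[0] if matches else None
    return report

print(check("the quikc brown fx"))  # flags "quikc" and "fx" only
```

<p>The point is architectural: the user data is consumed entirely in service of the stated feature. The privacy problem begins when the same observations are also shipped off to build an advertising profile.</p>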



<p>The implicit contract with the user is that all user information is restricted for this explicit purpose. This allows users to trust their spellcheckers. But this trust cannot hold when there are powerful business incentives to channel user information into consumer characterisations, sold to the highest bidder. However this might be spun, make no mistake: it becomes a flagrant invasion of your privacy.</p>



<p>That’s one reason why we have never followed an ad-model, even in our free products. The idea is that if businesses like it enough, they will move to the paid version as they grow.</p>



<h4 class="wp-block-heading">Check on compliance</h4>



<p>However, using a free product doesn&#8217;t necessarily mean your privacy is being invaded. Many software providers have made the paid versions of their software free to help businesses tide over the current crisis. The key question is whether the product is monetised through ads.</p>



<p>If the company&#8217;s revenue depends on ads, it has every incentive to mine the data, and it will. When choosing software to take your business remote, take a close look at the vendor&#8217;s security and privacy practices. A good place to start is to check whether they are GDPR-compliant.</p>



<p>In the last few years, the conversation around privacy has become mainstream, with governments around the world taking cognisance of the issue and implementing laws to protect consumers. Now, at a time when businesses are forced to adopt third-party tools in order to maintain business continuity, it becomes even more important to take privacy into account and not make hasty decisions that may have a long-lasting impact.</p>
<p>The post <a href="https://www.aiuniverse.xyz/ways-to-get-around-being-data-mined/">Ways to get around being data mined</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ways-to-get-around-being-data-mined/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
