<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>EVOLUTION Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/evolution/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/evolution/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Mon, 07 Jun 2021 05:50:55 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>The Evolution of A.I.</title>
		<link>https://www.aiuniverse.xyz/the-evolution-of-a-i/</link>
					<comments>https://www.aiuniverse.xyz/the-evolution-of-a-i/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 07 Jun 2021 05:50:54 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[A.I.]]></category>
		<category><![CDATA[EVOLUTION]]></category>
		<category><![CDATA[milestones]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14065</guid>

					<description><![CDATA[<p>Source &#8211; https://goodmenproject.com/ This first blog post will focus on how Artificial Intelligence (AI) evolved throughout the years, I will mention and describe some of the milestones <a class="read-more-link" href="https://www.aiuniverse.xyz/the-evolution-of-a-i/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-evolution-of-a-i/">The Evolution of A.I.</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://goodmenproject.com/</p>



<p>This first blog post will focus on how Artificial Intelligence (AI) has evolved over the years. I will mention and describe some of the milestones of this evolution and analyse how AI became part of our lives and how it will become even more important in the future. Digital technology has been one of the fastest-growing technologies in the history of mankind: today our phones have hundreds of gigabytes of storage and process data faster than a computer did a few decades ago, and the only other branch of technology that has developed this quickly is weaponry. It is not rare to hear how computers were once ‘as big as a room’, but one of the most striking examples of the leap in digital technology, specifically information technology, dates back to 1969, when a guidance computer with only 14 KB of RAM steered the Apollo mission and landed its crew on the Moon. To put that into perspective, the blog post I am writing now will occupy more than 14 KB of storage; nowadays we are used to talking about megabytes, gigabytes or even terabytes, and using kilobytes for everyday purposes would be like weighing a person in grams. Like computing hardware, AI has seen a similarly rapid technological progression. Before discussing the milestones that paved the way for modern AI, it is important to understand what AI is. A machine has artificial intelligence if it can interpret data, potentially learn from that data, and use that knowledge to adapt and achieve specific goals or, as I saw in a YouTube comment, “AI is math that mimics what a human would do” (Singh, 2019, 100 likes).</p>



<p>The difference between AI and other machines is, in fact, the intelligence part: our laptops are incredibly good at storing and processing data, but they do not know much about interpreting that data, let alone learning from it. The story of AI begins in 1951, when the first AI programs were written. They were very simple programs: one played checkers and the other played chess, and the latter became the more important of the two, since chess developed into the new ‘Moon landing’ of the AI race. The next big milestone would be reached when a machine could beat a human at chess. This happened in 1956, when the MANIAC I computer at the Los Alamos Scientific Laboratory defeated a novice chess player &#8211; very boring. So the bar was moved upwards: the new milestone was beating a world-class chess player, a Grandmaster, at his own game; humans in general were simply too easy to beat.</p>



<p>Meanwhile, the first AI program that could understand human language was created in 1964, and the following year the first chatbot, ELIZA, was invented. Fast forward nine years: in 1974 the first autonomous vehicle was created, and fifteen years later came the first autonomous vehicle driven by a neural network, a technique that performs machine learning by mimicking the network of neurons in our brain. This is all very interesting, but back to the crucial milestone in AI: beating a Grandmaster at chess. For that the world had to wait until 1996, when Deep Blue, an AI created by IBM, won a game against Garry Kasparov, the reigning world chess champion at the time. That version of Deep Blue calculated 100 million positions per second, which was still not enough to win the match: Kasparov won three of the following games and drew the other two. So IBM went back to the drawing board and made a second attempt a year later. The 1997 version of Deep Blue could calculate 200 million positions per second, twice as many as the year before. This version was built to win a match &#8211; a single game was not enough anymore &#8211; and so it did. After a win, a loss and three draws, Deep Blue beat Kasparov in the deciding game 6 and took the match.</p>



<p>From there on, development grew exponentially: the more AI develops, the faster it develops. In 2002 iRobot introduced the Roomba, an autonomous vacuum cleaner able to detect and avoid obstacles, map the environment it is cleaning for better efficiency, and return to its charging dock once it is done. Two years later the U.S. military started to take autonomous vehicles seriously, and the Defense Advanced Research Projects Agency (DARPA) created the DARPA Grand Challenge, a competition for autonomous vehicles. It later evolved into the Urban Challenge, created in 2007, in which autonomous vehicles are required to respect traffic laws and operate in an urban environment. In 2009 Google began developing its own autonomous vehicle, and in 2011 virtual assistants like Siri and Cortana were implemented in various devices. Now, in 2020, Tesla already has a car that can drive itself, YouTube uses an AI to suggest videos based on our preferences, Google ads are customised to our tastes, and DARPA has a program that can defeat fighter-jet pilots in a simulated dogfight. AI is already a big part of our lives: Amazon suggests items to buy based on what is in our cart, Gmail filters e-mail by analysing its content (they have to work on that a bit more), Facebook is able to detect suicidal thinking patterns by analysing posts, Google knows what you are searching for before you finish typing, and Grammarly and Microsoft Word help us correct and improve our writing based on the type of text (academic or informal), the audience (knowledgeable or general) and the focus (business or technical).</p>



<p>In the upcoming years, AI will acquire even more importance in our lives: the number of autonomous cars on the roads will grow exponentially, the number of times Siri fails to understand what we are saying will diminish as its ability to understand human language improves, and maybe even our local bar will have a robot barman that prepares cocktails or suggests new beers based on our tastes. All of this will happen within a couple of decades at most; AI technology is growing faster than ever, and this growth will only get quicker.</p>



<p>What do you think about the future of AI? In your opinion, is the future of coexistence between humans and AI a bright one or an uncertain one?</p>



<p>The post <a href="https://www.aiuniverse.xyz/the-evolution-of-a-i/">The Evolution of A.I.</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-evolution-of-a-i/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deep learning&#8217;s role in the evolution of machine learning</title>
		<link>https://www.aiuniverse.xyz/deep-learnings-role-in-the-evolution-of-machine-learning/</link>
					<comments>https://www.aiuniverse.xyz/deep-learnings-role-in-the-evolution-of-machine-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 02 Jul 2020 06:37:01 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[EVOLUTION]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9915</guid>

					<description><![CDATA[<p>Source: Machine learning had a rich history long before deep learning reached fever pitch. Researchers and vendors were using machine learning algorithms to develop a variety of <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learnings-role-in-the-evolution-of-machine-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learnings-role-in-the-evolution-of-machine-learning/">Deep learning&#8217;s role in the evolution of machine learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: </p>



<p>Machine learning had a rich history long before deep learning reached fever pitch. Researchers and vendors were using machine learning algorithms to develop a variety of models for improving statistics, recognizing speech, predicting risk and other applications.</p>



<p>While many of the machine learning algorithms developed over the decades are still in use today, deep learning &#8212; a form of machine learning based on multilayered neural networks &#8212; catalyzed a renewed interest in AI and inspired the development of better tools, processes and infrastructure for all types of machine learning.</p>



<p>Here, we trace the significance of deep learning in the evolution of machine learning, as interpreted by people active in the field today.</p>



<h3 class="wp-block-heading">The birth of machine learning</h3>



<p>The story of machine learning starts in 1943 when neurophysiologist Warren McCulloch and mathematician Walter Pitts introduced a mathematical model of a neural network. The field gathered steam in 1956 at a summer conference on the campus of Dartmouth College. There, 10 researchers came together for six weeks to lay the ground for a new field that involved neural networks, automata theory and symbolic reasoning.</p>



<p>The distinguished group, many of whom would go on to make seminal contributions to this new field, gave it the name <em>artificial intelligence </em>to distinguish it from cybernetics, a competing area of research focused on control systems. In some ways these two fields are now starting to converge with the growth of IoT, but that is a topic for another day.</p>



<p>Early neural networks were not particularly useful &#8212; nor deep. Perceptrons, the single-layered neural networks in use then, could only learn linearly separable patterns. Interest in them waned after Marvin Minsky and Seymour Papert published the book Perceptrons in 1969, highlighting the limitations of existing neural network algorithms and causing the emphasis in AI research to shift.</p>
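


<p>To make that limitation concrete, here is a minimal illustrative sketch in Python with NumPy (my own example, not code from the era): the classic perceptron learning rule converges on a linearly separable problem such as AND, but no single-layer perceptron can represent XOR, which is exactly the limitation Minsky and Papert highlighted.</p>



<pre class="wp-block-code"><code>import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Classic single-layer perceptron: step(w.x + b), updated on each mistake."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (target - pred) * xi  # perceptron learning rule
            b += lr * (target - pred)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])  # AND is linearly separable, so this converges;
                                # XOR (0, 1, 1, 0) never can with a single layer.
w, b = train_perceptron(X, y_and)
print([1 if xi @ w + b > 0 else 0 for xi in X])  # [0, 0, 0, 1]
</code></pre>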



<p>&#8220;There was a massive focus on symbolic systems through the &#8217;70s, perhaps because of the idea that perceptrons were limited in what they could learn,&#8221; said Sanmay Das, associate professor of computer science and engineering at Washington University in St. Louis and chair of the Association for Computing Machinery&#8217;s special interest group on AI.</p>



<p>The 1973 publication of Pattern Classification and Scene Analysis by Richard Duda and Peter Hart introduced other types of machine learning algorithms, reinforcing the shift away from neural nets. A decade later, Machine Learning: An Artificial Intelligence Approach by Ryszard S. Michalski, Jaime G. Carbonell and Tom M. Mitchell further defined machine learning as a domain driven largely by the symbolic approach.</p>



<p>&#8220;That catalyzed a whole field of more symbolic approaches to [machine learning] that helped frame the field. This led to many Ph.D. theses, new journals in machine learning, a new academic conference, and even helped to create new laboratories like the NASA Ames AI Research branch, where I was deputy chief in the 1990s,&#8221; said Monte Zweben, CEO of Splice Machine, a scale-out SQL platform.</p>



<p>In the 1990s, the evolution of machine learning made a turn. Driven by the rise of the internet and increase in the availability of usable data, the field began to shift from a knowledge-driven approach to a data-driven approach, paving the way for the machine learning models that we see today.</p>



<h3 class="wp-block-heading">The beginnings of deep learning</h3>



<p>The turn toward data-driven machine learning in the 1990s was built on research done by Geoffrey Hinton at the University of Toronto in the mid-1980s. Hinton and his team demonstrated the ability to use backpropagation to build deeper neural networks.</p>



<p>&#8220;This was a major breakthrough enabling new kinds of pattern recognition that were previously not feasible with neural nets,&#8221; Zweben said. This added new layers to the networks and a way to strengthen or weaken connections back across many layers in the network, leading to the term deep learning.</p>
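


<p>As a rough illustration of what backpropagation adds (a toy sketch with made-up hyperparameters, not code from Hinton&#8217;s work), a network with one hidden layer, trained by pushing the output error back through both layers, can learn the XOR function that defeats a single perceptron. With a setup this small, training usually converges but is not guaranteed to.</p>



<pre class="wp-block-code"><code>import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: not linearly separable

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# one hidden layer of 4 units between the 2 inputs and the single output
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
lr = 0.5

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the output error back through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
</code></pre>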



<p>Although possible in a lab setting, deep learning did not immediately find its way into practical applications, and progress stalled.</p>



<p>&#8220;Through the &#8217;90s and &#8217;00s, a joke used to be that &#8216;neural networks are the second-best learning algorithm for any problem,&#8217;&#8221; Washington University&#8217;s Das said.</p>



<p>Meanwhile, commercial interest in AI was starting to wane because the hype around developing an AI on par with human intelligence had gotten ahead of results, leading to an AI winter, which lasted through the 1980s. What did gain momentum was a type of machine learning using kernel methods and decision trees that enabled practical commercial applications.</p>



<p>Still, the field of deep learning was not completely in retreat. In addition to the ascendancy of the internet and increase in available data, another factor proved to be an accelerant for neural nets, according to Zweben: namely, distributed computing.</p>



<h3 class="wp-block-heading">Democratization and the rise of deep learning</h3>



<p>Machine learning requires a lot of compute. In the early days, researchers had to keep their problems small or gain access to expensive supercomputers, Zweben said. The democratization of distributed computing in the early 2000s enabled researchers to run calculations across clusters of relatively low-cost commodity computers.</p>



<p>&#8220;Now, it is relatively cheap and easy to experiment with hundreds of models to find the best combination of data features, parameters and algorithms,&#8221; Zweben said. The industry is pushing this democratization even further with practices and associated tools for machine learning operations that bring DevOps principles to machine learning deployment, he added.</p>



<p>Machine learning is also only as good as the data it is trained on, and if data sets are small, it is harder for the models to infer patterns. As the data created by mobile, social media, IoT and digital customer interactions grew, it provided the training material deep learning techniques needed to mature.</p>



<p>By 2012, deep learning attained star status after Hinton&#8217;s team won ImageNet, a popular data science challenge, for their work on classifying images using neural networks. Things really accelerated after Google subsequently demonstrated an approach to scaling up deep learning across clusters of distributed computers.</p>



<p>&#8220;The last decade has been the decade of neural networks, largely because of the confluence of the data and computational power necessary for good training and the adaptation of algorithms and architectures necessary to make things work,&#8221; Das said.</p>



<h3 class="wp-block-heading">5 ways deep learning is changing the field of machine learning</h3>



<p>Even when deep neural networks are not used directly, they have indirectly driven &#8212; and continue to drive &#8212; fundamental changes in the field of machine learning, including the following:</p>



<h4 class="wp-block-heading">Framing a problem</h4>



<p>Deep learning&#8217;s predictive power has inspired data scientists to think about different ways of framing problems that come up in other types of machine learning.</p>



<p>&#8220;There are many problems that we didn&#8217;t think of as prediction problems that people have reformulated as prediction problems &#8212; language, vision, etc. &#8212; and many of the gains in those tasks have been possible because of this reformulation,&#8221; said Nicholas Mattei, assistant professor of computer science at Tulane University and vice chair of the Association for Computing Machinery&#8217;s special interest group on AI.<br><br>In language processing, for example, a lot of the focus has moved toward predicting what comes next in the text. In computer vision as well, many problems have been reformulated so that, instead of trying to understand geometry, the algorithms are predicting labels of different parts of an image.</p>
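


<p>A toy example of that reframing (illustrative only): a bigram language model turns &#8220;understand this text&#8221; into &#8220;predict the next word given the previous one&#8221;.</p>



<pre class="wp-block-code"><code>from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Frame language as prediction: count which word follows which in the corpus.
next_word = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = next_word.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' (the most common follower of 'the' here)
print(predict_next("cat"))  # 'sat' ('sat' and 'ate' are tied; first seen wins)
</code></pre>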



<h4 class="wp-block-heading">Automation of human insight</h4>



<p>The power of big data and deep learning is changing how models are built. Human analysis and insights are being replaced by raw compute power.<br><br>&#8220;Now, it seems that a lot of the time we have substituted big databases, lots of GPUs, and lots and lots of machine time to replace the deep problem introspection needed to craft features for more classic machine learning methods, such as SVM [support vector machine] and Bayes,&#8221; Mattei said, referring to the Bayesian networks used for modeling the probabilities between observations and outcomes.<br><br>The art of crafting a machine learning problem has been taken over by advanced algorithms and the millions of hours of CPU time baked into pretrained models so data scientists can focus on other projects or spend more time on customizing models.</p>
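


<p>For contrast, here is a sketch of the &#8220;classic&#8221; workflow Mattei describes, where a human crafts the features before an SVM ever sees the data (scikit-learn, with hypothetical messages and features chosen purely for illustration).</p>



<pre class="wp-block-code"><code>import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hand-crafted features for a toy spam classifier: a human decides what matters
# (message length, number of '!' characters, share of digit characters).
def featurize(msg):
    return [len(msg), msg.count("!"), sum(c.isdigit() for c in msg) / max(len(msg), 1)]

messages = ["WIN $$$ NOW!!!", "call 0800 111 222 now!!",
            "see you at lunch", "minutes from today's meeting"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

X = np.array([featurize(m) for m in messages])
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, labels)

print(model.predict([featurize("FREE prize!!! reply 555")]))  # most likely [1]
</code></pre>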



<h4 class="wp-block-heading">More efficient use of data</h4>



<p>Deep learning is also helping data scientists solve problems with smaller data sets and to solve problems in cases where the data has not been labeled.</p>



<p>&#8220;One of the most relevant developments in recent times has been the improved use of data, whether in the form of self-supervised learning, improved data augmentation, generalization of pretraining tasks or contrastive learning,&#8221; said Juan José López Murphy, AI and big data tech director lead at Globant, an IT consultancy.<br><br>These techniques reduce the need for manually tagged and processed data. This is enabling researchers to build large models that can capture complex relationships representing the nature of the data and not just the relationships representing the task at hand. López Murphy is starting to see transfer learning being adopted as a baseline approach, where researchers can start with a pretrained model that only requires a small amount of customization to provide good performance on many common tasks.</p>
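


<p>A minimal sketch of that transfer-learning baseline, assuming PyTorch and a recent torchvision are available (the five-class task and the dummy training batch are hypothetical): start from an ImageNet-pretrained ResNet, freeze its backbone, and train only a small replacement head.</p>



<pre class="wp-block-code"><code>import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on ImageNet and keep its learned features.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False  # freeze the pretrained backbone

# Replace only the final layer for a hypothetical 5-class task of our own.
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 images.
images, targets = torch.randn(8, 3, 224, 224), torch.randint(0, 5, (8,))
optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
</code></pre>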



<h4 class="wp-block-heading">Better context</h4>



<p>There are specific fields where deep learning provides a lot of value: image, speech and natural language processing, for example, as well as time series forecasting.</p>



<p>&#8220;The broader field of machine learning is enhanced by deep learning and its ability to bring context to intelligence. Deep learning also improves [machine learning&#8217;s] ability to learn nonlinear relationships and manage dimensionality with systems like autoencoders,&#8221; said Luke Taylor, founder and COO at TrafficGuard, an ad fraud protection service.<br><br>For example, deep learning can find more efficient ways to auto encode the raw text of characters and words into vectors representing the similarity and differences of words, which can improve the efficiency of the machine learning algorithms used to process it. Deep learning algorithms that can recognize people in pictures make it easier to use other algorithms that find associations between people.<br><br>More recently, there have been significant jumps using deep learning to improve the use of image, text and speech processing through common interfaces. People are accustomed to speaking to virtual assistants on their smartphones and using facial recognition to unlock devices and identify friends in social media.<br><br>&#8220;This broader adoption creates more data, enables more machine learning refinement and increases the utility of machine learning even further, pushing even further adoption of this tech into people&#8217;s lives,&#8221; Taylor said.</p>
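


<p>As a sketch of the autoencoder idea mentioned above (illustrative PyTorch code with arbitrary sizes, not anyone&#8217;s production system): the network is trained to reproduce its input through a narrow bottleneck, and that bottleneck becomes a compact vector representation other algorithms can reuse.</p>



<pre class="wp-block-code"><code>import torch
import torch.nn as nn

# A tiny autoencoder: squeeze 64-dimensional inputs through an 8-dimensional
# bottleneck, then reconstruct them from that compact code.
class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 8))
        self.decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 64))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.randn(256, 64)  # stand-in for real feature vectors

for _ in range(100):  # train by minimising reconstruction error
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(data), data)
    loss.backward()
    optimizer.step()

codes = model.encoder(data)  # 8-dimensional vectors other algorithms can use
print(codes.shape)           # torch.Size([256, 8])
</code></pre>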



<h4 class="wp-block-heading">Democratizing the tools</h4>



<p>Early machine learning research required expensive software licenses. But deep learning pioneers began open sourcing some of the most powerful tools, which has set a precedent for all types of machine learning.</p>



<p>&#8220;Earlier, machine learning algorithms were bundled and sold under a licensed tool. But, nowadays, open source libraries are available for any type of AI applications, which makes the learning curve easy,&#8221; said Sachin Vyas, vice president of data, AI and automation products at LTI, an IT consultancy.</p>



<p>Another factor in democratizing access to machine learning tools has been the rise of Python.</p>



<p>&#8220;The wave of open source frameworks for deep learning cemented the prevalence of Python and its data ecosystem for research, development and even production,&#8221; Globant&#8217;s López Murphy said.<br><br>Many of the different commercial and free options got replaced, integrated or connected to a Python layer for widespread use. As a result, Python has become the de facto lingua franca for machine learning development.</p>



<p>Deep learning has also inspired the open source community to automate and simplify other aspects of the machine learning development lifecycle. &#8220;Thanks to things like graphical user interfaces and [automated machine learning], creating working machine learning models is no longer limited to Ph.D. data scientists,&#8221; Carmen Fontana, IEEE member and cloud and emerging tech practice lead at Centric Consulting, said.</p>



<h4 class="wp-block-heading">Conclusion</h4>



<p>For machine learning to keep evolving, enterprises will need to find a balance between developing better applications and respecting privacy.</p>



<p>Data scientists will need to be more proactive in understanding where their data comes from and the biases that may inadvertently be baked into it, as well as develop algorithms that are transparent and interpretable. They also need to keep pace with new machine learning protocols and the different ways these can be woven together with various data sources to improve applications and decisions.</p>



<p>&#8220;Machine learning provides more innovative applications for end users, but unless we&#8217;re choosing the right data sets and advancing deep learning protocols, machine learning will never make the transition from computing a few results to providing actual intelligence,&#8221; said Justin Richie, director of data science at Nerdery, an IT consultancy.</p>



<p>&#8220;It will be interesting to see how this plays out in different industries and if this progress will continue even as data privacy becomes more stringent,&#8221; Richie said.</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learnings-role-in-the-evolution-of-machine-learning/">Deep learning&#8217;s role in the evolution of machine learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learnings-role-in-the-evolution-of-machine-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The impact of Covid-19 on Big Data</title>
		<link>https://www.aiuniverse.xyz/the-impact-of-covid-19-on-big-data/</link>
					<comments>https://www.aiuniverse.xyz/the-impact-of-covid-19-on-big-data/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 05 Jun 2020 07:30:21 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[COVID-19]]></category>
		<category><![CDATA[EVOLUTION]]></category>
		<category><![CDATA[Pandemic]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9295</guid>

					<description><![CDATA[<p>Source: itproportal.com Big Data has been touted as a potential panacea to the global pandemic, Covid-19. But the technology needs to evolve to meet the demands of <a class="read-more-link" href="https://www.aiuniverse.xyz/the-impact-of-covid-19-on-big-data/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-impact-of-covid-19-on-big-data/">The impact of Covid-19 on Big Data</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: itproportal.com</p>



<p>Big Data has been touted as a potential panacea to the global pandemic, Covid-19. But the technology needs to evolve to meet the demands of this crisis.</p>



<p>Big data is unstructured, arriving in tremendous volume, variety and velocity from a variety of heterogeneous and inconsistent sources. And while extract transform load (ETL) processes are used for structuring and warehousing the data in a way that enables meaningful modelling and analysis, tools such as Spark and Hadoop require specialist engineers to manually tune various aspects of the pipeline &#8211; a slow and costly process.</p>
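


<p>For readers unfamiliar with what such a pipeline looks like, here is a minimal PySpark ETL sketch (the paths, columns and aggregation are hypothetical): extract raw records, transform them into an analysis-ready shape, and load the result in a warehouse-friendly format. In practice, each of these stages is where the manual tuning mentioned above comes in.</p>



<pre class="wp-block-code"><code>from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("minimal-etl").getOrCreate()

# Extract: read semi-structured source records (path and columns are hypothetical).
raw = spark.read.json("s3://example-bucket/events/*.json")

# Transform: clean, filter and aggregate into an analysis-ready shape.
daily = (raw
         .filter(F.col("status") == "ok")
         .withColumn("day", F.to_date("timestamp"))
         .groupBy("day", "country")
         .agg(F.count("*").alias("events")))

# Load: write a partitioned, columnar table that downstream models can query.
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3://example-bucket/warehouse/daily_events")
</code></pre>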



<p>Moreover, solving the problem of modelling and analysis through ETL pipelines requires the use of data science, machine learning and scientific computing, which are extremely performance-intensive. The solutions typically revolve around supercomputing or high-performance computing (HPC) approaches.</p>



<h4 class="wp-block-heading" id="big-data-and-hpc-in-the-age-of-cloud-computing">Big data and HPC in the age of cloud computing</h4>



<p>The first generation of the cloud, where big data began, was about throwing cheap commodity hardware at large data problems. The applications tended not to be very computationally intensive (processor-bound), but rather data-intensive (disk/memory-bound). Interest in making optimal use of the processor and interconnect was at best a second-class concern.</p>



<p>Although the big data ecosystem has since made inroads into performance-based computing, limitations remain in the technological approach. They tend to be Java based and lack bare metal performance &#8211; as well as the predictable execution that is required to make performance guarantees in a large system.</p>



<p>Approaches such as MPI were built in an era where the resources of a given supercomputer were known ahead of time, and were time-shared. The supercomputer was in-demand for a pipeline of highly tuned and specialised problems to be serviced over its lifetime. Algorithms were carefully tuned to make optimal use of the available hardware.</p>



<p>Big data technologies are designed to take a more genericised approach, not requiring careful optimisation on the hardware, but they still remain complex and require teams with specialist skills to build a specific set of algorithms at a specific scale. Scaling beyond a given implementation, or adding additional algorithmic capability, requires further reengineering and projects can take several years. The infrastructure costs become massive.</p>



<h4 class="wp-block-heading" id="rethinking-the-computing-model">Rethinking the computing model</h4>



<p>The inexorable future of computing is the cloud, and its evolutionary manifestations: edge computing, high-speed interconnect and low-latency/high-bandwidth communications. Powerful and capable hardware will be available on demand, and applications will run the gamut from big data/small compute to small data/big compute and, inevitably, big data/big compute.</p>



<p>Therefore, a more effective approach to building large-scale systems is through an accessible HPC-like technology that is designed from first principles and capable of harnessing the cloud. The cloud offers the benefit of on-demand availability and ever-improving processors and interconnect.</p>



<p>However, such a landscape requires a radical rethink in order to unlock and exploit the true power of computing. Truly harnessing the power of the cloud requires a scale-invariant model for computing, which can build algorithms and run them at an arbitrary scale, whether on the process axis (compute) or the memory axis (data).</p>



<p>The opportunity lies in building a model that allows programs to be distribution- and location-agnostic, with applications that scale dynamically based on runtime demand, whether to handle a vast influx of data in real time or to crunch enormous matrices and tensors to unlock some critical insight.</p>



<p>It ensures a developer can write algorithms without worrying about scaling, infrastructure or devops concerns. Just as a programmer, scientist or machine learning expert can today build a small data/small compute model on a laptop, they will be able to run that same model at arbitrary scale in a data centre without the impediments of team size, manual effort and time. The net result is that users ship faster, in smaller teams, and at lower cost. Moreover, the need for a national supercomputer diminishes further, as engineers become able to handle massive datasets and crunch them with the most compute-intensive algorithms required, all on the democratised hardware of the cloud.</p>



<h4 class="wp-block-heading" id="applying-big-data-applications-to-covid-19">Applying big data applications to Covid-19</h4>



<p>The impact of Covid-19 has drawn international attention to the role technology can play in understanding its spread, impact and the mitigating steps we can take.</p>



<p>There are currently a number of models and simulations being used to address the impact of virus transmission, whether it is the spread from person to person, how the virus transmits within an individual, or a combination of the two. However, real-time simulation, and even non-real-time but massive simulation, is an incredibly complicated compute problem. The big data ecosystem is not remotely equal to the task. The solution requires not just a supercomputing approach, but must also solve the dynamic scalability problem &#8211; which is not the province of supercomputers.</p>
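


<p>To give a sense of what even the simplest transmission model looks like, here is an illustrative SIR (susceptible-infected-recovered) sketch in Python with made-up parameters; the simulations discussed above are vastly larger, but the compute pattern of stepping a model through time is the same.</p>



<pre class="wp-block-code"><code># Minimal SIR (susceptible-infected-recovered) model with made-up parameters.
beta, gamma = 0.3, 0.1        # transmission and recovery rates per day
S, I, R = 0.999, 0.001, 0.0   # fractions of the population
history = []

for day in range(200):        # step the model forward one day at a time
    new_infections = beta * S * I
    new_recoveries = gamma * I
    S -= new_infections
    I += new_infections - new_recoveries
    R += new_recoveries
    history.append(I)

peak = max(history)
print(f"peak infected fraction: {peak:.3f} on day {history.index(peak)}")
</code></pre>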



<p>The solution requires a platform that is both big data capable and big compute capable. It must leverage the cloud to scale dynamically, using only the resources it needs at any given instant, and all the resources it needs when demand spikes. The development of these technologies is now being expedited, as building the infrastructure for accurate models that combine vast data sets with the physiology and genomics of individuals has become a global priority.</p>



<p>In turn, the technology will usher in an era where drug therapies are optimised for the individual. A personalised approach to healthcare will enable a rigorously scientific approach not just to the eradication of illness but also to optimising our wellbeing and happiness. Although we should see the impact of these developments before racing to conclusions, as we track our lives and health with richer data than ever before, we will discover things about health, wellbeing and longevity that seem inconceivable today.</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-impact-of-covid-19-on-big-data/">The impact of Covid-19 on Big Data</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-impact-of-covid-19-on-big-data/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
