<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>future technology Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/future-technology/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/future-technology/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Mon, 05 Aug 2019 12:54:34 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Computer Science Curriculums Must Emphasize Privacy Over Capability</title>
		<link>https://www.aiuniverse.xyz/computer-science-curriculums-must-emphasize-privacy-over-capability/</link>
					<comments>https://www.aiuniverse.xyz/computer-science-curriculums-must-emphasize-privacy-over-capability/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 05 Aug 2019 12:54:32 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[antithetical]]></category>
		<category><![CDATA[computer science]]></category>
		<category><![CDATA[data availability]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[digital revolution]]></category>
		<category><![CDATA[future technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4273</guid>

					<description><![CDATA[<p>Source: forbes.com The idea of privacy is in many ways antithetical to the data-driven mindset promoted by computer science curriculums over the past decade. Massive advances in <a class="read-more-link" href="https://www.aiuniverse.xyz/computer-science-curriculums-must-emphasize-privacy-over-capability/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/computer-science-curriculums-must-emphasize-privacy-over-capability/">Computer Science Curriculums Must Emphasize Privacy Over Capability</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: forbes.com</p>



<p>The idea of privacy is in many ways antithetical to the data-driven mindset promoted by computer science curriculums over the past decade. Massive advances in data availability and analytic capabilities have led to a reshaping of many curriculums toward coursework teaching how to manage, explore, assess, understand and exploit these newfound digital riches. Deep learning and other analytics courses are frequently filled beyond capacity. In contrast, the idea that programmers should give up their data riches in the name of privacy receives little attention in many curriculums.</p>



<p>Today’s computer science curriculums have centered on preparing tomorrow’s future technology leaders to harness the digital revolution. From managing “big data” to making sense of it through deep learning, coursework has heavily emphasized the positives of today’s digital deluge rather than the considerable damage it can wreak on privacy, safety and security.</p>



<p>Privacy was historically far too often lumped under the heading of cybersecurity and relegated to an afterthought. Privacy violations were largely viewed in the context of companies losing control of customer data, rather than of companies deliberately harnessing that data in privacy-invading ways.</p>



<p>A company that harvested massive amounts of data from its customers, held onto it without any form of cyber intrusion and lawfully resold that data to others was often viewed as privacy-protecting, since it successfully safeguarded the data in its hands from loss.</p>



<p>An increasing number of programs have begun to integrate some form of ethical training into their curriculums. Yet here the focus has largely been on programmers themselves, emphasizing how they should think about what they do with users’ data and how to address biases in their designs. Such courses may cover topics like AI explainability, as a way to mitigate inadvertent demographic bias, and algorithmic harm: why building an AI system that can forcibly uncover members of vulnerable communities could cause grave harm, and thus should not be pursued even if it represents a great technical achievement.</p>



<p>Some programs teach compliance with privacy laws like GDPR, but this guidance typically revolves around “minimal minimization” in which data collection is adjusted to fit the letter of the law, if not its spirit,&nbsp;utilizing the laws’ myriad loopholes and exemptions.</p>



<p>Library and information science curriculums, in contrast, have historically emphasized privacy and civil liberties concerns, especially the minimization of data collection. Unlike the digital behemoths racing to vacuum up every byte of our online behavior and interests, libraries have historically adopted precisely the opposite stance, keeping only the bare minimum of information they need and deleting data the first moment they can.</p>



<p>Libraries have historically had enormous insights into our most intimate and unfettered interests, often recording our information consumption from the first children’s books our parents checked out of the library to read to us as infants. If libraries kept this information they could build incredible personalized recommendation systems and generate lucrative revenue streams reselling that data or making it available for advertising.</p>



<p>Instead, most public libraries have practiced absolute minimization in which every data point is discarded the moment it is no longer needed. Rather than keeping a user’s entire checkout history through time, most public libraries have historically kept a list only of the items currently checked out, deleting them as soon as the materials are returned.</p>



<p>This minimization was born of necessity: libraries, especially in the pre-Internet era, were of great interest to surveillance-minded authorities.</p>



<p>Computer science curriculums, however, have not historically emphasized this idea of minimization at all costs. Quite the opposite: data hoarding has been embraced as the path to limitless riches. After all, even if you pay for a service today you are still the product, as data exhaust becomes more valuable than subscription fees.</p>



<p>Privacy naturally conflicts with capability when it comes to data analytics. The more data there is, and the higher its resolution, the more insight algorithms can yield. Thus, the more companies prioritize privacy, actively deleting everything they can and minimizing the resolution of what they do have to collect, the less capability their analytics have to offer.</p>



<p>This represents a philosophical tradeoff. On the one hand, computer science students are taught to collect every datapoint they can, at the highest resolution they can, and to hoard it indefinitely. This extends all the way to diagnostic logging, which often becomes an all-or-nothing affair and has led even major companies to serious security breaches. On the other hand, disciplines like library and information science emphasize privacy over capability, getting rid of data the moment it is safe to do so.</p>



<p>When it comes to government surveillance, data breaches, ethically questionable research, insider threats and other privacy issues, the less data companies keep about their users, the less information there is that can be misused and the lower their storage and analytic costs. If a company can make do with a terabyte of aggregated data rather than a petabyte of individual-level high resolution data,&nbsp;it can achieve considerable cost savings and move many of&nbsp;its batch analyses to real-time.</p>



<p>In the end, rather than enshrining mottos like “data is the new oil” into the vocabulary of tomorrow’s future technology leaders, perhaps we should emphasize “privacy first” and focus on how companies can absolutely minimize the data they collect to ensure a more privacy-protecting and less Orwellian future.</p>
<p>The post <a href="https://www.aiuniverse.xyz/computer-science-curriculums-must-emphasize-privacy-over-capability/">Computer Science Curriculums Must Emphasize Privacy Over Capability</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/computer-science-curriculums-must-emphasize-privacy-over-capability/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial Intelligence: The Good, The Bad, and The Unfathomable</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-the-good-the-bad-and-the-unfathomable/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-the-good-the-bad-and-the-unfathomable/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 19 Sep 2017 06:49:36 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[future technology]]></category>
		<category><![CDATA[human race]]></category>
		<category><![CDATA[Machine intelligence]]></category>
		<category><![CDATA[technic transformations]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1191</guid>

					<description><![CDATA[<p>Source &#8211; shift.newco.co No stranger to controversy, a Tony Stark reincarnate — Elon Musk — came out with an ominous prediction recently. “Forget North Korea, AI will start World War III” read the CNN headline. <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-the-good-the-bad-and-the-unfathomable/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-the-good-the-bad-and-the-unfathomable/">Artificial Intelligence: The Good, The Bad, and The Unfathomable</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>shift.newco.co</strong></p>
<p id="2b35" class="graf graf--p graf-after--figure">No stranger to controversy, a Tony Stark reincarnate — Elon Musk — came out with an ominous prediction recently. “<em class="markup--em markup--p-em">Forget North Korea, AI will start World War III</em>” read the CNN headline. Elon Musk is not alone in fearing unintended consequences of the race to develop algorithms that we may or may not be able to control. Once a new technology is introduced, it can’t be uninvented, as Sam Harris points out in his viral TED talk. He argues that it will be impossible to halt the pace of progress, even if humankind could collectively make such a decision.</p>
<blockquote id="ba41" class="graf graf--pullquote graf-after--p"><p>The critics and cheerleaders of AI alike agree on one thing: intelligence explosion will change the world beyond recognition.</p></blockquote>
<figure id="6022" class="graf graf--figure graf-after--pullquote">
<img decoding="async" src="https://cdn-images-1.medium.com/max/1600/1*-ZS1LiSVJsaG70w7g4tzAA.png" alt="Elon Musk warning about the dangers of AI on Twitter" width="900" height="699" />
<figcaption class="imageCaption">Elon Musk warning about the dangers of AI on Twitter</figcaption></figure>
<p id="ad35" class="graf graf--p graf-after--figure">While Bill Gates, Stephen Hawking and countless others are broadly on the same page with Musk and Harris, some of the leading thinkers recognize that AI, like any other technology, is value-neutral. Gunpowder, after all, was first used in fireworks.</p>
<p id="9bc8" class="graf graf--p graf-after--p">Ray Kurzweil argues that “<em class="markup--em markup--p-em">AI will be the pivotal technology in achieving [human] progress. We have a moral imperative to realize this promise while controlling the peril</em>.” And, in his view, humanity has ample time to develop ethical guidelines and regulatory standards.</p>
<blockquote id="4c74" class="graf graf--pullquote graf-after--p"><p><strong class="markup--strong markup--pullquote-strong"><em class="markup--em markup--pullquote-em">Making computers part of us, part of our bodies, is going to change our capabilities so much that one day, we will see our current selves as goldfish</em>.</strong></p></blockquote>
<p id="fe6d" class="graf graf--p graf-after--pullquote">As the world edges towards singularity, future technology is bound to enhance the human experience in some way, and it is up to us to make sure it is for the better.</p>
<p id="1bf2" class="graf graf--p graf-after--p">The critics and cheerleaders of AI alike agree on one thing: intelligence explosion will change the world beyond recognition. When thinking about the future, I found the metaphor offered by Vernor Vinge, on the Invisibilia podcast, especially stark: “<em class="markup--em markup--p-em">making computers part of us, part of our bodies, is going to change our capabilities so much that one day, we will see our current selves as goldfish</em>.” If this is the true extent of our expected AI-and-Tech-powered evolution, our contemporary norms and conventions go straight out of the window.</p>
<blockquote id="80f1" class="graf graf--pullquote graf-after--p"><p>Putting the <em class="markup--em markup--pullquote-em">war</em> and <em class="markup--em markup--pullquote-em">AI</em> in the same sentence, we anthropomorphize the latter.</p></blockquote>
<p id="9c6b" class="graf graf--p graf-after--pullquote">Even if these predictions turn out to be duds, shouldn’t we at least attempt to apply the prism of exponential technologies to review our basic assumptions, question fundamentals of human behavior, and scrutinize our societal organization? AI’s promise could be an apocalypse or eternal bliss or anything in between, but, as we speculate on the outcome, we are making a value judgment. And here we ought to recognize our susceptibility to <em class="markup--em markup--p-em">projection bias</em>, which compels us to apply present-day intellectual framing to ponder the future.</p>
<p id="a714" class="graf graf--p graf-after--p"><span class="markup--quote markup--p-quote is-other" data-creator-ids="8936d25e682b">Putting the <em class="markup--em markup--p-em">war</em> and <em class="markup--em markup--p-em">AI</em> in the same sentence, we anthropomorphize the latter. When we worry about the robots and machine-intelligence causing mass unemployment, we must recognize that such anxiety is only justified if human labor remains an economic necessity. When we say that the spiraling-out-of-control tech progress will create more <em class="markup--em markup--p-em">inequality</em>, we assume that the idea of private property, wealth, and money will survive the fourth-industrial revolution.</span></p>
<figure id="e919" class="graf graf--figure graf-after--p">
<img decoding="async" src="https://cdn-images-1.medium.com/max/1600/1*1eKzcblq9MFgx-m17pwByw.jpeg" width="800" height="510" />
</figure>
<p id="2a52" class="graf graf--p graf-after--figure">It’s an arduous task to define the fundamental terms, much less to question them. But, perhaps, playing out a couple of scenarios could prove a useful exercise in circumventing projection bias.</p>
<h4 id="4408" class="graf graf--h4 graf-after--p"><strong class="markup--strong markup--h4-strong">Competition &amp; Collaboration</strong></h4>
<p id="bbac" class="graf graf--p graf-after--h4"><em class="markup--em markup--p-em">Natural selection</em> is, at its core, a multidimensional competition of traits and behaviors. It manifests itself in a basic competitive instinct that humans are all too familiar with. Evolutionary psychology postulates that the driver of human behavior is a need to perpetuate one’s genes. So Homo Sapiens evolved competing for mates and fighting for resources to feed their offspring, all with the singular objective of maximizing their genes’ chances of being passed on.</p>
<blockquote id="2ba4" class="graf graf--pullquote graf-after--p"><p>When the algorithms are better at decision-making than humans, and we surrender much of our autonomy to them, how will our <em class="markup--em markup--pullquote-em">competitive instinct</em> fare?</p></blockquote>
<p id="79d9" class="graf graf--p graf-after--pullquote">On the other hand, we are, according to Edward O. Wilson, “<em class="markup--em markup--p-em">one of only two dozen or so animal lines ever to evolve</em><em class="markup--em markup--p-em"> eusociality</em><em class="markup--em markup--p-em">, the next major level of biological organization above the organismic. There, group members across two or more generations stay together, cooperate, care for the young, and divide labor</em>…” In other words, we might have to attribute the stunning success of our species to the fine balance we’ve maintained between competition and cooperation instincts.</p>
<blockquote id="4b70" class="graf graf--pullquote graf-after--p"><p>What will be the point of <em class="markup--em markup--pullquote-em">resource competition</em> in the world of <em class="markup--em markup--pullquote-em">abundance</em>?</p></blockquote>
<p id="7698" class="graf graf--p graf-after--pullquote">Whether general machine intelligence is imminent or even achievable, the idea of a post-scarcity economy is gaining ground. If and when the automation of pretty much everything delivers a world where human labor is redundant, what will be the wider ramifications for our value system and societal organization? When the algorithms are better at decision-making than humans, and we surrender much of our autonomy to them, how will our <em class="markup--em markup--p-em">competitive instinct</em> fare?</p>
<p id="9caa" class="graf graf--p graf-after--p">What will be the point of <em class="markup--em markup--p-em">resource competition</em> in the world of <em class="markup--em markup--p-em">abundance</em>? Is it possible that our instinct to compete slowly evaporates as a useful construct? Could we evolve to live without it? Unlike ants and bees that cooperate on the basis of rigid protocols, humans are spectacularly adaptable in our cooperation abilities. According to Yuval Harari, that’s what ultimately underpinned the rise of sapiens to dominate the Earth. Is it conceivable that the need to compete turns into an atavism as the technic transformations described by Kurzweil begin to materialize?</p>
<h4 id="c083" class="graf graf--h4 graf-after--p"><strong class="markup--strong markup--h4-strong">Economy</strong></h4>
<p id="1f9d" class="graf graf--p graf-after--h4">How can we be sure that the basic pillars of our economic thinking (e.g. private property, ownership, capital, wealth, etc.) will survive post-scarcity? 100 years from now, will anybody care about <em class="markup--em markup--p-em">labor productivity</em>? How relevant will our policies encouraging employment be when all of humanity is freeriding on the “efforts” of the machines? What are we left with when basics such as supply and demand have been shattered?</p>
<blockquote id="ecff" class="graf graf--pullquote graf-after--p"><p>If ownership is pointless and money is no longer a useful unit of exchange, how will we define status?</p></blockquote>
<p id="1ef2" class="graf graf--p graf-after--pullquote">To a gainfully employed person, today, a prospect of indefinite leisure might appear more of a curse than a blessing. This sentiment, viewed through the lens of natural selection, makes sense. The economic contribution by all able members of society would’ve been preferred to the mass pursuit of idleness. But should we be projecting the same trend into the future? What may sound like decadence and decay to us now, may be construed quite differently in the world no longer powered by the known economic forces.</p>
<p id="0cda" class="graf graf--p graf-after--p">The working assumption is that no matter what, someone will have to own the machines and pay for goods and services. Yet the idea of property and money is nothing more than social constructs that we all agreed on. If ownership is pointless and money is no longer a useful unit of exchange, how will we define status?</p>
<p id="2880" class="graf graf--p graf-after--p">Certainly, the questions are plentiful and the answers are few. And I, for one, am in no position to offer concrete proposals or defend admittedly speculative arguments. The bottom line is that we are firmly on the path to subvert the forces of evolution, which have been, since the dawn of time, the main drivers of our behavior. As political and religious dogmas have changed, the most basic economic principle has remained: satisfying human needs and wants required human effort. Those fundamental forces are clearly threatened by the accelerating pace of tech progress, singularity notwithstanding.</p>
<p id="4ac8" class="graf graf--p graf-after--p graf--trailing">The ideas presented here may sound utopian and naïve. And Elon Musk may well be right: the invention of AI could spell the end of the human race. It is humanity’s awesome responsibility, therefore, to design proper governance for artificial intelligence and to think it through before we take the plunge. When contemplating the future, we must be cognizant of the limits of our understanding and thus make use of our imagination — a distinctly human trait, at least for the time being.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-the-good-the-bad-and-the-unfathomable/">Artificial Intelligence: The Good, The Bad, and The Unfathomable</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-the-good-the-bad-and-the-unfathomable/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>How artificial intelligence can define not destroy the future of work</title>
		<link>https://www.aiuniverse.xyz/how-artificial-intelligence-can-define-not-destroy-the-future-of-work/</link>
					<comments>https://www.aiuniverse.xyz/how-artificial-intelligence-can-define-not-destroy-the-future-of-work/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 08 Sep 2017 06:42:39 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[future technology]]></category>
		<category><![CDATA[IT]]></category>
		<category><![CDATA[Robotics]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1021</guid>

					<description><![CDATA[<p>Source &#8211; afr.com Work is a defining feature of our civilisation. We spend more time in our jobs than any other activity and the spoils of our labour <a class="read-more-link" href="https://www.aiuniverse.xyz/how-artificial-intelligence-can-define-not-destroy-the-future-of-work/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-artificial-intelligence-can-define-not-destroy-the-future-of-work/">How artificial intelligence can define not destroy the future of work</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211;<strong> afr.com</strong></p>
<p>Work is a defining feature of our civilisation. We spend more time in our jobs than in any other activity, and the spoils of our labour provide us the means to survive. Work gives us identity, status and purpose.</p>
<p>So we can be forgiven if we get nervous when our jobs are threatened. And there has never been a clearer threat than increasing automation through robotics and AI.</p>
<p>I am of the camp that believes that ultimately, in about 15 years, we will all be far better off, our notion of work evolved and, with it, our lives. Yet the transition up to that point is going to be very harsh for most, leaving plenty of scars of inequality and alienation on our society.</p>
<p>Talk of universal basic income, the taxation of robot output and a demotion of human capital to that of mere data suppliers is real. How our innovators introduce the coming disruption will shape its impact. How we manage this transition as a people will define us all.</p>
<div class="cq-article-content-paras section">
<h2>Where AI is better than us</h2>
</div>
<div class="cq-article-content-paras section">
<p>Eventually, everything, given enough progress and time.</p>
<p>But in the nearer term computers surpass us in tasks involving collecting information, structuring and digesting huge amounts of it quickly, understanding the correlations within, surfacing probabilities and optimising for results.</p>
<p>The jobs most vulnerable are ones where the information analysis above is done repeatedly and with little variation: lawyers drafting contracts, accountants doing tax returns, travel agents planning holidays, IT staff running security checks, marketers buying ad spots, radiologists examining X-rays, journalists reporting news, and so on.</p>
</div>
<div class="cq-article-content-paras section">
<p>Humans won&#8217;t be able to compete with the speed, efficiency and scale at which their computer counterparts will deliver.</p>
</div>
<div class="cq-article-content-paras section">
<p>Robots have already proven themselves masters of the assembly line and will continue to move up the chain yet at a slower pace than their pure-software brethren.</p>
<p>Jobs at risk range from monitoring power lines and securing borders to farming crops, mining, driving taxis and trucking or shipping goods. Autonomous drones and vehicles will win us over with pinpoint accuracy, 24/7 reliability, enhanced functionality and safety.</p>
<div class="cq-article-content-paras section">
<p>Underlying it all is cost. Automated systems, whether hardware or software, will simply beat humans to the bottom line, to the point where including a person would be equivalent to holding on to a telephone operator to direct your phone calls.</p>
<h2>Where we are better than AI</h2>
</div>
<div class="cq-article-content-paras section">
<p>The areas where it will be hardest for AI and robotics to gain a medium-term edge revolve around unique human-to-human interactions.</p>
<p>Things like knowing when not to speak, listening with empathy, exchanging a look or a smile or delivering a well-executed joke.</p>
<p>These strengths, and the freeing up of labour from other industries, could result in economies shifting greater monetary value to historically undervalued work that cannot easily be automated – such as social services, aged care, volunteering and child development – in line with the value it brings to society.</p>
<p>I&#8217;m not suggesting that stockbrokers need to transition to caring for the elderly, but within each job there is a humanistic element that will increase in importance relative to the pure production role that can be automated.</p>
<h2>Education&#8217;s role in a successful transition</h2>
<p>Cookie-cutter learning is automation. If students are to find new roles in the workforce of the future, they must be nurtured to individualise their skill sets and, importantly, to develop a framework of continued learning throughout their lives.</p>
<p>Fortunately, the same technology that brings us personalised recommendations on Netflix and Amazon is finding its way into education, crafting a unique journey for each student, chaperoned by a combination of teachers and machine learning designed to surface relevant content and topics to explore.</p>
<p>It is a mistake to view the growing trend of young people switching jobs as fickleness. Properly executed, and in partnership with ongoing education throughout working life, it is the agility necessary to compete in the rapidly evolving world they are inheriting.</p>
<div class="cq-article-content-paras section">
<h2>Fluid labour movement via new platforms</h2>
<p>There is a role for a more sophisticated employment platform that better matches increasingly diversified candidates with jobs.</p>
<p>Lifelong careers in one role could become a thing of the past. Work might shift to specific tasks and projects where the best individual for the job can be sourced quickly and efficiently.</p>
<p>We are seeing the initial signs of a shift to more project-based work in the gig economy, with on-demand workforces such as Uber&#8217;s.</p>
<p>Yet while such roles carry a negative connotation as low-skilled and low-paid, a similar framework could be successfully applied to highly skilled tasks.</p>
<p>Doctors are already being decentralised away from a day in the clinic towards telemedicine and proactive mobile health that has no geographic boundary.</p>
<p>An American specialist in gut health could spend the morning at home treating patients in Cape Town, be pulled in to consult on an Australian government microbiome initiative in the afternoon and finish her day working with Nestle on a healthy alternative to chocolate.</p>
<p>A fluid labour force need not translate to volatile work or uncertainty about when your next pay cheque will arrive. Downtimes can be matched with short periods of education to keep increasing our value in the labour market.</p>
</div>
<div class="cq-article-content-paras section">
<p>Similar platforms can improve supply and demand between organisations too, matching clients with vendors and eliminating the effort wasted in seeking out, piloting and contracting suppliers or buyers.</p>
<p>With respect to powering such platforms, machines could have an enabling impact on our labour market rather than a detrimental one.</p>
<h2>Enhancing ourselves with technology</h2>
<p>It would be remiss not to mention upgrading our human capabilities directly with future technology to keep up with the pace of advancements.</p>
<p>A wealth of research and development has been taking place, from memory enhancement to brain-to-internet connectivity and DNA manipulation.</p>
<p>In a way we are already merging with our technology, with our smartphones like appendages and VR headsets transporting us into digital worlds.</p>
<p>Yet legislators and the public have little understanding of this nascent technology, and for now it remains largely outside the scope of regulation or public guidance.</p>
<p>Without the right frameworks, leaving human enhancement up to the market will surely be the quickest way to widen the gap between the haves and the have-nots.</p>
</div>
<div class="cq-article-content-paras section">
<p>Rapid progress is coming, and we must not underestimate its scale and impact. As we enter unfamiliar territory, we retain the power to shape the opportunities and mitigate the pitfalls ahead.</p>
</div>
<p>&nbsp;</p>
</div>
</div>
<p>The post <a href="https://www.aiuniverse.xyz/how-artificial-intelligence-can-define-not-destroy-the-future-of-work/">How artificial intelligence can define not destroy the future of work</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-artificial-intelligence-can-define-not-destroy-the-future-of-work/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>How Artificial Intelligence Is Changing Storytelling</title>
		<link>https://www.aiuniverse.xyz/how-artificial-intelligence-is-changing-storytelling/</link>
					<comments>https://www.aiuniverse.xyz/how-artificial-intelligence-is-changing-storytelling/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 13 Jul 2017 12:04:20 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[future technology]]></category>
		<category><![CDATA[intelligent devices]]></category>
		<category><![CDATA[IT development]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Microsoft technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=36</guid>

					<description><![CDATA[<p>Source &#8211; huffingtonpost.com Artificial Intelligence or AI can create dynamic content. Let’s apply best use cases to our work as storytellers. At this year’s Wimbledon Tennis Tournament, for <a class="read-more-link" href="https://www.aiuniverse.xyz/how-artificial-intelligence-is-changing-storytelling/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-artificial-intelligence-is-changing-storytelling/">How Artificial Intelligence Is Changing Storytelling</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211;<strong> huffingtonpost.com</strong></p>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>Artificial intelligence, or AI, can create dynamic content. Let’s apply its best use cases to our work as storytellers.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>At this year’s Wimbledon Tennis Tournament, for example, IBM’s artificial intelligence platform, Watson, had a major editorial role — analyzing and curating the best moments and data points from the matches, producing “Cognitive Highlight” videos, tagging relevant players and themes, and sharing the content with Wimbledon’s global fans.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>Intel just announced a collaboration with the International Olympic Committee (IOC) that will bring VR, 360 replay technology, drones and AI to future Olympic experiences. In a recent press release Intel notes, “The power to choose what they want to see and how they want to experience the Olympic Games will be in the hands of the fans.”</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>In the context of development, future technology will change the way we interact with global communities. Researchers at Microsoft are experimenting with a new class of machine-learning software and tools to embed AI onto tiny intelligent devices. These “edge devices” don’t depend on internet connectivity, reduce bandwidth constraints and computational complexity, and limit memory requirements yet maintain accuracy, speed, and security — all of which can have a profound effect on the development landscape. Specific projects focus on small farmers in poor and developing countries, and on precision wind measurement and prediction.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>Microsoft’s technology could help push the smarts to small, cheap devices that can function in rural communities and places that are not connected to the cloud. These innovations could also make “the Internet of Things devices cheaper, making it easier to deploy them in developing countries,” according to a leading Microsoft researcher.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>But the fact is that non-western settings currently pose the greatest challenge for AR/VR platforms. Wil Monte, founder and Director of Millipede, one of our SecondMuse collaborators, says VR/AR platforms are completely hardware-reliant and, as a new technology, often require a specification level that is cost-prohibitive for many.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>Monte says that labs like Microsoft’s, by pushing the processing capability of machine learning while shrinking the hardware requirements, will soon make implementing these technologies much more feasible in non-western or developing settings. He says development agencies should be empowered to push, optimise and democratise the technology so that it has as many use cases as possible, enabling storytellers to deploy much-needed content to more people in different settings.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>“From our experience in Tonga, I learned that while the delivery of content via AR/VR is especially compelling, the infrastructure restraints means that we need to ‘hack’ the normal deployment and distribution strategies to enable the tech to have the furthest reach. With Millipede’s lens applied, this would be immersive and game-based storytelling content, initially delivered on touch devices but also reinforced through a physical board or card game to enable as much participation in the story as possible,” Monte says.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>According to Ali Khoshgozaran, Co-founder and CEO of Tilofy, an AI-powered trend forecasting company based in Los Angeles, content creation is one of the most exciting segments where technology can work hand in hand with human creativity to apply more data-driven, factual and interactive context to a story. For example, at Tilofy, they automatically generate insights and context behind all their machine generated trend forecasts. “When it comes to accessing knowledge and information, issues of digital divide, low literacy, low internet penetration rate and poor connectivity still affect hundreds of millions of people living in rural and underdeveloped communities all around the world,” Khoshgozaran says.</p>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>“This presents another great opportunity for technology to bridge the gap and bring the world closer. Microsoft use of AI in Skype’s real-time translator service has allowed people from the furthest corners of the world to connect — even without understanding each other’s native language — using a cellphone or a landline. Similarly, Google’s widely popular translate service has opened a wealth of content originally created in one language to many others. Due to its constant improvements in quality and number of languages covered, Google Translate might soon enhance or replace human-centric efforts like project Lingua by auto translating trending news at scale,” Khoshgozaran says.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>Furthermore, technologies like Google’s Tango and Apple’s ARKit can provide new opportunities, says Ali Fardinpour, Research Scientist in Learning and Assessment via Augmented/Virtual Reality at CingleVue International in Australia. “The opportunity to bring iconic characters out of literature and history and bring them to every kid’s mobile phone or tablet and educate them on important issues and matters in life can be one of the benefits of Augmented Reality Storytelling.”</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>Fardinpour says this kind of technology can compensate for absent or misleading mainstream media coverage, educating kids and even adults about current development projects. “I am sure there are a lot of amazing young storytellers who would love the opportunity to create their own stories to tell to inspire their communities. And this is where AR/AI can play an important role.”</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>A profound view of the future of storytelling comes from Tash Tan, co-founder of the Sydney-based digital company S1T2. Tan is leading one of our immersive storytelling projects in the South Pacific called LAUNCH Legends, aimed at addressing issues of healthy eating and nutrition through the use of emerging, interactive technologies. “As storytellers it is important to consider that perhaps we are one step closer to creating a truly dynamic story arc with artificial intelligence. This means that stories won’t be predetermined, pre-authored or curated; instead they will be emerging and dynamically generated with every action or consequence,” Tan says. “If we can create a world that is intimate enough and subsequently immersive enough, we can perhaps teach children through the best protagonist of all — themselves.”</p>
</div>
</div>
<p>The post <a href="https://www.aiuniverse.xyz/how-artificial-intelligence-is-changing-storytelling/">How Artificial Intelligence Is Changing Storytelling</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-artificial-intelligence-is-changing-storytelling/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
	</channel>
</rss>
