<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>IT development Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/it-development/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/it-development/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 22 Nov 2019 06:02:57 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Unlocking the potential of smart cameras with deep learning</title>
		<link>https://www.aiuniverse.xyz/unlocking-the-potential-of-smart-cameras-with-deep-learning/</link>
					<comments>https://www.aiuniverse.xyz/unlocking-the-potential-of-smart-cameras-with-deep-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 22 Nov 2019 06:02:55 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[DevOps Technology]]></category>
		<category><![CDATA[IT development]]></category>
		<category><![CDATA[software developers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5331</guid>

					<description><![CDATA[<p>Source:-itproportal.comThis article will examine the obstacles involved in trying to detect moving objects and how smart cameras and deep learning can correct them. An object in motion <a class="read-more-link" href="https://www.aiuniverse.xyz/unlocking-the-potential-of-smart-cameras-with-deep-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/unlocking-the-potential-of-smart-cameras-with-deep-learning/">Unlocking the potential of smart cameras with deep learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source:-itproportal.com<br>This article will examine the obstacles involved in trying to detect moving objects and how smart cameras and deep learning can correct them.</p>



<p>An object in motion looks fundamentally different from an object at rest — especially to a computer. To get a better idea of this concept, let’s imagine a film strip of a sprinter running: The person and pose in one frame look drastically different from the next frame, right?</p>



<p>Making sense of dynamic objects is taking on new importance as cities begin incorporating IoT devices like smart cameras to streamline municipal life. The town of Yuma, Arizona, is a great example of this. The city recently installed cameras on streetlights that can detect when cars, bicycles, and pedestrians travel through intersections, and it uses that data to optimise signal switching.</p>



<p>Athena Security is pioneering another interesting application of moving-video analysis: The company sells software that uses artificial intelligence to detect when people are fighting, fleeing, or lurking to determine whether crimes are being committed (or are imminent). Unsurprisingly, everyone from municipal police departments to Fortune 500 companies is interested in this AI application.</p>



<p>The applications are endless for IoT devices like smart cameras that analyse moving video. Fortunately, this technology has now reached a point where almost anything is possible.</p>






<h2 class="wp-block-heading" id="solving-mysteries-in-moving-video">Solving mysteries in moving video</h2>



<p>Using computers to analyse video isn’t exactly a new concept. However, there’s one problem hampering the development of video analysis: Moving video is full of dynamic variables that can confuse even the smartest computers.</p>



<p>Objects look completely different in low light compared to bright light, for instance, which can lead to false analyses. Perspective offers up another challenge: Think about how different a car looks when it’s traveling parallel and then perpendicular to a relative point.</p>



<p>Other issues that might be confusing for a machine’s analysis of video include moving shadows, complex backgrounds, obscured objects, unexpected movements, and a camera’s technical limitations. For all these reasons, moving-video analysis has historically had a lot of potential — but not too many practical applications.</p>



<p>That’s all changing with advances in deep learning systems, which are often referred to as neural networks. Today, computing has advanced to a point where systems can learn from past data to get better at understanding future data.</p>



<p>The ability to learn and adapt is crucial for computers that need to make sense of the ever-changing data coming from moving video — and different combinations of neural networks could provide a solution. With convolutional neural networks, for example, computers model space in three dimensions to better predict the trajectory of objects within that space.</p>



<p>Deep neural networks can help cancel out background images so cameras can focus explicitly on moving objects. There are also recurrent neural networks that excel at pattern recognition. Each of these networks has strengths and weaknesses, but using them in the right combination makes moving-video analysis highly accurate in almost any setting.</p>






<h2 class="wp-block-heading" id="connectivity-and-the-future-of-smart-cameras">Connectivity and the future of smart cameras</h2>



<p>My company recently worked on a project that demonstrates how far connected devices like smart cameras have come, as well as the challenges they still face. For this particular project, a client in Israel asked us to develop a program to detect kicking motions in live soccer matches televised at 20 frames per second.</p>



<p>From the start, the project faced two obstacles: First, our team had to distinguish between an actual “kick” and a swinging leg motion that looked quite similar. Second, we needed to do that at 20 frames per second. That’s a higher frame rate than most surveillance footage, and it’s packed with much more data to analyse.</p>



<p>Initially, we tried developing an algorithm that would create two “bounding boxes” around both a player’s foot and the soccer ball, and then register when those two boxes met. In practice, however, detecting kicks became extremely inaccurate when players were clumped together (and we know that happens a lot in soccer).</p>
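<p>The bounding-box approach described here amounts to an axis-aligned overlap test. As a minimal sketch in Python (the coordinates and function name are illustrative, not the project’s actual code):</p>

```python
def boxes_overlap(a, b):
    """Check whether two axis-aligned bounding boxes intersect.

    Each box is (x_min, y_min, x_max, y_max) in pixel coordinates.
    """
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # There is no overlap only if one box lies entirely to one side of the other.
    return not (ax2 < bx1 or bx2 < ax1 or ay2 < by1 or by2 < ay1)

# A foot box meeting the ball box would register as a candidate kick frame.
foot = (100, 200, 140, 260)
ball = (130, 240, 170, 280)
print(boxes_overlap(foot, ball))  # True: the two boxes intersect
```

<p>The test itself is trivial; as the article notes, the hard part is that crowded scenes produce many spurious overlaps.</p>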



<p>The solution? Tweak the deep learning element. We adjusted how the underlying neural networks were configured so we could accelerate object detection. Then, we created a data set using 500 frames taken from 20 seconds of a soccer match. Our team manually annotated this data to identify kicks and “non-kicks,” and we used it to “teach” our algorithms to make that distinction.</p>
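<p>Conceptually, training on annotated frames means finding a decision rule that separates “kick” from “non-kick” examples. A toy stand-in in plain Python, using a hypothetical foot-to-ball distance feature in place of the learned deep features a real pipeline would use:</p>

```python
def learn_kick_threshold(distances, labels):
    """Pick the foot-to-ball distance threshold that best separates
    annotated "kick" (1) frames from "non-kick" (0) frames.

    The distance feature is a hypothetical simplification; real systems
    learn far richer features from the pixels themselves.
    """
    best_t, best_acc = None, -1.0
    for t in sorted(set(distances)):
        preds = [1 if d <= t else 0 for d in distances]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Eight annotated frames: small foot-to-ball distances tend to be kicks.
dists = [3, 5, 4, 20, 25, 6, 30, 22]
labels = [1, 1, 1, 0, 0, 1, 0, 0]
t, acc = learn_kick_threshold(dists, labels)
print(t, acc)  # threshold 6 classifies all eight frames correctly
```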



<p>Our program eventually identified 58 per cent of real kicks, and improving that figure was possible by feeding the program data from more matches and different sets of players.</p>



<p>Still, the project was a success: it proved that, with the right configurations, deep learning can make sense of all the complexities of moving video on a connected device. While achieving these ends might take a ton of reference data, the technology has proved its usefulness and has finally made moving-video analysis a reality.</p>



<p>This kind of technology can be applied in many areas, from IoT surveillance systems to self-driving cars. And if one thing is certain, it’s that there’s not much further to go before moving-video analysis begins transforming our lives.</p>
<p>The post <a href="https://www.aiuniverse.xyz/unlocking-the-potential-of-smart-cameras-with-deep-learning/">Unlocking the potential of smart cameras with deep learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/unlocking-the-potential-of-smart-cameras-with-deep-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deep-Learning Framework SINGA Graduates to Top-Level Apache Project</title>
		<link>https://www.aiuniverse.xyz/deep-learning-framework-singa-graduates-to-top-level-apache-project/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-framework-singa-graduates-to-top-level-apache-project/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 20 Nov 2019 12:16:20 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Apache-software]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[IT development]]></category>
		<category><![CDATA[IT leaders]]></category>
		<category><![CDATA[software development]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5286</guid>

					<description><![CDATA[<p>Source:-infoq.com The Apache Software Foundation (ASF) recently announced that SINGA, a framework for distributed deep-learning, has graduated to top-level project (TLP) status, signifying the project&#8217;s maturity and stability. SINGA has already been <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-framework-singa-graduates-to-top-level-apache-project/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-framework-singa-graduates-to-top-level-apache-project/">Deep-Learning Framework SINGA Graduates to Top-Level Apache Project</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source:-infoq.com<br></p>



<p>The Apache Software Foundation (ASF) recently announced that SINGA, a framework for distributed deep-learning, has graduated to top-level project (TLP) status, signifying the project&#8217;s maturity and stability. SINGA has already been adopted by companies in several sectors, including banking and healthcare.</p>



<p>Originally developed at the National University of Singapore, SINGA joined ASF&#8217;s incubator in March 2015. SINGA provides a framework for distributing the work of training deep-learning models across a cluster of machines, in order to reduce the time needed to train the model. In addition to its use as a platform for academic research, SINGA has been used in commercial applications by Citigroup and CBRE, as well as in several health-care applications, including an app to aid patients with pre-diabetes.</p>



<p>The success of deep-learning models has been driven by the use of very large datasets, such as ImageNet with hundreds of thousands of images, and complex models with millions of parameters. Google&#8217;s BERT natural-language model contains 300 million parameters and is trained on nearly 3 billion words. However, this training often requires hours, if not days, to complete. To speed up this process, researchers have turned to parallel computing, which distributes the work across a cluster of machines. According to Professor Beng Chin Ooi, leader of the research group that developed SINGA:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>It is essential to scale deep learning via distributed computing as&#8230;deep learning models are typically large and trained over big datasets, which may take hundreds of days using a single GPU.</p></blockquote>



<p>There are two broad parallelism strategies for distributed deep-learning: data parallelism, where multiple machines work on different subsets of the input data, and model parallelism, where multiple machines train different sections of the neural-network model. SINGA supports both of these strategies, as well as a combination of the two. These strategies do introduce some communication and synchronization overhead, required to coordinate the work among the machines in the cluster. SINGA implements several optimizations to minimize this overhead.</p>
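<p>The data-parallel strategy in particular rests on a simple invariant: with equal-sized shards, averaging the per-worker gradients reproduces the full-batch gradient. A plain-Python illustration (this is not SINGA’s API; the one-parameter model and numbers are made up):</p>

```python
def grad_mse(w, xs, ys):
    """Gradient of mean squared error for a one-parameter linear model y = w*x."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def data_parallel_grad(w, xs, ys, workers=2):
    """Data parallelism: each worker computes the gradient on its shard of
    the batch, then the shard gradients are averaged (the synchronization
    step that SINGA-style frameworks optimize). Assumes the batch size is
    divisible by the number of workers.
    """
    shard = len(xs) // workers
    grads = [grad_mse(w, xs[i * shard:(i + 1) * shard], ys[i * shard:(i + 1) * shard])
             for i in range(workers)]
    return sum(grads) / workers

xs, ys = [1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]
print(grad_mse(1.5, xs, ys), data_parallel_grad(1.5, xs, ys))  # both print -7.5
```

<p>Model parallelism, by contrast, would split the parameters of the model itself across workers rather than the batch.</p>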



<p>Acceptance as a top-level project means that SINGA has passed several milestones related to software quality and community, which in theory makes the software more attractive as a solution. However, one possible barrier to adoption is that instead of building upon an existing API for modeling neural networks, such as Keras, SINGA&#8217;s designers chose to implement their own. By contrast, the Horovod framework open-sourced by Uber allows developers to port existing models written for the two most popular deep-learning frameworks, TensorFlow and PyTorch. PyTorch in particular is the framework used in a majority of recent research papers.<br><br>ASF has several other top-level distributed-data processing projects that support machine-learning, including Spark and Ignite. Unlike these, SINGA is designed specifically for deep-learning&#8217;s large models. ASF is also home to MXNet, a deep-learning framework similar to TensorFlow and PyTorch, which is still in incubator status. AWS touted MXNet as its framework of choice in late 2016, but MXNet still hasn&#8217;t achieved widespread popularity, hovering at just under 2% in KDNugget&#8217;s polls.</p>



<p>Apache SINGA version 2.0 was released in April 2019. The source code is available on GitHub, and a list of open issues can be tracked in SINGA&#8217;s Jira project. According to ASF, upcoming features include &#8220;SINGA-lite for deep learning on edge devices with 5G, and SINGA-easy for making AI usable by domain experts (without deep AI background).&#8221;</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-framework-singa-graduates-to-top-level-apache-project/">Deep-Learning Framework SINGA Graduates to Top-Level Apache Project</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-framework-singa-graduates-to-top-level-apache-project/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google releases source code of new on-device machine learning solutions</title>
		<link>https://www.aiuniverse.xyz/google-releases-source-code-of-new-on-device-machine-learning-solutions/</link>
					<comments>https://www.aiuniverse.xyz/google-releases-source-code-of-new-on-device-machine-learning-solutions/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 15 Nov 2019 06:06:07 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Android-ecosystem]]></category>
		<category><![CDATA[IT development]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Source Code]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5184</guid>

					<description><![CDATA[<p>Source:-zdnet.comMobileNetV3 and MobileNetEdgeTPU have been released to the open source community. Google has opened up the source code of two machine learning (ML) on-device systems, MobileNetV3 and <a class="read-more-link" href="https://www.aiuniverse.xyz/google-releases-source-code-of-new-on-device-machine-learning-solutions/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-releases-source-code-of-new-on-device-machine-learning-solutions/">Google releases source code of new on-device machine learning solutions</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source:-zdnet.com<br>MobileNetV3 and MobileNetEdgeTPU have been released to the open source community.</p>



<p>Google has opened up the source code of two machine learning (ML) on-device systems, MobileNetV3 and MobileNetEdgeTPU, to the open source community.</p>



<p>In a blog post, software and silicon engineers Andrew Howard and Suyog Gupta from Google Research said on Wednesday that both the source code and checkpoints for MobileNetV3, as well as the Pixel 4 Edge TPU-optimized counterpart MobileNetEdgeTPU, are now available.</p>






<p>On-device ML applications for responsive intelligence have been designed with power-limited devices in mind, including our smartphones, tablets, and Internet of Things (IoT) electronics.</p>






<p>Google says the demand for mobile intelligence has prompted research into algorithmically-efficient neural network models and hardware &#8220;capable of performing billions of math operations per second while consuming only a few milliwatts of power,&#8221; such as in the case of the Google Pixel 4&#8217;s Pixel Neural Core.</p>



<p>The latest MobileNet offerings include improvements to architectural design, speed, and accuracy, Google says. On mobile CPUs, users can expect MobileNetV3 to run at double the speed of MobileNetV2, bolstered through AutoML and NetAdapt, the latter of which has sliced away under-utilized activation channels.</p>
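<p>Removing under-utilized channels can be pictured as a ranking-and-truncation step. The following toy Python keeps only the most active channels; the mean-absolute-activation score and all names here are illustrative assumptions, not NetAdapt’s actual algorithm:</p>

```python
def prune_channels(activations, keep_ratio=0.5):
    """Rank channels by mean absolute activation and keep the top fraction.

    `activations` is a list of observed activation values per channel.
    Returns the sorted indices of the surviving channels.
    """
    scores = [sum(abs(a) for a in ch) / len(ch) for ch in activations]
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = max(1, int(len(scores) * keep_ratio))
    return sorted(ranked[:keep])

# Four channels observed over two inputs; two are nearly silent.
acts = [[0.9, 1.1], [0.01, 0.02], [0.5, 0.4], [0.0, 0.0]]
print(prune_channels(acts))  # [0, 2]: the two most active channels survive
```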






<p>A new activation function called hard-swish (h-swish) has also been implemented to improve functionality on mobile devices and reduce the risk of bottlenecks. Overall latency has been decreased by 15 percent and object detection latency has been reduced by 25 percent in comparison to MobileNetV2.</p>
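<p>The h-swish function itself is simple to state: h-swish(x) = x * ReLU6(x + 3) / 6, a piecewise approximation of the swish activation that avoids computing an exponential on mobile CPUs. A minimal Python sketch:</p>

```python
def relu6(x):
    """ReLU capped at 6, a cheap operation on mobile hardware."""
    return min(max(0.0, x), 6.0)

def hard_swish(x):
    """h-swish(x) = x * ReLU6(x + 3) / 6.

    For x <= -3 the output is 0; for x >= 3 it equals x; in between it
    smoothly (and piecewise-linearly in the gate) interpolates.
    """
    return x * relu6(x + 3.0) / 6.0

for x in (-4.0, 0.0, 1.0, 4.0):
    print(x, hard_swish(x))
```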



<p>The MobileNetEdgeTPU model &#8212; similar to the Edge TPU in Coral products but tweaked for the camera features in Pixel 4 &#8212; now also has increased accuracy in comparison to earlier versions, while reducing both runtime and power requirements.</p>



<p>Google did not set out to reduce the power demands of this model, but when compared to the basic MobileNetV3, MobileNetEdgeTPU consumes 50 percent less juice.</p>






<p>MobileNetV3 and MobileNetEdgeTPU code can now be accessed from the MobileNet GitHub repository.</p>



<p>Developers can also pick up a copy of the open source implementation for MobileNetV3 and MobileNetEdgeTPU object detection from the Tensorflow Object Detection API page, and DeepLab is hosting the open source implementation for MobileNetV3 semantic segmentation.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-releases-source-code-of-new-on-device-machine-learning-solutions/">Google releases source code of new on-device machine learning solutions</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-releases-source-code-of-new-on-device-machine-learning-solutions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The vital role of humans in machine learning</title>
		<link>https://www.aiuniverse.xyz/the-vital-role-of-humans-in-machine-learning/</link>
					<comments>https://www.aiuniverse.xyz/the-vital-role-of-humans-in-machine-learning/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 25 Nov 2017 05:45:37 +0000</pubDate>
				<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Humans intelligence]]></category>
		<category><![CDATA[IT development]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[media intelligence]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1774</guid>

					<description><![CDATA[<p>Source &#8211; mediaupdate.co.za Machine learning allows the algorithms that software and systems use to evolve as they process new data. Contrary to what many people think, humans play <a class="read-more-link" href="https://www.aiuniverse.xyz/the-vital-role-of-humans-in-machine-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-vital-role-of-humans-in-machine-learning/">The vital role of humans in machine learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>mediaupdate.co.za</strong></p>
<h6 id="ctl00_ContentPlaceHolder1_divBodyIntro" class="article-intro">Machine learning allows the algorithms that software and systems use to evolve as they process new data. Contrary to what many people think, humans play an integral role in this process and in applying machine learning-driven solutions to real-world problems.</h6>
<p>Machine learning allows systems and AI engines to automatically learn and improve from experience. It is used to improve existing data processing systems or to create new software solutions.</p>
<p>Humans are closely involved in every step of producing technology that relies on machine learning. <em>media update</em> unpacks the end-to-end role that humans play in machine learning, and how humans and AI complement one another.</p>
<p><strong>Here’s a look at the numerous departments within a company that contribute to developing solutions powered by machine learning:</strong></p>
<h3>Sales and customer care teams</h3>
<p>AI is currently a buzzword, with many companies experimenting with the capabilities of this ever-evolving technology. But, AI-powered solutions will only survive in the marketplace if they are purpose-built for a customer need, and that is why sales and customer care teams have to be involved in the development of the product.</p>
<blockquote><p><em>&#8220;AI-powered solutions will only survive in the marketplace if they are purpose-built for a customer need.&#8221;</em></p></blockquote>
<p>Salespeople and customer support teams deal with customers and clients, which means they are ideally positioned to identify specific consumer needs. This information is key to developing viable products, services, or solutions. This is especially true for developers of machine learning-driven solutions.</p>
<p>The role of these teams does not end at this point. Customers provide feedback on the products or solutions, and these insights can be used to improve the company&#8217;s offerings.</p>
<h3>Developers, experts, and knowledge engineers</h3>
<p>Once the clients’ needs have been established, IT development teams create the algorithms that form the basis of the machine learning system.</p>
<p>They gather relevant data from a number of sources inside and outside their company. Information and knowledge about the data are acquired from a subject matter expert, who is a specialist in the field related to the solution. Knowledge engineers enter this information into a knowledge base.</p>
<blockquote><p><em>&#8220;Information and knowledge about the data are acquired from a subject matter expert, who is a specialist in the field related to the solution.&#8221;</em></p></blockquote>
<p>An engine then creates models based on the data in the knowledge base. The development team evaluates the model that the engine has created and the results of new data it processes. They make necessary adjustments to the engine so that it processes the data correctly.</p>
<p>With this done, the technology is ready to be applied. It could be used in new software aimed at customers, such as automatic sentiment analysis of social media posts for media intelligence. Companies might apply the engine to their own internal systems to improve their data processing capabilities.</p>
<h3>Marketers and creatives</h3>
<p>Most solutions that make use of machine learning need a customer interface. User experience designers and graphic artists contribute to creating the navigation and design of this user-friendly platform.</p>
<p>With the machine learning solution in place and working correctly, marketers brand the technology and find ways to make it attractive to customers. Machine learning-based offerings that are customer facing can fail without an effective marketing team that fully understands the capabilities of the solution.</p>
<h3>Clients</h3>
<p>Clients and customers are key to any business, and this is no different in the development of AI technology. Once clients are using the machine learning software, their feedback is vital to the development of the technology. Any shortcomings or errors they experience can help the developers improve the solution.</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-vital-role-of-humans-in-machine-learning/">The vital role of humans in machine learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-vital-role-of-humans-in-machine-learning/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>How Artificial Intelligence Is Changing Storytelling</title>
		<link>https://www.aiuniverse.xyz/how-artificial-intelligence-is-changing-storytelling/</link>
					<comments>https://www.aiuniverse.xyz/how-artificial-intelligence-is-changing-storytelling/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 13 Jul 2017 12:04:20 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[future technology]]></category>
		<category><![CDATA[intelligent devices]]></category>
		<category><![CDATA[IT development]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Microsoft technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=36</guid>

					<description><![CDATA[<p>Source &#8211; huffingtonpost.com Artificial Intelligence or AI can create dynamic content. Let’s apply best use cases to our work as storytellers. At this year’s Wimbledon Tennis Tournament, for <a class="read-more-link" href="https://www.aiuniverse.xyz/how-artificial-intelligence-is-changing-storytelling/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-artificial-intelligence-is-changing-storytelling/">How Artificial Intelligence Is Changing Storytelling</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211;<strong> huffingtonpost.com</strong></p>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>Artificial Intelligence or AI can create dynamic content. Let’s apply best use cases to our work as storytellers.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>At this year’s Wimbledon Tennis Tournament, for example, IBM’s artificial intelligence platform, Watson, had a major editorial role — analyzing and curating the best moments and data points from the matches, producing “Cognitive Highlight” videos, tagging relevant players and themes, and sharing the content with Wimbledon’s global fans.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>Intel just announced a collaboration with the International Olympic Committee (IOC) that will bring VR, 360 replay technology, drones and AI to future Olympic experiences. In a recent press release Intel notes, “The power to choose what they want to see and how they want to experience the Olympic Games will be in the hands of the fans.”</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>In the context of development, future technology will change the way we interact with global communities. Researchers at Microsoft are experimenting with a new class of machine-learning software and tools to embed AI onto tiny intelligent devices. These “edge devices” don’t depend on internet connectivity, reduce bandwidth constraints and computational complexity, and limit memory requirements yet maintain accuracy, speed, and security — all of which can have a profound effect on the development landscape. Specific projects focus on small farmers in poor and developing countries, and on precision wind measurement and prediction.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>Microsoft’s technology could help push the smarts to small cheap devices that can function in rural communities and places that are not connected to the cloud. These innovations could also make “the Internet of Things devices cheaper, making it easier to deploy them in developing countries,” according to a leading Microsoft researcher.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>But the fact is, the non-western setting is currently the greatest challenge for AR/VR platforms. Wil Monte, founder and Director of Millipede, one of our SecondMuse collaborators, says VR/AR platforms are currently completely hardware reliant and, being a new technology, often require a specification level that is cost-prohibitive to many.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>Monte says that labs like Microsoft pushing the processing capability of machine learning while shrinking the hardware requirements will soon make these technologies much more feasible in a non-western or developing setting. He says development agencies should be empowered to push, optimise and democratise the technology so it has as many use cases as possible, enabling storytellers to deploy much-needed content to more people in different settings.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>“From our experience in Tonga, I learned that while the delivery of content via AR/VR is especially compelling, the infrastructure restraints means that we need to ‘hack’ the normal deployment and distribution strategies to enable the tech to have the furthest reach. With Millipede’s lens applied, this would be immersive and game-based storytelling content, initially delivered on touch devices but also reinforced through a physical board or card game to enable as much participation in the story as possible,” Monte says.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>According to Ali Khoshgozaran, co-founder and CEO of Tilofy, an AI-powered trend forecasting company based in Los Angeles, content creation is one of the most exciting segments where technology can work hand in hand with human creativity to bring more data-driven, factual and interactive context to a story. At Tilofy, for example, they automatically generate insights and context behind all their machine-generated trend forecasts. “When it comes to accessing knowledge and information, issues of digital divide, low literacy, low internet penetration rate and poor connectivity still affect hundreds of millions of people living in rural and underdeveloped communities all around the world,” Khoshgozaran says.</p>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>“This presents another great opportunity for technology to bridge the gap and bring the world closer. Microsoft’s use of AI in Skype’s real-time translator service has allowed people from the furthest corners of the world to connect — even without understanding each other’s native language — using a cellphone or a landline. Similarly, Google’s widely popular translate service has opened a wealth of content originally created in one language to many others. Due to its constant improvements in quality and number of languages covered, Google Translate might soon enhance or replace human-centric efforts like project Lingua by auto-translating trending news at scale,” Khoshgozaran says.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>Furthermore, technologies like Google Tango and Apple ARKit can provide new opportunities, says Ali Fardinpour, research scientist in learning and assessment via augmented/virtual reality at CingleVue International in Australia. “The opportunity to bring iconic characters out of literature and history and onto every kid’s mobile phone or tablet, and educate them on important issues and matters in life, can be one of the benefits of augmented reality storytelling.”</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>Fardinpour says this kind of technology can compensate for absent or misleading mainstream media coverage by educating kids and even adults about current development projects. “I am sure there are a lot of amazing young storytellers who would love the opportunity to create their own stories to tell to inspire their communities. And this is where AR/AI can play an important role.”</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>A profound view of the future of storytellers comes from Tash Tan, co-founder of the Sydney-based digital company S1T2. Tan is leading one of our immersive storytelling projects in the South Pacific, called LAUNCH Legends, aimed at addressing issues of healthy eating and nutrition through the use of emerging, interactive technologies. “As storytellers it is important to consider that perhaps we are one step closer to creating a truly dynamic story arc with artificial intelligence. This means that stories won’t be predetermined, pre-authored or curated; instead they will be emergent, dynamically generated with every action or consequence,” Tan says. “If we can create a world that is intimate enough, and subsequently immersive enough, we can perhaps teach children through the best protagonist of all — themselves.”</p>
</div>
</div>
<p>The post <a href="https://www.aiuniverse.xyz/how-artificial-intelligence-is-changing-storytelling/">How Artificial Intelligence Is Changing Storytelling</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-artificial-intelligence-is-changing-storytelling/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
	</channel>
</rss>
