<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>smartphone Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/smartphone/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/smartphone/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 13 Mar 2021 06:58:35 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>Using Artificial Intelligence to Generate 3D Holograms in Real-Time on a Smartphone</title>
		<link>https://www.aiuniverse.xyz/using-artificial-intelligence-to-generate-3d-holograms-in-real-time-on-a-smartphone/</link>
					<comments>https://www.aiuniverse.xyz/using-artificial-intelligence-to-generate-3d-holograms-in-real-time-on-a-smartphone/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 13 Mar 2021 06:58:32 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[3D]]></category>
		<category><![CDATA[generate]]></category>
		<category><![CDATA[Holograms]]></category>
		<category><![CDATA[real-time]]></category>
		<category><![CDATA[smartphone]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13469</guid>

					<description><![CDATA[<p>Source &#8211; https://scitechdaily.com/ A new method called tensor holography could enable the creation of holograms for virtual reality, 3D printing, medical imaging, and more — and it can run on a smartphone. Despite years of hype, virtual reality headsets have yet to topple TV or computer screens as the go-to devices for video viewing. One <a class="read-more-link" href="https://www.aiuniverse.xyz/using-artificial-intelligence-to-generate-3d-holograms-in-real-time-on-a-smartphone/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/using-artificial-intelligence-to-generate-3d-holograms-in-real-time-on-a-smartphone/">Using Artificial Intelligence to Generate 3D Holograms in Real-Time on a Smartphone</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://scitechdaily.com/</p>



<p><strong>A new method called tensor holography could enable the creation of holograms for virtual reality, 3D printing, medical imaging, and more — and it can run on a smartphone.</strong></p>



<p>Despite years of hype, virtual reality headsets have yet to topple TV or computer screens as the go-to devices for video viewing. One reason: VR can make users feel sick. Nausea and eye strain can result because VR creates an illusion of 3D viewing although the user is in fact staring at a fixed-distance 2D display. The solution for better 3D visualization could lie in a 60-year-old technology remade for the digital world: holograms.</p>



<p>Holograms deliver an exceptional representation of the 3D world around us. Plus, they’re beautiful. (Go ahead — check out the holographic dove on your Visa card.) Holograms offer a shifting perspective based on the viewer’s position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.</p>



<p>Researchers have long sought to make computer-generated holograms, but the process has traditionally required a supercomputer to churn through physics simulations, which is time-consuming and can yield less-than-photorealistic results. Now, MIT researchers have developed a new way to produce holograms almost instantly — and the deep learning-based method is so efficient that it can run on a laptop in the blink of an eye, the researchers say.</p>



<p>“People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations,” says Liang Shi, the study’s lead author and a PhD student in MIT’s Department of Electrical Engineering and Computer Science (EECS). “It’s often been said that commercially available holographic displays will be around in 10 years, yet this statement has been around for decades.”</p>



<p>Shi believes the new approach, which the team calls “tensor holography,” will finally bring that elusive 10-year goal within reach. The advance could fuel a spillover of holography into fields like VR and 3D printing.</p>



<p>Shi worked on the study, published on March 10, 2021, in&nbsp;<em>Nature</em>, with his advisor and co-author Wojciech Matusik. Other co-authors include Beichen Li of EECS and the Computer Science and Artificial Intelligence Laboratory at MIT, as well as former MIT researchers Changil Kim (now at Facebook) and Petr Kellnhofer (now at Stanford University).</p>



<h4 class="wp-block-heading">The quest for better 3D</h4>



<p>A typical lens-based photograph encodes the brightness of each light wave — a photo can faithfully reproduce a scene’s colors, but it ultimately yields a flat image.</p>



<p>In contrast, a hologram encodes both the brightness and phase of each light wave. That combination delivers a truer depiction of a scene’s parallax and depth. So, while a photograph of Monet’s “Water Lilies” can highlight the paintings’ color palette, a hologram can bring the work to life, rendering the unique 3D texture of each brush stroke. But despite their realism, holograms are a challenge to make and share.</p>



<p>First developed in the mid-1900s, early holograms were recorded optically. That required splitting a laser beam, with half the beam used to illuminate the subject and the other half used as a reference for the light waves’ phase. This reference generates a hologram’s unique sense of depth. The resulting images were static, so they couldn’t capture motion. And they were hard copy only, making them difficult to reproduce and share.</p>
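<p>The optical recording described above can be illustrated numerically: the film records only the intensity of the summed object and reference waves, and it is the interference fringes in that intensity that encode the object wave’s phase. Below is a minimal one-dimensional numpy sketch; the 633 nm wavelength, beam angles and amplitudes are illustrative assumptions, not values from the research.</p>

```python
import numpy as np

# 1 mm strip of "film", sampled at 1000 points (illustrative values).
x = np.linspace(0.0, 1e-3, 1000)
wavelength = 633e-9                 # HeNe laser line (assumed)
k = 2 * np.pi / wavelength

# Reference beam: plane wave arriving at 5 degrees.
reference = np.exp(1j * k * x * np.sin(np.deg2rad(5.0)))

# Object beam: weaker wave from another direction, with its own phase offset.
obj = 0.5 * np.exp(1j * (k * x * np.sin(np.deg2rad(-3.0)) + 1.0))

# The film records only intensity, but the fringes of |R + O|^2
# encode the object's phase relative to the reference -- the hologram.
intensity = np.abs(reference + obj) ** 2
```

<p>The fringes oscillate between roughly (1 − 0.5)² and (1 + 0.5)², with a spacing set by the angle between the two beams; re-illuminating the developed fringes with the reference beam alone reconstructs the object wave.</p>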



<p>Computer-generated holography sidesteps these challenges by simulating the optical setup. But the process can be a computational slog. “Because each point in the scene has a different depth, you can’t apply the same operations for all of them,” says Shi. “That increases the complexity significantly.” Directing a clustered supercomputer to run these physics-based simulations could take seconds or minutes for a single holographic image. Plus, existing algorithms don’t model occlusion with photorealistic precision. So Shi’s team took a different approach: letting the computer teach physics to itself.</p>



<p>They used deep learning to accelerate computer-generated holography, allowing for real-time hologram generation. The team designed a convolutional neural network — a processing technique that uses a chain of trainable tensors to roughly mimic how humans process visual information. Training a neural network typically requires a large, high-quality dataset, which didn’t previously exist for 3D holograms.</p>
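<p>At its core, such a network maps a colour-plus-depth (RGB-D) image to a hologram through stacked trainable convolutions. The numpy sketch below shows a single convolutional layer of that shape. It is a toy stand-in, not the actual tensor-holography architecture: the 3×3 kernel, the channel counts, and the random weights are all illustrative assumptions.</p>

```python
import numpy as np

def conv2d(x, w, b):
    """Naive 'same'-padded 2D convolution: x is (H, W, Cin), w is (k, k, Cin, Cout)."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    H, W, _ = x.shape
    out = np.empty((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]              # (k, k, Cin) window
            out[i, j] = np.tensordot(patch, w, axes=3) + b
    return out

rng = np.random.default_rng(0)
rgbd = rng.random((8, 8, 4))                   # toy RGB-D input (3 colour + 1 depth)
w = rng.standard_normal((3, 3, 4, 2)) * 0.1    # one layer, 2 output channels
b = np.zeros(2)

feat = np.maximum(conv2d(rgbd, w, b), 0.0)     # ReLU activation
amplitude = feat[..., 0]                       # per-pixel hologram amplitude
phase = (feat[..., 1] % 1.0) * 2 * np.pi       # per-pixel phase, wrapped to [0, 2*pi)
```

<p>A real network stacks many such layers and learns the weights from image–hologram pairs; here the weights are random, so the output is only shape-correct, not a valid hologram.</p>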



<p>The team built a custom database of 4,000 pairs of computer-generated images. Each pair matched a picture — including color and depth information for each pixel — with its corresponding hologram. To create the holograms in the new database, the researchers used scenes with complex and variable shapes and colors, with the depth of pixels distributed evenly from the background to the foreground, and with a new set of physics-based calculations to handle occlusion. That approach resulted in photorealistic training data. Next, the algorithm got to work.</p>



<p>By learning from each image pair, the tensor network tweaked the parameters of its own calculations, successively enhancing its ability to create holograms. The fully optimized network operated orders of magnitude faster than physics-based calculations. That efficiency surprised even the team.</p>



<p>“We are amazed at how well it performs,” says Matusik. In mere milliseconds, tensor holography can craft holograms from images with depth information — which is provided by typical computer-generated images and can be calculated from a multicamera setup or LiDAR sensor (both are standard on some new smartphones). This advance paves the way for real-time 3D holography. What’s more, the compact tensor network requires less than 1 MB of memory. “It’s negligible, considering the tens and hundreds of gigabytes available on the latest cell phone,” he says.</p>
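<p>The memory figure is easy to sanity-check: a network’s size is roughly its parameter count times the bytes per parameter. A back-of-the-envelope sketch (the 200,000-parameter count is a hypothetical example, not the paper’s figure):</p>

```python
# Rough model-size arithmetic: parameters x bytes per parameter.
params = 200_000                 # hypothetical compact CNN
bytes_per_param = 4              # float32 weights
size_mb = params * bytes_per_param / 1e6
print(size_mb)                   # 0.8 -> comfortably under 1 MB
```
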



<p>The research “shows that true 3D holographic displays are practical with only moderate computational requirements,” says Joel Kollin, a principal optical architect at Microsoft who was not involved with the research. He adds that “this paper shows marked improvement in image quality over previous work,” which will “add realism and comfort for the viewer.” Kollin also hints at the possibility that holographic displays like this could even be customized to a viewer’s ophthalmic prescription. “Holographic displays can correct for aberrations in the eye. This makes it possible for a display to produce an image sharper than what the user could see with contacts or glasses, which only correct for low order aberrations like focus and astigmatism.”</p>



<h4 class="wp-block-heading">“A considerable leap”</h4>



<p>Real-time 3D holography would enhance a slew of systems, from VR to 3D printing. The team says the new system could help immerse VR viewers in more realistic scenery, while eliminating eye strain and other side effects of long-term VR use. The technology could be easily deployed on displays that modulate the phase of light waves. Currently, most affordable consumer-grade displays modulate only brightness, though the cost of phase-modulating displays would fall if widely adopted.</p>



<p>Three-dimensional holography could also boost the development of volumetric 3D printing, the researchers say. This technology could prove faster and more precise than traditional layer-by-layer 3D printing, since volumetric 3D printing allows for the simultaneous projection of the entire 3D pattern. Other applications include microscopy, visualization of medical data, and the design of surfaces with unique optical properties.</p>



<p>“It’s a considerable leap that could completely change people’s attitudes toward holography,” says Matusik. “We feel like neural networks were born for this task.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/using-artificial-intelligence-to-generate-3d-holograms-in-real-time-on-a-smartphone/">Using Artificial Intelligence to Generate 3D Holograms in Real-Time on a Smartphone</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/using-artificial-intelligence-to-generate-3d-holograms-in-real-time-on-a-smartphone/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine learning predicts schizophrenia relapses using smartphone data</title>
		<link>https://www.aiuniverse.xyz/machine-learning-predicts-schizophrenia-relapses-using-smartphone-data/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-predicts-schizophrenia-relapses-using-smartphone-data/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 16 Oct 2020 07:02:18 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Behavior]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[neural networks]]></category>
		<category><![CDATA[schizophrenia]]></category>
		<category><![CDATA[smartphone]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12269</guid>

					<description><![CDATA[<p>Source: newatlas.com A pair of newly published studies are demonstrating how passive smartphone data can be used to effectively predict relapse episodes in schizophrenia patients. The research used machine learning to analyze behavioral data and predict schizophrenic relapses up to one month before they occurred. The data used in both new papers was gathered from <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-predicts-schizophrenia-relapses-using-smartphone-data/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-predicts-schizophrenia-relapses-using-smartphone-data/">Machine learning predicts schizophrenia relapses using smartphone data</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: newatlas.com</p>



<p>A pair of newly published studies are demonstrating how passive smartphone data can be used to effectively predict relapse episodes in schizophrenia patients. The research used machine learning to analyze behavioral data and predict schizophrenic relapses up to one month before they occurred.</p>



<p>The data used in both new papers was gathered from a cohort of 60 subjects with schizophrenia. Passive smartphone data, such as accelerometer readings and phone-call metadata (e.g., call frequency and duration), was captured for the entire cohort. Eighteen of the subjects suffered a schizophrenic relapse during the course of the study.</p>



<p>A type of machine learning, dubbed encoder-decoder neural networks, was then used to analyze the mass of data, looking for anomalous behavioral patterns within 30 days of a major relapse. The results revealed that a 108 percent increase in behavior anomalies could be detected in the month leading up to a relapse, suggesting this kind of system may be useful for detecting and treating patients before a major schizophrenic episode arises.</p>
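<p>The anomaly-detection idea can be sketched with a linear encoder-decoder: compress normal behaviour into a low-dimensional code, reconstruct it, and flag days whose reconstruction error sits far above the baseline. Everything below is an illustrative assumption — the four synthetic behaviour features, the two latent “routines,” and the 3-sigma threshold are stand-ins, not the studies’ actual model or data.</p>

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily behaviour features (columns): normal behaviour is assumed
# to vary along two latent routines, mimicking structure a model could learn.
latent = rng.standard_normal((60, 2))
mixing = np.array([[1.0, 1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 1.0]])
X = latent @ mixing + 0.05 * rng.standard_normal((60, 4))

# Linear stand-in for an encoder-decoder: encode = project onto the top-2
# principal components; decode = project back; score = reconstruction error.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
components = Vt[:2]                              # (2, 4) encoder/decoder weights

def anomaly_score(day):
    z = day - mu
    code = components @ z                        # encode
    return float(np.linalg.norm(z - components.T @ code))  # decode + residual

baseline = np.array([anomaly_score(day) for day in X])
threshold = baseline.mean() + 3 * baseline.std()

# A day whose features break the learned routine structure scores far above threshold.
relapse_day = mu + 5.0 * np.array([1.0, -1.0, 0.0, 0.0])
print(anomaly_score(relapse_day) > threshold)    # True
```

<p>The real studies used nonlinear encoder-decoder networks over richer passive-sensing features, but the detection logic — reconstruct typical routines and flag large deviations — is the same.</p>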



<p>“We tried to create an approach where we could tell a clinician: not only is this participant experiencing unusual behavior, these are the specific things that are different in this particular patient,” says Dan Adler, a researcher from Cornell Tech working on the project. “If we can predict when someone’s symptoms are going to change before relapse, we can get them early treatment and possibly prevent an inpatient visit.”</p>



<p>As well as predicting relapses ahead of time, the system could effectively predict patients&#8217; self-assessments of their conditions. And a more granular analysis of the data revealed fine-grained symptom changes could also be predicted.</p>



<p>Different kinds of behavioral patterns, as tracked through passive smartphone data, could be associated with specific symptom characteristics. One of the papers, published in the journal&nbsp;<em>Scientific Reports</em>, strikingly presents a hypothetical scenario whereby the system itself could conceivably intervene in real-time to help guide subjects toward behavioral patterns that prevent a looming relapse.</p>



<p>“For example, if there is an unusual change in the ultradian rhythm of environment noise for a couple of hours, the system can prompt the patient to move to an environment that has a lower and more stable level of ambient noise to prevent the noise from affecting the patients’ cognitive performance,” the researchers write. “If the system notices that the patient’s phone usage in certain periods, for example in evening, has a very different pattern than in other periods (morning and afternoon), the system can intervene to change the patient’s phone usage pattern, delaying the arrival of phone notifications for instance, to avoid an increase in stress.”</p>



<p>Tanzeem Choudhury, from Cornell Tech and a co-author on both of the new papers, suggests the system developed could be adapted for many mental health conditions. Even major depressive episodes, Choudhury suggests, could be predicted ahead of time by passively tracking extreme behavioral changes.</p>



<p>“By focusing on changes in behavioral routines and misalignment with underlying biological rhythms, we expect our approach to generate clinically actionable insights that generalize across a diverse demographic of users,” says Choudhury.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-predicts-schizophrenia-relapses-using-smartphone-data/">Machine learning predicts schizophrenia relapses using smartphone data</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-predicts-schizophrenia-relapses-using-smartphone-data/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Use of Big Data – a missed opportunity</title>
		<link>https://www.aiuniverse.xyz/use-of-big-data-a-missed-opportunity/</link>
					<comments>https://www.aiuniverse.xyz/use-of-big-data-a-missed-opportunity/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 13 Oct 2020 12:02:33 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[data analytics]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[IT or tech companies]]></category>
		<category><![CDATA[smartphone]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12177</guid>

					<description><![CDATA[<p>Source: tribune.com.pk ISLAMABAD: Everything is data, even the reading of this article (online) through computer, tablet or smartphone. By definition, data is a set of qualitative or quantitative variables – it can be structured or unstructured, machine readable or not, digital or analogue, personal or not. Traditional analysis tools and software can be used to <a class="read-more-link" href="https://www.aiuniverse.xyz/use-of-big-data-a-missed-opportunity/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/use-of-big-data-a-missed-opportunity/">Use of Big Data – a missed opportunity</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: tribune.com.pk</p>



<p><strong>ISLAMABAD:</strong></p>



<p>Everything is data, even the reading of this article (online) through computer, tablet or smartphone.</p>



<p>By definition, data is a set of qualitative or quantitative variables – it can be structured or unstructured, machine readable or not, digital or analogue, personal or not. Traditional analysis tools and software can be used to analyse and “crunch” data. We all are creating heaps of data every day, through our online, and even offline, activities.</p>



<p>Big Data is a term used to describe a set of tools, methodologies and techniques for finding value and insights in raw, complex datasets. In the context of Pakistan, I will touch upon four aspects of Big Data &#8211; recognition, value, use and ownership.</p>



<p>In the case of recognition, while there are some private sector companies banking on the collection, dissemination and analysis of data, a visible absence of public sector focus is notable. Even the relevant authorities, such as the Ministry of IT and Telecom, are focused more on regulatory aspects (and mainly on personal data at that) rather than recognising Big Data as an asset with huge potential.</p>



<p>In fact, the private sector players are mostly foreign companies rather than the national ones.</p>



<p>The issue of value follows from the lack of recognition, thus ignoring the potential. According to a report by MarketsandMarkets, the global Big Data market size will grow from $138.9 billion in 2020 to $229.4 billion by 2025, at a compound annual growth rate (CAGR) of 10.6%.</p>
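<p>The quoted growth rate checks out arithmetically; a compound annual growth rate is simply the per-year multiplier implied by the start and end values:</p>

```python
# CAGR = (end / start) ** (1 / years) - 1
start, end, years = 138.9, 229.4, 5       # USD billions, 2020 -> 2025
cagr = (end / start) ** (1 / years) - 1
print(round(cagr * 100, 1))               # 10.6, matching the report
```
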



<p>Bigger and more connected countries will generate more data, resulting in more value for the Big Data. The benefit of that value, however, depends on who gets hold of the Big Data and puts it to use.</p>



<p>Pakistan, being the fifth most populous country, with quite significant IT and connectivity infrastructure, is a goldmine of data.</p>



<p>The use of data, particularly Big Data, is a huge missed opportunity, though it is not too late. Using Big Data and analytics is probably one of the best governance tools available at present, particularly for resource-constrained countries like Pakistan.</p>



<p>It doesn’t take much to analyse Big Data and develop decision support systems, transparency tools and service delivery mechanisms. Although not exactly in the realm of data, blockchain technologies are also important for ensuring transparency and traceability in governance systems, mechanisms and tools.</p>



<p>Ownership of data is probably the biggest challenge, not only for Pakistan. It is still a fluid area that needs development of good practices, norms and regulatory frameworks. There are, however, a few things that many countries are doing such as data domicile and data privacy requirements. The European Union’s General Data Protection Regulation is one such example.</p>



<h4 class="wp-block-heading">Data protection</h4>



<p>In Pakistan, there is a legislative initiative in progress titled the Personal Data Protection Bill 2020. This, however, is focused on personal data protection and is not encompassing enough to take care of the troves of data gathered by various operators, including foreign companies, in Pakistan.</p>



<p>Moreover, the aforementioned draft bill focuses more on the misuse of personal data but is silent on who may use it and to what extent, thus allowing the extraction of value that may have been someone else’s.</p>



<p>The draft bill also has loose data retention and data domicile requirements, thus giving carte blanche to data users.</p>



<p>Data is being generated every second, captured and put to various uses by one party or another. As consumers, we are willingly, yet unknowingly, giving data away for free in the form of bits and bytes, but once it is put together as Big Data, those bits are converted into billions of dollars in direct and indirect value.</p>



<p>Who should take care of this? Regardless, someone out there is certainly doing it, if not the state of Pakistan.</p>



<p>Big Data is an asset and a form of capital. It has value only if that value is discovered. Think of the world’s largest companies &#8211; often termed IT or tech companies &#8211; such as Google, Facebook, Amazon and Alibaba. They are mining data, Big Data precisely.</p>



<p>Data keeps adding value the more it is used. There is, however, a catch: you cannot recover data once it goes out of your hands. Every passing moment adds to the quantity and value of data being generated in Pakistan, but it also takes the ownership and use further away.</p>



<p>The government of Pakistan should provide an enabling environment for the private sector to benefit from the Big Data, develop analytics and related tools to be used in businesses and governance, but most importantly develop a regulatory framework to deal with the data ownership issues. A reactive response in this area is not even an option.</p>
<p>The post <a href="https://www.aiuniverse.xyz/use-of-big-data-a-missed-opportunity/">Use of Big Data – a missed opportunity</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/use-of-big-data-a-missed-opportunity/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>HOW ARTIFICIAL INTELLIGENCE IS EMPOWERING THE EDUCATION SECTOR?</title>
		<link>https://www.aiuniverse.xyz/how-artificial-intelligence-is-empowering-the-education-sector/</link>
					<comments>https://www.aiuniverse.xyz/how-artificial-intelligence-is-empowering-the-education-sector/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 13 Oct 2020 10:08:10 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Biometric Verification]]></category>
		<category><![CDATA[Education]]></category>
		<category><![CDATA[Google Assistant]]></category>
		<category><![CDATA[smartphone]]></category>
		<category><![CDATA[voice assistants]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12157</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net Artificial Intelligence is Improving Education Sector like never before We’re in 2020 and long past the days back when we used to stand outside the school library to get the opportunity to copy two or three Encyclopedia pages, to use as a kind of reference for our school projects. With this age having <a class="read-more-link" href="https://www.aiuniverse.xyz/how-artificial-intelligence-is-empowering-the-education-sector/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-artificial-intelligence-is-empowering-the-education-sector/">HOW ARTIFICIAL INTELLIGENCE IS EMPOWERING THE EDUCATION SECTOR?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<h3 class="wp-block-heading">Artificial Intelligence is Improving Education Sector like never before</h3>



<p>We’re in 2020, long past the days when we used to stand outside the school library for the chance to copy two or three encyclopedia pages to use as references for our school projects.</p>



<p>With this generation having grown up with access to technology at their fingertips, the field of education has changed hugely in this digitally driven world. The Artificial Intelligence in education market was worth US$2.022 billion in 2019.</p>



<p>The worldwide AI in education market is anticipated to be valued at USD 3.68 billion by 2023, at a CAGR of 47% over the forecast period from 2018 to 2023. Artificial intelligence has already infiltrated our lives on an individual level. By 2030, India will have the largest number of young people in the world, a size that will be an advantage only if these youngsters are sufficiently skilled to join the workforce. The recently launched SDG Index 2019-2020 by Niti Aayog assigned a composite score of 58 to India under the SDG on Quality Education, with just 12 states/UTs scoring more than 64.</p>



<p>The current government expenditure on education is under 3% of GDP, and the pupil-teacher ratio for primary school stands at 24:1, lower than that of Brazil and China. Further, with a rapidly expanding population and shrinking resources, it will not be possible to meet the demand for teachers.</p>



<p>As per a study by Creative Strategies, around 97% of smartphone users utilize AI-driven voice assistants like Siri and the Google Assistant.</p>



<p>Artificial intelligence draws heavily on deep learning, machine learning, and advanced analytics, particularly for tracking a student’s learning cycle &#8211; for example, an individual’s marks and pace of progress. Likewise, these solutions offer a personalized learning experience and top-notch training, helping students build on prior knowledge and learning. Let’s look at some of the ways AI is changing the education sector.</p>



<h4 class="wp-block-heading">Voice Assistants</h4>



<p>One more AI segment being productively utilized by educators is voice assistants, including Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and so on. These voice assistants let students converse with educational materials without involving the educator. They can be used at home and in non-educational environments to encourage interaction with educational material or to access additional learning help.</p>



<p>The aim behind these voice assistants is to respond to common questions about campus needs and to be tailored to each student’s specific timetable and courses. This lessens the need for internal support and cuts the cost of printing school handbooks, which are only temporarily used.</p>



<p>These voice assistant systems break the monotony and offer an energizing prospect for students. Deployment of this technology is anticipated to rise in the coming years.</p>



<h4 class="wp-block-heading">Biometric Verification</h4>



<p>Mundane support tasks of the teacher – attendance and other administrative tasks – can be taken over by AI. For instance, biometric validation for students can be introduced and integrated with UDISE+ (Unified District Information System for Education) – an application that is one of the biggest Management Information Systems on School Education.</p>



<p>The biometric attendance records could likewise be used as a proxy for the comprehensiveness of education in a district/state/block and can be easily tracked. This also helps monitor national indicators such as the participation rate of youth and adults and the proportion of males and females enrolled in higher, technical and vocational education. Further, it can help evaluate the quality of education in the school.</p>



<h4 class="wp-block-heading">Personalized Learning</h4>



<p>Artificial Intelligence is being used to personalize learning for every student. Through hyper-personalization enabled by machine learning, AI is used to design a customized learning profile for each individual student and to tailor-make training materials, taking into account the student’s preferred method of learning, capacity and experience on an individual basis.</p>



<p>Various AI-powered applications and frameworks help students get instant, customized responses and get their questions answered by their educators. Artificial intelligence is additionally playing a role in augmenting tutoring and in designing personal conversational education assistants that can offer help in education.</p>



<h4 class="wp-block-heading">Automated Grading</h4>



<p>With the Draft National Education Policy 2019 prioritizing online learning in its plan, machine learning techniques such as Natural Language Processing could be used for automated grading of assessments at scale – both objective and subjective questions – on platforms such as DIKSHA, E-PATHSHALA and SWAYAM (Study Webs of Active Learning for Young Aspiring Minds). Automated creation of content is another field where AI can intervene – given the enormous sources of data on the web, NLP methods can use Automatic Text Summarization to create crisp content and publish it on these e-learning sites.</p>
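<p>A toy extractive summarizer conveys the Automatic Text Summarization idea: score each sentence by the average document-wide frequency of its words and keep the top scorers. This is a minimal frequency-based sketch, not the NLP pipeline any of the named platforms actually uses, and the example document is invented:</p>

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Extractive summary: keep the sentences whose words are most frequent overall."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))
    def score(sentence):
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)
    keep = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    # Emit the kept sentences in their original order.
    return ' '.join(s for s in sentences if s in keep)

doc = ("Neural networks grade essays quickly. Networks learn from graded "
       "examples. The weather was pleasant that day.")
summary = summarize(doc, n_sentences=1)
```

<p>Frequency scoring favours sentences about the document’s recurring topic (here, networks) over off-topic ones; production systems replace this heuristic with learned models, but the extract-and-rank structure is the same.</p>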



<p>The standard unified curriculum created by ML-based techniques will be in accordance with the broadly defined learning outcomes (MHRD has designed a 70-indicator matrix called the Performance Grading Index (PGI) to rank the states and UTs) and will impartially help assess indicators of the proportion of students achieving at least a minimum proficiency level.</p>



<h4 class="wp-block-heading">Conclusion</h4>



<p>We can expect Artificial Intelligence and ML to occupy an integral place in all educational experiences. Artificial intelligence has begun to demonstrate its advantages and power in a wide range of educational areas, and it remains to be seen how far the technology will enable and upgrade overall learning outcomes for all.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-artificial-intelligence-is-empowering-the-education-sector/">HOW ARTIFICIAL INTELLIGENCE IS EMPOWERING THE EDUCATION SECTOR?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-artificial-intelligence-is-empowering-the-education-sector/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Scottish Police bought a fleet of devices for smartphone data-mining</title>
		<link>https://www.aiuniverse.xyz/scottish-police-bought-a-fleet-of-devices-for-smartphone-data-mining/</link>
					<comments>https://www.aiuniverse.xyz/scottish-police-bought-a-fleet-of-devices-for-smartphone-data-mining/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 17 Jan 2020 08:10:39 +0000</pubDate>
				<category><![CDATA[Data Mining]]></category>
		<category><![CDATA[data mining]]></category>
		<category><![CDATA[devices]]></category>
		<category><![CDATA[Scottish Police]]></category>
		<category><![CDATA[smartphone]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6209</guid>

					<description><![CDATA[<p>Source: engadget.com Police in Scotland are getting ready to roll out a fleet of &#8216;cyber kiosks&#8217; that will allow them to mine device data for evidence. The kiosks &#8212; PC-sized machines &#8212; have been designed to help investigations progress faster. At the moment, devices can be taken from witnesses, victims and suspects for months at <a class="read-more-link" href="https://www.aiuniverse.xyz/scottish-police-bought-a-fleet-of-devices-for-smartphone-data-mining/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/scottish-police-bought-a-fleet-of-devices-for-smartphone-data-mining/">Scottish Police bought a fleet of devices for smartphone data-mining</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: engadget.com</p>



<p> Police in Scotland are getting ready to roll out a fleet of &#8216;cyber kiosks&#8217; that will allow them to mine device data for evidence. The kiosks &#8212; PC-sized machines &#8212; have been designed to help investigations progress faster. At the moment, devices can be taken from witnesses, victims and suspects for months at a time, even if they contain no worthwhile evidence. According to Police Scotland, the kiosks will enable officers to quickly scan a device for evidence, and if relevant information is found, the device will be sent on for further investigation. If not, it can be returned to its owner straight away. </p>



<p>The kiosks are not able to save any data &#8212; they can only display it to an investigating officer. The only information the kiosks retain is details on how they have been used &#8212; by whom and at what times. The software is also able to segregate data based on type (such as messages or pictures) and date range, to help officers more quickly find what they&#8217;re looking for. Deputy chief constable Malcolm Graham said that &#8220;By quickly identifying devices which do and do not contain evidence, we can minimise the intrusion on people&#8217;s lives and provide a better service to the public.&#8221;</p>
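<p>The segregation described above, by data type and date range, amounts to plain record filtering in software terms. A minimal, hypothetical sketch follows; the record layout and field names are invented, since the real kiosk software's data model is not public.</p>

```python
from datetime import date

# Hypothetical record layout; the actual kiosk data model is not public.
records = [
    {"kind": "message", "when": date(2020, 1, 5), "body": "see you at 6"},
    {"kind": "picture", "when": date(2019, 12, 30), "body": "IMG_0042.jpg"},
    {"kind": "message", "when": date(2019, 11, 2), "body": "running late"},
]

def segregate(records, kind, start, end):
    """Return only records of one type that fall inside a date range."""
    return [r for r in records if r["kind"] == kind and start <= r["when"] <= end]

# e.g. only messages from December 2019 through January 2020:
hits = segregate(records, "message", date(2019, 12, 1), date(2020, 1, 31))
```

<p>Narrowing by type and date range like this is what lets an officer scan quickly for relevant material without trawling through everything on the device.</p>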



<p>Police Scotland says it consulted a variety of groups and experts before commissioning the technology, and has given assurances that it will only examine a digital device where there is &#8220;a legal basis and where it is necessary, justified and proportionate to the incident or crime under investigation.&#8221; While the kiosks have the ability to bypass passwords and lockscreens, this will only be done after consultation with the police cybercrime unit. However, some critics have voiced concerns regarding data privacy and abuses of power &#8212; a growing narrative around the globe. The roll-out of the cyber kiosks &#8212; which will begin in Scotland on 20th January &#8212; comes only a few days after it emerged the FBI extracted data from a locked iPhone in the US, prompting fresh concerns over civil liberties.</p>
<p>The post <a href="https://www.aiuniverse.xyz/scottish-police-bought-a-fleet-of-devices-for-smartphone-data-mining/">Scottish Police bought a fleet of devices for smartphone data-mining</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/scottish-police-bought-a-fleet-of-devices-for-smartphone-data-mining/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine learning: can your smartphone reduce your commuting stress?</title>
		<link>https://www.aiuniverse.xyz/machine-learning-can-your-smartphone-reduce-your-commuting-stress/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-can-your-smartphone-reduce-your-commuting-stress/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 13 Oct 2018 06:49:32 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Commuting]]></category>
		<category><![CDATA[Innovation]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[smartphone]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3021</guid>

					<description><![CDATA[<p>Source- scitecheuropa.eu The University of Sussex has gathered a large set of data which they believe could be used by a machine learning system to reduce commuting stress. Some user issues with commuting are that delays or disruptions to public transport or traffic can make it difficult to quickly assess the optimum route for your journey. The dataset could <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-can-your-smartphone-reduce-your-commuting-stress/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-can-your-smartphone-reduce-your-commuting-stress/">Machine learning: can your smartphone reduce your commuting stress?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source- <a href="https://www.scitecheuropa.eu/machine-learning-commuting-stress/89819/" target="_blank" rel="noopener">scitecheuropa.eu</a></p>
<h2>The University of Sussex has gathered a large set of data which they believe could be used by a machine learning system to reduce commuting stress.</h2>
<p>Some user issues with commuting are that delays or disruptions to public transport or traffic can make it difficult to quickly assess the optimum route for your journey. The dataset could potentially be used by a machine learning system to optimise your journey and alleviate commuting stress based on various factors such as your location and the food and drink you consume on the way.</p>
<p>The report presents ‘a meta-analysis of the contributions from 19 submissions, their approaches, the software tools used, computational cost and the achieved result.’ The findings of the project will be presented in full today in Singapore.</p>
<h3>How could machine learning be used to reduce commuting stress?</h3>
<p>The researchers believe that this technology could be used for various functions to reduce commuting stress. The report said: ‘A machine-learning system can use this data set to automatically recognize modes of transportations’.</p>
<p>The functions that machine learning systems could perform include:</p>
<ul>
<li>Predicting upcoming road conditions and traffic to offer route and parking recommendations;</li>
<li>Detecting the food and drink consumption of the smartphone user; and</li>
<li>Tracking the location and journey of the user.</li>
</ul>
<h3>The dataset</h3>
<p>The researchers work at the Wearable Technologies Lab at the University of Sussex. They held a global research competition to develop machine learning techniques. The competition asked teams to compete to produce the most accurate algorithms for recognising different types of transport from fifteen sensors. The fifteen sensors measured a range of factors including movement and ambient pressure.</p>
<p>The overall task was to ‘train a recognition pipeline using the training dataset and then use this system to recognize the transportation mode from the sensor data in the testing set.’</p>
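<p>A toy version of such a recognition pipeline is sketched below, shrunk to two transport modes, synthetic accelerometer windows, and a nearest-centroid classifier in place of the competition entries' far more elaborate algorithms. All names and numbers here are illustrative, not taken from the Sussex dataset.</p>

```python
import math
import random

random.seed(0)

def window_features(samples):
    """Mean and standard deviation of sensor magnitude over one window."""
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return (mean, math.sqrt(var))

def make_window(base, jitter, n=50):
    # Synthetic stand-in for one accelerometer window; real data comes from the phone.
    return [base + random.uniform(-jitter, jitter) for _ in range(n)]

# "Training set": labelled feature vectors for two transport modes.
train = [("still", window_features(make_window(9.8, 0.1))) for _ in range(20)] + \
        [("walk", window_features(make_window(9.8, 3.0))) for _ in range(20)]

# Fit one centroid per mode, then classify new windows by nearest centroid.
centroids = {}
for mode in ("still", "walk"):
    feats = [f for m, f in train if m == mode]
    centroids[mode] = tuple(sum(x) / len(x) for x in zip(*feats))

def predict(features):
    return min(centroids, key=lambda m: math.dist(features, centroids[m]))
```

<p>The structure mirrors the task set by the competition: extract features from labelled sensor windows, train a recognizer, then apply it to unseen windows.</p>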
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-can-your-smartphone-reduce-your-commuting-stress/">Machine learning: can your smartphone reduce your commuting stress?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-can-your-smartphone-reduce-your-commuting-stress/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>7 Ways in Which Artificial Intelligence Will Change Healthcare</title>
		<link>https://www.aiuniverse.xyz/7-ways-in-which-artificial-intelligence-will-change-healthcare/</link>
					<comments>https://www.aiuniverse.xyz/7-ways-in-which-artificial-intelligence-will-change-healthcare/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 28 Aug 2018 07:32:26 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Cancer therapies]]></category>
		<category><![CDATA[Clinical Data Science]]></category>
		<category><![CDATA[Healthcare]]></category>
		<category><![CDATA[smartphone]]></category>
		<category><![CDATA[startup]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2793</guid>

					<description><![CDATA[<p>Source &#8211; techstory.in Healthcare has always been a difficult industry to tap into. Every day, new devices and procedures are launched but are not adopted readily because patients may be at risk. Yet, in the past decade, a major player has entered the healthcare industry – artificial intelligence. While this is not a singular product, <a class="read-more-link" href="https://www.aiuniverse.xyz/7-ways-in-which-artificial-intelligence-will-change-healthcare/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/7-ways-in-which-artificial-intelligence-will-change-healthcare/">7 Ways in Which Artificial Intelligence Will Change Healthcare</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; techstory.in</p>
<p>Healthcare has always been a difficult industry to tap into. Every day, new devices and procedures are launched but are not adopted readily, because patients may be put at risk. Yet, in the past decade, a major player has entered the healthcare industry – artificial intelligence.</p>
<p>While it is not a singular product, service, or startup, it is still a development sought by many. Artificial intelligence brings to the fore solutions for many shortcomings currently present in the healthcare industry.</p>
<h2>Here are 7 ways in which AI will impact healthcare:</h2>
<p><strong>Wearable Health Measures</strong></p>
<p>Almost all consumers today have access to a Fitbit, a smartwatch, or some kind of electronic tracker. While most of these wearables are used to connect to your smartphone or check your fitness activity, they can also help you track your daily health.</p>
<p>With built-in measurement features like a pedometer, heart-rate tracker, and sleep tracker, these wearables can generate round-the-clock data about your well-being and health. Collecting this data and adding it as a complement to one’s regular doctor visits can change how diagnoses are made.</p>
<p>While this data is still being collected by the wearables, artificial intelligence can determine what to make of this data. It will play a significant role in interpreting data inputs and then in effectively communicating with patients and the doctor.</p>
<p><strong>Providing </strong><strong>Multiple</strong><strong> Data Points and Solutions for Research</strong></p>
<p>While there are tons of data available in the field of medicine, it is very difficult for doctors to go through it all, interpret it, and then make a viable diagnosis. This is where artificial intelligence can take over. With its faster computing, AI can examine and correlate multiple data points and complex data sets, which will in turn help doctors interpret information more quickly and efficiently.</p>
<p>For example, a lot of cancer research depends on the study of immunotherapy. And one of the best ways to get ahead in the field of immunotherapy is to get multiple points of data in large numbers. Cancer therapies are relatively new and collecting data helps doctors determine what drug tests are working on a more populous level.</p>
<figure class="wp-block-image"><img decoding="async" src="https://internetofbusiness.com/wp-content/uploads/2018/03/4-Ways-IoT-is-Enhancing-Modern-Day-Healthcare-640x427.jpg" alt="Image result for healthcare ai" /><figcaption>Credits: Internet of Business</figcaption></figure>
<p><strong>Turning Health Records into Risk Predictors:</strong></p>
<p>As discussed before, one of the main functions of artificial intelligence is to collect data for every passing moment. So, soon the growth of data will be exponential and there will be multiple data points of a patient that health experts can work with.</p>
<p>If programmed well, these data points can be interpreted by AI to detect high-risk scenarios or life-threatening stages in a patient. Once a patient is detected to be at a high risk of contracting a certain disease or developing symptoms, AI can help doctors to get on preventive rather than curative measures.</p>
<p>Illnesses like diabetes, heart attacks, seizures, and Alzheimer’s are more preventable if a patient is identified during the high-risk stage, and artificial intelligence helps with exactly this.</p>
<p><strong>Clinical Solutions:</strong></p>
<p>Artificial intelligence will also make a huge impact in healthcare workplaces. The healthcare industry is known to be extremely labor-intensive, and the end consequence is often a lack of quality patient care.</p>
<p>AI will help streamline this. Clinical optimization means that net patient outreach improves, there is more remote engagement with patients, and patients’ health conditions are recorded more easily.</p>
<p>There are healthcare technology solutions (such as Interpreta) that use AI and prove beneficial both for healthcare centers and consumers by creating prioritization software, in which high-risk patients and their appointments are prioritized accordingly. This is just one lens through which clinical optimization and artificial intelligence can be viewed.</p>
<figure class="wp-block-image"><img decoding="async" src="https://healthtechmagazine.net/sites/healthtechmagazine.net/files/articles/%5Bcdw_tech_site%3Afield_site_shortname%5D/201806/healthcareAI.jpg" alt="Image result for healthcare ai" /><figcaption>Credits: HealthTech Magazine</figcaption></figure>
<p><strong>Predicting Patient Care:</strong></p>
<p>Doctors more often than not work on human instinct rather than textbook teachings. Decisions such as how long a patient should be resuscitated, whether there is any chance of long-term improvement, and how long a patient will take to recover are all judgments based on instinct and experience.</p>
<p>AI will help take this a step further and take the guesswork out of patient care. By using previous data points, AI can predict the future course of action for patients with more certainty.</p>
<p>“…if you have an AI algorithm and lots and lots of data from many patients, it’s easier to match up what you’re seeing to long-term patterns and maybe detect subtle improvements that would impact your decisions around care,” says Brandon Westover, MD, Ph.D., Director of the MGH Clinical Data Animation Center.</p>
<p><strong>Precise Analytics for Pathological Images and Reports</strong></p>
<p>The main function of pathological labs is to know what is wrong (or not working) with patients using minimally invasive procedures. Procedures like sonography, MRI scans, CT scans, blood tests, and urine tests are all examples of pathological tests. Conducting these tests is just the first part. Analyzing and implementing a solution according to the test results is another.</p>
<p>A lot of the time, the data in these scans and reports can be misread due to human error, resulting in the wrong medication or treatment.</p>
<p>Artificial intelligence and analytics can look at these scans and break them down pixel by pixel. Such software can identify objects, dimensions, depths, and sizes that might otherwise be misinterpreted by the human eye.</p>
<p>The software can also improve efficiency and productivity by identifying features of interest in these scans, preventing unneeded waste of time on areas that are not of particular concern.</p>
<h3>
<strong>Artificial Intelligence in Medical Devices</strong></h3>
<figure class="wp-block-image"><img decoding="async" src="https://cdn.vox-cdn.com/thumbor/QhXjIvPkTZ0GLRR-7loqsTiipUY=/0x0:920x613/1200x800/filters:focal(387x234:533x380)/cdn.vox-cdn.com/uploads/chorus_image/image/60195103/HIMSS_OMB_image.0.png" alt="Image result for healthcare ai" /><figcaption>Credits: The Verge</figcaption></figure>
<p>AI can not only be included in wearables and computer software but can be a part of all our day-to-day devices including the medical ones. These devices can be programmed with respect to each patient and made intelligent as time progresses.</p>
<p>Some healthcare devices that can be made smarter by incorporating AI are heart rate monitors, IVs, ICU monitors, etc. These devices can be programmed to regularly dispense liquids into patients’ systems, trigger alarms when abnormalities are detected, and recognize external tampering such as mistaken overdoses and abnormal drug measurements.</p>
<p>“When we’re talking about integrating disparate data from across the healthcare system, integrating it, and generating an alert that would alert an ICU doctor to intervene early on – the aggregation of that data is not something that a human can do very well,” said Mark Michalski, MD, Executive Director of the MGH &amp; BWH Center for Clinical Data Science.</p>
<p>The post <a href="https://www.aiuniverse.xyz/7-ways-in-which-artificial-intelligence-will-change-healthcare/">7 Ways in Which Artificial Intelligence Will Change Healthcare</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/7-ways-in-which-artificial-intelligence-will-change-healthcare/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>New Deep Learning Strategy Could Enhance Computer Vision</title>
		<link>https://www.aiuniverse.xyz/new-deep-learning-strategy-could-enhance-computer-vision/</link>
					<comments>https://www.aiuniverse.xyz/new-deep-learning-strategy-could-enhance-computer-vision/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 27 Jul 2018 06:00:21 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[deep learning algorithms]]></category>
		<category><![CDATA[smartphone]]></category>
		<category><![CDATA[software applications]]></category>
		<category><![CDATA[visual content]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2668</guid>

					<description><![CDATA[<p>Source &#8211; edgylabs.com A deep learning system takes textual hints from the context of images to describe them without the need for prior human annotations. Since its humble beginnings at the turn of the millennium, deep learning, as both a scientific discipline and an industry, has come a long way. From smartphone assistants to pattern recognition software, <a class="read-more-link" href="https://www.aiuniverse.xyz/new-deep-learning-strategy-could-enhance-computer-vision/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/new-deep-learning-strategy-could-enhance-computer-vision/">New Deep Learning Strategy Could Enhance Computer Vision</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; edgylabs.com</p>
<p><i>A deep learning system takes textual hints from the context of images to describe them without the need for prior human annotations.</i></p>
<p>Since its humble beginnings at the turn of the millennium,<b> deep learning</b>, as both a scientific discipline and an industry, has come a long way.</p>
<p>From smartphone assistants to pattern recognition software, security solutions, and other applications, deep learning is becoming a multi-billion dollar business poised for great growth over the next few years.</p>
<p>However, for <b>deep learning agents</b> to reach their full potential, they have to “learn” how to learn on their own.</p>
<p>Herein lies the whole difference between <b>supervised </b>and <b>unsupervised deep learning.</b></p>
<h2>Self-Supervised Deep Learning</h2>
<p>The power and appeal of deep learning lies in its ability to recognize different types of patterns: faces, voices, objects, images, and codes.</p>
<p>AI software doesn’t understand what these things really are; all it sees is digital data, and it is very good at processing it.</p>
<p>The great <b>computer vision</b> capability of deep learning algorithms enables them to tell these things apart and to categorize and classify them.</p>
<p>To do so, however, this software needs to be supervised.</p>
<p>It requires manual human input, in the form of annotations, for guidance before it can generalize what it has learned to new, similar situations.</p>
<p>Building and labeling large datasets is a complicated and time-consuming task.</p>
<p><b>Unsupervised machines</b> would be completely autonomous, as all they need is data taken directly from their environment. From there, they would use this information to make predictions and yield the expected results.</p>
<p>To design unsupervised, or <b>self-supervised deep learning</b> systems, computer scientists take inspiration from how human intelligence works.</p>
<p>Now, an international team of computer vision scientists has devised a method to enable deep learning software to learn the visual features of images without the need for annotated examples.</p>
<p>Researchers from <b>Carnegie Mellon University</b> (U.S.), <b>Universitat Autonoma de Barcelona</b> (Spain), and <b>the International Institute of Information Technology</b> (India) worked on the study.</p>
<h3>Unsupervised Computer Vision Algorithms: a Matter of Semantics</h3>
<p>In the study, the team built computational models that use textual information about images found on websites like Wikipedia and link it to the visual features of those images.</p>
<p><i>“We aim to give computers the capability to read and understand textual information in any type of image in the real world,”</i> said Dimosthenis Karatzas, a research team member.</p>
<p>In the next step, researchers used the models to train deep learning algorithms to pick adequate visual features that textually describe images.</p>
<p>Instead of labeled information about the content of a particular image, the algorithm takes non-visual cues from the semantic textual information found around the image.</p>
<p><i>“Our experiments demonstrate state-of-the-art performance in image classification, object detection, and multi-modal retrieval compared to recent self-supervised or naturally-supervised approaches,” </i>wrote researchers in the paper.</p>
<p>This is not a fully unsupervised system as algorithms still need models to train on, but the technique shows that deep learning algorithms can tap into the internet to enhance their unsupervised learning abilities.</p>
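<p>The pretext-task idea, letting surrounding text stand in for human labels, can be caricatured in a few lines. The sketch below is a deliberately tiny stand-in: the &#8220;visual features&#8221; are synthetic vectors, the two topics are made-up words, and a perceptron replaces the deep networks and web-scale text used in the actual study.</p>

```python
import random

random.seed(1)

TOPICS = ("sport", "food")

def topic_from_text(text):
    """Pretext label derived from the text around an image, not from human annotation."""
    counts = {t: text.lower().split().count(t) for t in TOPICS}
    return max(counts, key=counts.get)

def make_example(topic):
    # Toy "visual features": the topic shifts the mean of a 3-dim feature vector.
    shift = 1.0 if topic == "sport" else -1.0
    return [shift + random.uniform(-0.5, 0.5) for _ in range(3)]

# Web pages pair an image with nearby text; the text supplies the training signal.
pages = [("sport sport match", make_example("sport")) for _ in range(30)] + \
        [("food food recipe", make_example("food")) for _ in range(30)]

# Tiny perceptron trained only on the text-derived labels.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(20):
    for text, x in pages:
        y = 1 if topic_from_text(text) == "sport" else -1
        if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
            w = [wi + y * xi for wi, xi in zip(w, x)]
            b += y

def predict(x):
    return "sport" if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else "food"
```

<p>No human ever labels an image here: the supervision signal is extracted automatically from the co-occurring text, which is the essence of the self-supervised approach the researchers describe.</p>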
<p><i>“We will continue our work on the joint-embedding of textual and visual information,” </i>said Karatzas.<i> “looking for novel ways to perform semantic retrieval by tapping on noisy information available in the Web and Social Media.”</i></p>
<p>The post <a href="https://www.aiuniverse.xyz/new-deep-learning-strategy-could-enhance-computer-vision/">New Deep Learning Strategy Could Enhance Computer Vision</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/new-deep-learning-strategy-could-enhance-computer-vision/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial Intelligence: Redefining photography in the smartphone world</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-redefining-photography-in-the-smartphone-world/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-redefining-photography-in-the-smartphone-world/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 23 May 2018 05:40:25 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI applications]]></category>
		<category><![CDATA[AI tech]]></category>
		<category><![CDATA[photography]]></category>
		<category><![CDATA[smartphone]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2443</guid>

					<description><![CDATA[<p>Source &#8211; indiatimes.com By Will Yang Technology in today’s day and age has enabled a human to do things and accomplish far more than one could think of a few years back. Thanks to rapidly evolving and innovative technologies, personal lives have become more enriched. Meaningful collaborations between a human and machine/technology has in many <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-redefining-photography-in-the-smartphone-world/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-redefining-photography-in-the-smartphone-world/">Artificial Intelligence: Redefining photography in the smartphone world</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; indiatimes.com</p>
<p><strong>By Will Yang</strong></p>
<p>Technology in today’s day and age has enabled humans to accomplish far more than one could have imagined a few years back. Thanks to rapidly evolving and innovative technologies, personal lives have become more enriched, and meaningful collaboration between humans and machines has in many ways provided a wealth of opportunities that make our lives more comfortable. One such technology buzzword in the industry today is Artificial Intelligence. Once a topic for science fiction, Artificial Intelligence is now being used by brands across industries and categories. Artificial Intelligence is essentially the creation of systems that use advanced analytic strategies, especially machine learning and deep learning, to accomplish things that we previously thought only people could do. Brands are now leveraging the technology and innovating constantly to offer efficient solutions to everyday problems &#8211; from reminders and personalized experiences to tailored medical treatments, Artificial Intelligence promises to transform the way we live by smartly assisting us.</p>
<p><strong>Artificial Intelligence Applications in Everyday Life</strong></p>
<p>Every human today is embracing Artificial Intelligence in one way or another, some with relevant knowledge and understanding of the concept and others out of habit. Popular applications of Artificial Intelligence in everyday use include smart personal assistants such as Apple’s Siri, Google Assistant and Amazon’s Alexa. These assistants use Artificial Intelligence to collect information from an individual’s requests and use it to better recognize our speech and serve results tailored to our preferences. Video games also use Artificial Intelligence to enhance the consumer experience: characters come alive by learning the gamer’s behaviours, responding to stimuli and reacting in unpredictable ways. Another great example is the widely discussed and trending topic of smart cars. While it still has a long way to go, Artificial Intelligence will enable a car to look at the road ahead, make decisions as it goes, and keep improving and learning in the process. Even social media apps like Facebook use Artificial Intelligence to recognize faces and personalize your news feed. If you notice, you consume most of this technology on your smartphone. This is because it is your constant companion: wherever you go, you carry it, like an extension of your personality. Artificial Intelligence has become an important part of transforming the modern smartphone, and soon our handheld devices may be referred to as ‘Intelligent phones’.</p>
<p><strong>New Chapter in Smartphone Photography</strong></p>
<p>Some of the most advanced innovation using Artificial Intelligence has been done by smartphone brands in the world of photography. Many brands are offering Artificial Intelligence-powered cameras that have completely redefined smartphone photography, to the extent that photographers are using these advanced phone cameras to capture images instead of DSLRs. Brands are investing in this sphere and using Artificial Intelligence to enable the phone’s camera interface to detect the subject in the frame and adjust the settings accordingly for the best possible image.</p>
<p><strong>Artificial Intelligence Creating Ripples in the Front Camera Segment</strong></p>
<p>With selfie snapping becoming a growing global trend and millions of people uploading selfies across their social media accounts, selfie or front cameras have become equally important. Earlier, consumers based their buying decisions on the rear camera, but in recent times the focus has shifted to the front camera. Realizing this, brands are now investing in front camera technologies that improve the user experience. Artificial Intelligence can identify facial features and automatically enhance them for a superior portrait, customizing or personalizing the result for the user or subjects within an image. The smart technology enables the camera to recognize the skin tone and type, gender and age of all subjects in the image, along with the ambient lighting, while referencing hundreds of thousands of photos in a global user photo database and learning from past selections, to optimize each selfie shot.</p>
<p><strong>What’s Up Ahead?</strong></p>
<p>As per a 2017 Counterpoint Research report, one in three smartphones shipped in 2020 (roughly more than half a billion) will come with chipset-level integration of machine learning and Artificial Intelligence. We can definitely say that the smartphone’s future is all about the camera. We saw a lot of developments in camera technology in 2017, but 2018 could be the year Artificial Intelligence comes into play strongly, completely redefining the way one uses a phone camera and much more, from writing emails to distant relatives to having awkward conversations with Siri or Alexa. The era of Artificial Intelligence is driven by machine learning, extreme automation and omnipresent connectivity. The near future will only see a rise in the usage of AI tech and its advancement toward perfection.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-redefining-photography-in-the-smartphone-world/">Artificial Intelligence: Redefining photography in the smartphone world</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-redefining-photography-in-the-smartphone-world/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>Google pitches artificial intelligence to help unplug</title>
		<link>https://www.aiuniverse.xyz/google-pitches-artificial-intelligence-to-help-unplug/</link>
					<comments>https://www.aiuniverse.xyz/google-pitches-artificial-intelligence-to-help-unplug/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 10 May 2018 07:01:35 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[AI tools]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[smartphone]]></category>
		<category><![CDATA[Sundar Pichai]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2343</guid>

					<description><![CDATA[<p>Source &#8211; indiatimes.com Google has unveiled an artificial intelligence tool capable of handling routine tasks &#8212; such as making restaurant bookings &#8212; as a way to help people disconnect from their smartphone screens. Kicking off the tech giant&#8217;s annual developers conference, Google chief executive Sundar Pichai argued that its AI-powered digital assistant had the potential to <a class="read-more-link" href="https://www.aiuniverse.xyz/google-pitches-artificial-intelligence-to-help-unplug/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-pitches-artificial-intelligence-to-help-unplug/">Google pitches artificial intelligence to help unplug</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
					<content:encoded><![CDATA[<p>Source &#8211; indiatimes.com</p>
<p>Google has unveiled an artificial intelligence tool capable of handling routine tasks &#8212; such as making restaurant bookings &#8212; as a way to help people disconnect from their smartphone screens.</p>
<p>Kicking off the tech giant&#8217;s annual developers conference, Google chief executive Sundar Pichai argued that its AI-powered digital assistant had the potential to free people from everyday chores.</p>
<p>Pichai played a recording of the Google Assistant independently calling a hair salon and a restaurant to make bookings &#8212; interacting with staff who evidently didn&#8217;t realize they were dealing with artificial intelligence software, rather than a real customer.</p>
<p>Tell the Google Assistant to book a table for four at 6:00 pm, and it tends to the phone call in a human-sounding voice complete with &#8220;ums&#8221; and &#8220;likes,&#8221; then sends you a message with the details.</p>
<p>&#8220;Our vision for our assistant is to help you get things done,&#8221; Pichai told the conference in Google&#8217;s hometown of Mountain View, California.</p>
<p>&#8220;It turns out that a big part of getting things done is making a phone call.&#8221; Google will be testing the digital assistant improvement in the months ahead.</p>
<p>The conference opened with Silicon Valley facing a wave of criticism over issues such as private data protection, the spread of misinformation and the use of tech platforms for hate speech and violence, and with intense scrutiny of Facebook over the hijacking of data on millions of its users.</p>
<p>&#8220;It&#8217;s clear that technology can be a positive force and improve the quality of life for billions of people around the world,&#8221; Pichai said.</p>
<p>&#8220;But it&#8217;s equally clear that we can&#8217;t just be wide-eyed about what we create.&#8221; He added that &#8220;we feel a deep sense of responsibility to get this right.&#8221; Much of the focus was on Google Assistant, the artificial intelligence application competing against Amazon&#8217;s Alexa and others.</p>
<p>Pichai also launched an overhauled Google News that puts AI to work finding trusted sources for stories and balancing perspectives to provide fuller pictures of breaking developments.</p>
<p>&#8220;It uses artificial intelligence to bring forward the best of human intelligence &#8211; great reporting done by journalists around the globe &#8211; and will help you stay on top of what&#8217;s important to you,&#8221; Pichai said of the overhauled Google News.</p>
<p>And, evidently popping the news &#8216;bubbles&#8217; created by tailoring results to what people want to hear, everyone will be shown the same content on a given topic, according to product and engineering lead Trystan Upstill.</p>
<p>Google Assistant is also being taught to better understand people and interact with them more naturally &#8212; and will be getting new voices, including one based on the voice of singer John Legend, as well as programming to improve conversation performance.</p>
<p>&#8220;Thanks to our progress in language understanding, you&#8217;ll soon be able to have a natural back-and-forth conversation with the Google Assistant without repeating &#8216;Hey Google&#8217; for each follow-up request,&#8221; Pichai said.</p>
<p>In another effort to untether people from smartphone screens, a dashboard breaks down time spent on devices and how often they are unlocked. Google also planned to add a &#8220;shush&#8221; mode to its Android mobile software, switching smartphones to a do-not-disturb mode when they are placed face down on a table.</p>
<p>YouTube watchers will be able to set a pop-up message to remind them to take breaks from viewing, according to Pichai.<br />
&#8220;This is going to be a deep, ongoing effort across all our platforms,&#8221; Pichai said.</p>
<p>&#8220;To help you understand habits, focus on what matters, switch off and wind down.&#8221; Google is seeking to make services more personal, relevant and intimate, from maps to email, Gartner analyst Brian Blau told AFP after the keynote presentation.</p>
<p>&#8220;They are taking a very human approach to technology, and convincing people they can continue to rely on Google,&#8221; Blau said.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-pitches-artificial-intelligence-to-help-unplug/">Google pitches artificial intelligence to help unplug</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-pitches-artificial-intelligence-to-help-unplug/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
	</channel>
</rss>
