<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>psychology Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/psychology/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/psychology/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Thu, 11 Jun 2020 05:28:27 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>This lab is revealing what really goes on in a toddler’s brain</title>
		<link>https://www.aiuniverse.xyz/this-lab-is-revealing-what-really-goes-on-in-a-toddlers-brain/</link>
					<comments>https://www.aiuniverse.xyz/this-lab-is-revealing-what-really-goes-on-in-a-toddlers-brain/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 11 Jun 2020 05:28:10 +0000</pubDate>
				<category><![CDATA[natural intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Natural Intelligence]]></category>
		<category><![CDATA[psychology]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[Science]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9440</guid>

					<description><![CDATA[<p>Source: wired.co.uk In this era of artificial intelligence, it’s ironic that there’s so much that we don’t know about natural intelligence. But details of a missing chapter <a class="read-more-link" href="https://www.aiuniverse.xyz/this-lab-is-revealing-what-really-goes-on-in-a-toddlers-brain/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/this-lab-is-revealing-what-really-goes-on-in-a-toddlers-brain/">This lab is revealing what really goes on in a toddler’s brain</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: wired.co.uk</p>



<p>In this era of artificial intelligence, it’s ironic that there’s so much that we don’t know about natural intelligence. But details of a missing chapter in the story of the brain are about to emerge from a new multi-million pound laboratory that will use wireless and wearable technologies to get inside the heads of toddlers.</p>



<p>The Wohl Wolfson ToddlerLab, part of Birkbeck University’s Centre for Brain and Cognitive Development (CBCD), is due to open in London’s Torrington Square in June. Inside, scientists will be able to scan the brains, monitor the gaze, and chart the hormone levels of one- to three-year-olds as they play in a series of real and virtual environments.</p>



<p>ToddlerLab will be a “world first,” says Denis Mareschal, director of the Centre, who has been working on the project for four years. “There is a black hole in our understanding of the development of toddlers’ brains.”</p>



<p>It’s a successor to Birkbeck’s BabyLab, which has led the way in studying brain development since the CBCD opened in 1998. It revealed how babies can learn how the world works from surprising events, linking cause and effect. Researchers use eye tracking to explore what babies are thinking about, and demonstrate how they can figure out what mum or dad means when they say the word &#8220;brick&#8221; by tracing their gaze to a piece of Lego. Sensor hairnets can register crackles of electrical brain activity as babies play on their parents’ laps, or fathom words in a stream of sounds (to a baby, all languages are foreign).</p>



<p>Among other projects, BabyLab scientists are helping to understand why people with Down’s syndrome do not get Alzheimer’s, studying the effects of screen time on babies as young as six months, and seeking early signs of behavioural problems such as ADHD.</p>



<p>But toddlers, well, toddle, so these methods have had to be adapted to study them in the new purpose-built lab, which abuts a Georgian house. Thanks to almost £40,000 of crowdfunding, £2.1 million from the Maurice Wohl Charitable Foundation and Wolfson Foundation, and £1.2 million from other backers, ToddlerLab will be equipped with wireless, wearable versions of motion trackers, hairnet sensors and functional near-infrared spectroscopy, in which light absorption is used to measure blood flow in the brain.</p>



<p>The lab includes realistic settings – a typical nursery and home – along with the CAVE, an immersive, VR environment that can recreate farm, supermarket or other surroundings. “Toddlers are active, curious and want to explore,” says Mareschal. “The lab will allow them to roam and behave as they would in the normal world.”</p>



<p>The lab enables researchers to see how children react in different circumstances, and – crucially – with other children present. Some disorders only emerge when toddlers interact with their peers. “Having lots of other children around brings out the difficulties these children have in engaging with what others are thinking and how to respond,” he says.</p>



<p>A &#8220;biosamples collection suite&#8221; will take urine and other samples to study hormones such as cortisol, which is linked with anxiety, and oxytocin, which is released during social bonding. And a &#8220;nap lab&#8221; will monitor the effects of sleep on brain activity and learning.</p>



<p>The team want to use their new suite of tools to track the extraordinary changes in toddlerhood, when fatty sheaths of myelin boost the ability of nerve cells to conduct signals, swelling the brain as a result. The number of synapses grows from 2,500 per neuron to 15,000 by the age of three and major changes occur in the frontal system, which is central for intelligence, problem solving and organisation.</p>



<p>To investigate, ToddlerLab will focus on how toddlers manage multiple goals. Mareschal’s team will study toddlers as they build houses using Lego, watching the emergence of the critical frontal control system and revealing new insights into behaviours that range from &#8220;the terrible twos&#8221; to speaking fluently, solving problems and the other core ingredients of intelligence.</p>
<p>The post <a href="https://www.aiuniverse.xyz/this-lab-is-revealing-what-really-goes-on-in-a-toddlers-brain/">This lab is revealing what really goes on in a toddler’s brain</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/this-lab-is-revealing-what-really-goes-on-in-a-toddlers-brain/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The psychology of human creativity helps artificial intelligence imagine the unknown</title>
		<link>https://www.aiuniverse.xyz/the-psychology-of-human-creativity-helps-artificial-intelligence-imagine-the-unknown/</link>
					<comments>https://www.aiuniverse.xyz/the-psychology-of-human-creativity-helps-artificial-intelligence-imagine-the-unknown/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 15 Jan 2020 07:43:39 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[human]]></category>
		<category><![CDATA[imagine]]></category>
		<category><![CDATA[psychology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6161</guid>

					<description><![CDATA[<p>Source: techxplore.com By learning to deviate from known information in the same way that humans do, an &#8220;imagination&#8221; algorithm for artificial intelligence (AI) is able to identify <a class="read-more-link" href="https://www.aiuniverse.xyz/the-psychology-of-human-creativity-helps-artificial-intelligence-imagine-the-unknown/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-psychology-of-human-creativity-helps-artificial-intelligence-imagine-the-unknown/">The psychology of human creativity helps artificial intelligence imagine the unknown</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: techxplore.com</p>



<p>By learning to deviate from known information in the same way that humans do, an &#8220;imagination&#8221; algorithm for artificial intelligence (AI) is able to identify previously unseen objects from written descriptions. </p>



<p>The algorithm, developed by KAUST researcher Mohamed Elhoseiny in collaboration with Mohamed Elfeki from the University of Central Florida, paves the way for artificial imagination and the automated classification of new plant and animal species.</p>



<p>&#8220;Imagination is one of the key properties of human intelligence that enables us not only to generate creative products like art and music, but also to understand the visual world,&#8221; explains Elhoseiny.</p>



<p>Artificial intelligence relies on training data to develop its ability to recognize objects and respond to its environment. Humans also develop this ability through accumulated experience, but humans can do something that AI cannot. They can intuitively deduce a likely classification for a previously unencountered object by imagining what something must look like from a written description or by inference from something similar. In AI, this ability to imagine the unknown is becoming increasingly important as the technology is rolled out into complex real-world applications where misclassification or misrecognition of new objects can prove disastrous.</p>



<p>Also important is the sheer volume of data needed to reliably train AI for the real world. It is unfeasible to train AI with images of even a fraction of the known species of plants and animals in the world in all their permutations, let alone the countless undiscovered or unclassified species.</p>



<p>Elhoseiny and Elfeki&#8217;s research aimed to develop what is called a zero-shot learning (ZSL) algorithm, which helps recognize previously unseen categories from class-level descriptions with no training examples.</p>



<p>&#8220;We modeled the visual learning process for &#8216;unseen&#8217; categories by relating ZSL to human creativity, observing that ZSL is about recognizing the unseen while creativity is about creating a &#8216;likable unseen,'&#8221; says Elhoseiny.</p>



<p>In creativity, something novel but pleasing or &#8220;likable&#8221; must be different from previous art, but not so different as to be unrecognizable. In the same way, Elhoseiny and Elfeki carefully modeled a learning signal that inductively encourages deviation from seen classes, yet does not push so far that the imagined class becomes unrealistic and loses knowledge transfer from seen classes. The resulting algorithm showed a consistent improvement over state-of-the-art ZSL benchmarks.</p>
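<p>The "deviate, but not too far" trade-off described above can be sketched as a two-term objective. The snippet below is an illustrative sketch, not the authors' actual method: a standard cross-entropy term anchors seen-class knowledge, while a negative-entropy term pushes an imagined class toward maximal uncertainty over the seen classes; the weight <code>lam</code> and the function names are hypothetical.</p>

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def seen_class_loss(logits, label):
    """Standard cross-entropy: preserves knowledge of seen classes."""
    p = softmax(logits)
    return float(-np.log(p[label] + 1e-12))

def deviation_loss(logits):
    """Negative entropy over seen-class probabilities. Minimizing this
    pushes the classifier toward maximal uncertainty for an imagined
    class, i.e. it should not resemble any single seen class."""
    p = softmax(logits)
    return float(np.sum(p * np.log(p + 1e-12)))

def creativity_objective(seen_logits, label, imagined_logits, lam=0.5):
    """Two-term trade-off: anchor the seen, deviate for the imagined."""
    return seen_class_loss(seen_logits, label) + lam * deviation_loss(imagined_logits)
```

<p>A realism term (omitted here) would complete the balance the researchers describe, preventing the imagined class from drifting so far that knowledge transfer from seen classes is lost.</p>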



<p>&#8220;One of the possible applications of our approach is in identifying unknown species,&#8221; says Elhoseiny. &#8220;AI that is powered with this technology could help report species sightings without pictures, just with language descriptions.&#8221;</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-psychology-of-human-creativity-helps-artificial-intelligence-imagine-the-unknown/">The psychology of human creativity helps artificial intelligence imagine the unknown</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-psychology-of-human-creativity-helps-artificial-intelligence-imagine-the-unknown/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How psychology is shaping better machine learning</title>
		<link>https://www.aiuniverse.xyz/how-psychology-is-shaping-better-machine-learning/</link>
					<comments>https://www.aiuniverse.xyz/how-psychology-is-shaping-better-machine-learning/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 11 Sep 2017 09:13:47 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[digital employee]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[psychology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1052</guid>

					<description><![CDATA[<p>Source &#8211; cmo.com.au More psychologists are now coming to the tech space because they&#8217;re trying to teach machines to become more social and sociable, according to BT’s head of <a class="read-more-link" href="https://www.aiuniverse.xyz/how-psychology-is-shaping-better-machine-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-psychology-is-shaping-better-machine-learning/">How psychology is shaping better machine learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>cmo.com.au</strong></p>
<p>More psychologists are now coming to the tech space because they&#8217;re trying to teach machines to become more social and sociable, according to BT’s head of customer insight and futures, Dr Nicola Millard.</p>
<p>“I’m not a technologist, I’m a psychologist – and it makes a lot of sense having me on-board because innovation in itself won’t work unless people adopt it,&#8221; she told CMO. &#8220;A psychologist in the team prevents us from getting carried away exclusively by the technology, which tech companies often do.&#8221;</p>
<p>Leading the third largest innovation hub in the UK, Millard is responsible for tapping into the research and innovation that BT does for its global services clients.</p>
<p>“I used to have a silly job title as a futurologist, and I hated that job title because everyone assumes you have a crystal ball,” she said. “But I’m in global services so my clients are typically retailers, airlines and banks – big global corporates – and we often bring them around to our showcases of the retail store of the future, or the bank of the future, so they can have a play with our proof of concepts.&#8221;</p>
<p>While Adastral Park in the UK is BT&#8217;s main research hub, the company also has other centres across the world, including Abu Dhabi, Singapore, a university in Beijing and MIT in the US. Millard said the company also has tech scouts around the world bringing in new startups.</p>
<p>In her role, Millard looks at the changing interactions between company and customer, and how customers’ demands and expectations of companies are rapidly evolving, particularly if the company is providing a service. The other element is the concept of the ‘digital employee’ and how the changing nature of employees relates to the future of work.</p>
<p>“From my experience, I see a lot of tech that allows us to collaborate, but there’s also too much tech, which opens up the risk of fragmentation, especially when we all have such diverse workforces,” she said. “Successful collaboration using technology rests on who is actually on it. Look at social media platforms for example.</p>
<p>&#8220;A lot of what I look at in that area is creating common ground, which I define as it has to be absolutely accessible to everyone and it has to be appropriate to the task.”</p>
<p>On a technical level, Millard sees common ground as looking at disparate technologies and getting them talking to each other in the cloud in an integrated way.</p>
<p>“But what I’m also interested in is how to get leaders to choose which common ground works better for them to communicate and collaborate – and that could actually be face to face, because that can be incredibly valuable,” she said. “At BT, we’re looking at how can the digital space, using tools like video, audio, chats, AR, VR, create as good an experience as that face-to-face interaction.</p>
<p>“We also look at who is using which technologies, what the trends are and then how can we then improve the technology and enhance the end experience.”</p>
<p><strong>AI, machine learning and automating the customer experience</strong></p>
<p>As a psychologist, Millard has undertaken a lot of academic work to understand how to make machines more natural to interact with and how to use technology to create better customer experiences – whether it is in the physical or digital world. As a result, she’s considered in great detail how machines and artificial intelligence (AI) can understand and communicate in natural language as part of an effort to create more seamless bot experiences for customers.</p>
<p>“The way language has evolved isn’t really about rules or process, which is what machines like,” she said. “Machines do simple stuff well, but complicated stuff like understanding regional accents, complex emotional complaints or sarcasm, not too well.</p>
<p>“This is because a lot of things we do naturally, they can’t do naturally. Things like empathy, caring, negotiation, innovation, creativity – and this can present some issues in customer service where we’ve been looking at chat bots.”</p>
<p>For instance, Millard recalled a sarcastic customer complaint against a British train company that said ‘thank you for my free sauna this morning’, which the company’s automated bot interpreted as a good thing.</p>
<p>“We know it’s a bad thing, but why would a machine know that?” she asked. “It’s the subtle stuff that’s difficult for machines at the moment.&#8221;</p>
<p>Where AI does work in customer service, from Millard’s extensive experience, is where it has been given the right data.</p>
<p>“There’s a lot of hype around AI and a lot of our early experiments in this field failed largely because of the cost, but what we learned is you need data to make these things work,” she explained. “Some AI tools are working really well for companies at the moment, but they need to be deployed where there’s a lot of data, because AI cannot magically create data &#8211; it’s only as good as the data you give it.”</p>
<p>“So before you implement it, think about what data you have, what form is it in, and whether it will actually work.”</p>
<p>A simple way to make bots work in your favour is to turn the FAQ section of your website into an interactive question-and-answer bot conversation your customers can engage with to quickly find a solution, Millard suggested.</p>
<p>“You need to think about whether leveraging a bot actually adds value – it might not necessarily work for complex customer complaints,” she said. “But if you can translate your FAQs into an interactive chat and the bot answers the questions your customers ask – then it could work as it gets the answer quickly to your customer.”</p>
<p>AI is also currently working well in a customer service ‘triage’ environment, Millard said. While it might not offer all the answers to customer queries, it can direct the customer down the right channel, whether it is to a bot or a human.</p>
<p>“This can work quite well because it combines man and machine,” she said. “As long as you don’t pretend it’s not a bot – you need it to clearly say ‘I’m a bot, but I’m handing over now to my human colleague’ – you need to think about all these subtle elements to enhance your customer experience.”</p>
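<p>Millard's two suggestions – an FAQ-driven bot and a triage handoff that discloses when it is a bot – can be combined in a few lines. The sketch below is illustrative, not a production system: questions are matched against FAQ entries by simple token overlap (Jaccard similarity), and anything below a confidence threshold is escalated to a person. The threshold value and example FAQ entries are assumptions for illustration.</p>

```python
def faq_bot(question, faq, threshold=0.4):
    """Answer from an FAQ by token overlap; below the threshold,
    disclose bot identity and hand off to a human (triage)."""
    q_tokens = set(question.lower().split())
    best_answer, best_score = None, 0.0
    for faq_question, faq_answer in faq.items():
        f_tokens = set(faq_question.lower().split())
        union = q_tokens | f_tokens
        score = len(q_tokens & f_tokens) / len(union) if union else 0.0
        if score > best_score:
            best_answer, best_score = faq_answer, score
    if best_score >= threshold:
        return best_answer
    return "I'm a bot, but I'm handing you over to my human colleague."

faq = {
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
    "what are your opening hours": "Our support desk is staffed 9am-5pm, Monday to Friday.",
}
```

<p>Notably, a sarcastic message like the ‘free sauna’ complaint scores near zero against every FAQ entry, so this design escalates it to a person rather than misreading it as praise.</p>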
<p>“But all these things take time to train. I’m working on three bot projects at the moment. It’s complicated, machines need to learn, process the data, and it takes time. And it might not work.”</p>
<p><strong>Digital butler versus online stalker</strong></p>
<p>Millard is excited about what the future holds for machine learning, but she warned companies to steer away from the ‘creepy stalker factor’.</p>
<p>“I like the digital butler concept, that my technology knows me well and does things to help free my time and brain up to do other things,” she said. “What I don’t like is when my technology becomes like a stalker running behind me constantly tapping me on the shoulder to show me things I don’t necessarily want.</p>
<p>“There’s even camera technology now that can look at the micro-expressions on your face and tell you how you feel. But will it all get too creepy? Are the likes of Google Home and Alexa finding out too much about me? Is the technology telling me too much that I don’t want to know? Well, that’s where psychologists can come in to really help shape the tech space to make it more sociable and less intrusive.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-psychology-is-shaping-better-machine-learning/">How psychology is shaping better machine learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-psychology-is-shaping-better-machine-learning/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
	</channel>
</rss>
