<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Software technology Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/software-technology/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/software-technology/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Tue, 15 Sep 2020 07:11:41 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>How Artificial Intelligence And Machine Learning Will Make ISR Faster</title>
		<link>https://www.aiuniverse.xyz/how-artificial-intelligence-and-machine-learning-will-make-isr-faster/</link>
					<comments>https://www.aiuniverse.xyz/how-artificial-intelligence-and-machine-learning-will-make-isr-faster/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 15 Sep 2020 07:11:31 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Developing]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Software technology]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11584</guid>

					<description><![CDATA[<p>Source: breakingdefense.com If a swarm of heavily armed fast boats barreled full speed at an aircraft carrier, the crew would have very little time to react. But if that crew had artificial intelligence and machine learning at its disposal, that blitz of boats probably wouldn’t pose nearly as much of a problem. Raytheon Intelligence &#38; Space, <a class="read-more-link" href="https://www.aiuniverse.xyz/how-artificial-intelligence-and-machine-learning-will-make-isr-faster/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-artificial-intelligence-and-machine-learning-will-make-isr-faster/">How Artificial Intelligence And Machine Learning Will Make ISR Faster</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: breakingdefense.com</p>



<p>If a swarm of heavily armed fast boats barreled full speed at an aircraft carrier, the crew would have very little time to react.</p>



<p>But if that crew had artificial intelligence and machine learning at its disposal, that blitz of boats probably wouldn’t pose nearly as much of a problem.<br>Raytheon Intelligence &amp; Space, one of four businesses that form Raytheon Technologies, is using artificial intelligence and machine learning to improve the intelligence, surveillance and reconnaissance capabilities of the U.S. and allied armed forces. The approach is to synthesize reams of data into actionable intelligence and accurate targeting information at speed and scale, in high-risk environments.</p>



<p>“In multi-domain operations, you don’t have full domain superiority but have to exploit what they call moments of superiority in the battlefield,” said Jim Wright, RI&amp;S’ technical director for Intelligence, Surveillance and Reconnaissance Systems. “Speed is a big issue in this defense strategy, which is challenged by huge amounts of data and a limited number of people to look at it.”</p>



<p>A military customer once told Wright the armed services collect 22 football seasons’ worth of video every day. That’s far too much to sift through manually – especially when operators have to make critical decisions quickly, such as protecting ships in crowded sea lanes.</p>



<p>“We’re looking at how machine learning can augment our existing sensor product lines and the question is: ‘How can we utilize machine learning technology to help military commanders make decisions?’” said Shane Zabel, AI Technology Area director for RI&amp;S. “How do we embed some kind of learning machine to go with the sensors to help better execute the mission?”</p>



<p>Traditionally, operators have controlled sensors and analyzed data much like you’d think they would – by keeping their eyes locked on screens, pressing buttons and using joysticks to move things around. RI&amp;S is using military and commercial advancements in technology to automate those functions.</p>



<p>“AI/ML is at the core of our technology roadmap across Raytheon Intelligence &amp; Space,” said Barbara Borgonovi, vice president of Intelligence Surveillance and Reconnaissance Systems at RI&amp;S. “We are implementing AI/ML into next-generation ISR capabilities so operators can rapidly make the right decisions in any threat environment.”</p>



<p>The business is also developing smart software called Cognitive Aids to Sensor Processing, Exploitation and Response (CASPER™) to lighten the operator’s workload and use automation to help make decisions faster.</p>



<p>The aids interpret operator requests, then control sensor and data processing functions. They are being integrated into products like the Multi-Spectral Targeting System, which provides visible and infrared intelligence and targeting information for an array of airborne platforms.</p>



<p>CASPER allows operators to work above the drudgery of data processing and instead focus on decision making, resulting in exponentially faster threat response.</p>



<p>Take the fast boat scenario, for example.</p>



<p>“Much like talking to Alexa or Siri, an operator tells CASPER to scan for fast boats and prioritize by threat to the carrier,” Wright said. “CASPER then takes control of sensor functions, rapidly identifies which boats are threats based on things like their appearance and behavior over space and time, and provides the operator with the threat list and recommended courses of action.</p>



<p>“This enables the operator to focus attention on ensuring recommendations are correct and consistent with policy, making the whole process shorter and safer,” he said.</p>
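<p>To illustrate the kind of triage described above, here is a toy Python sketch that ranks detected boat tracks by a threat score and returns a prioritized list. The fields, weights, and thresholds are entirely invented for illustration; Raytheon has not published how CASPER actually scores threats.</p>

```python
from dataclasses import dataclass

@dataclass
class Track:
    """One detected surface contact (fields are hypothetical)."""
    track_id: str
    speed_kts: float   # observed speed in knots
    closing: bool      # is the contact heading toward the carrier?
    range_nmi: float   # distance from the carrier in nautical miles

def threat_score(t: Track) -> float:
    """Toy heuristic: fast, closing, nearby contacts rank highest.
    The weights below are made up for this sketch."""
    score = t.speed_kts / 50.0          # faster boats score higher
    if t.closing:
        score += 1.0                    # closing contacts are more urgent
    score += max(0.0, 1.0 - t.range_nmi / 20.0)  # nearer contacts score higher
    return score

def prioritize(tracks: list[Track]) -> list[Track]:
    """Return tracks sorted from most to least threatening."""
    return sorted(tracks, key=threat_score, reverse=True)

if __name__ == "__main__":
    boats = [
        Track("alpha", 45.0, True, 2.0),    # fast, closing, close by
        Track("bravo", 10.0, False, 15.0),  # slow, heading away
        Track("charlie", 40.0, True, 18.0), # fast and closing, but distant
    ]
    for t in prioritize(boats):
        print(t.track_id, round(threat_score(t), 2))
```

A real system would fuse sensor tracks over space and time, as Wright describes; this sketch only shows the final ranking step.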



<p>RI&amp;S is developing advanced automation capabilities for ground station systems, and is advancing these capabilities to the leading edge of the sensor grid.</p>



<p>There are thousands of systems and sensors in today’s battlespace. Automation will also help deliver the right data at the right time to make decisions faster through another transformative solution – Joint All Domain Command and Control (JADC2). JADC2 is a future command and control network that will link capabilities and military platforms across the globe in all domains – air, land, sea, cyber and space.</p>



<p>“Machine learning has really taken off,” Zabel said. “It’s all about harnessing the speed potential AI and ML offer.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-artificial-intelligence-and-machine-learning-will-make-isr-faster/">How Artificial Intelligence And Machine Learning Will Make ISR Faster</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-artificial-intelligence-and-machine-learning-will-make-isr-faster/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Microservices architecture is helping organisations in achieving new heights</title>
		<link>https://www.aiuniverse.xyz/microservices-architecture-is-helping-organisations-in-achieving-new-heights/</link>
					<comments>https://www.aiuniverse.xyz/microservices-architecture-is-helping-organisations-in-achieving-new-heights/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 15 Jan 2020 07:16:12 +0000</pubDate>
				<category><![CDATA[Microservices]]></category>
		<category><![CDATA[Architecture]]></category>
		<category><![CDATA[Software technology]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6155</guid>

					<description><![CDATA[<p>Source: itproportal.com Before we start, let&#8217;s look into a real-life situation that changed the game for a business. Walmart Canada was failing on crucial Black Fridays two years in a row. After much deliberation, the I.T. department tried the microservices architecture and it worked wonders. To understand the concepts of microservices architecture first <a class="read-more-link" href="https://www.aiuniverse.xyz/microservices-architecture-is-helping-organisations-in-achieving-new-heights/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/microservices-architecture-is-helping-organisations-in-achieving-new-heights/">Microservices architecture is helping organisations in achieving new heights</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: itproportal.com</p>



<p>Before we start, let&#8217;s look into a real-life situation that changed the game for a business. Walmart Canada was failing on crucial Black Fridays two years in a row. After much deliberation, the I.T. department tried the microservices architecture, and it worked wonders. To understand the concepts of microservices architecture, let&#8217;s first get a brief idea of what it is.</p>



<h3 class="wp-block-heading" id="what-is-microservices-architecture">What is Microservices architecture?</h3>



<p>Although the concept of microservices is not new, it has gained popularity in recent years, with companies like Netflix, Amazon, Twitter, and PayPal deciding to implement it. What is so special about microservices architecture? Why are so many companies shifting from monolithic to microservices architecture? Let&#8217;s find out. Microservices, aka the microservices architecture, is an approach to building software in which developers build a massive software program as a combination of small, independent modules. These modules communicate with each other through standard protocols and have well-defined interfaces. The beauty of the microservices architecture is that the different modules can be built in different programming languages by different teams scattered all over the world.</p>
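<p>To make the idea concrete, here is a minimal sketch of one such module: a tiny HTTP service with a well-defined JSON interface, using only Python&#8217;s standard library. The service name, endpoint, and payload are invented for illustration; a real service would typically use a web framework and a formal API contract.</p>

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# One small, independent module: a hypothetical "pricing" service
# exposing a single well-defined JSON endpoint.
class PricingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/price"):
            body = json.dumps({"sku": "ABC-123", "price_cents": 1999}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve_once(port=8765):
    """Start the service on a background thread and return the server."""
    server = HTTPServer(("127.0.0.1", port), PricingHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = serve_once()
    # Another module talks to this one over a standard protocol (HTTP + JSON).
    with urllib.request.urlopen("http://127.0.0.1:8765/price?sku=ABC-123") as resp:
        print(json.loads(resp.read())["price_cents"])
    server.shutdown()
```

Because the interface is just HTTP and JSON, the calling module could be written in an entirely different language, which is the point of the architecture.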



<p>This style is preferred, especially in cases where an application needs to support a broad range of platforms like IoT, wearables, mobile, and web.</p>



<p>In comparison, the monolithic structure does not allow such independence. In the monolithic architecture style, the code components are interconnected. If you want to make changes to a small section of the code, you need to build and deploy the whole stack at once. The same is true for scalability: if you want to scale a particular part, you need to allocate resources to scale up the entire system. Another drawback of the monolith is that it is challenging to integrate a new technology stack, platform or framework into the existing system.</p>



<h3 class="wp-block-heading" id="advantages-of-the-microservices-architecture">Advantages of the Microservices architecture</h3>



<p><strong>More comfortable to Build and Maintain Apps</strong></p>



<p>When you are building an extensive and complex software product, the work becomes easier if you divide it into small, manageable pieces that can then be combined to form the full product. Microservices architecture allows you to run every single module independently of the others. Every module can be tested, deployed and rebuilt independently.</p>



<p><strong>It’s safe</strong></p>



<p>With microservices, you get a safety net. How? Since a microservices architecture is, in essence, a combination of different modules, if one module fails for some reason, it can be isolated and debugged without affecting the entire software. For example, if a microservice starts putting excessive load on the processor, you can isolate that particular microservice and debug it.</p>



<p>This helps you solve the problem faster, as you can identify the problem area quickly.</p>



<p>Security monitoring also becomes quite easy if you are using a microservices architecture. Microservices are isolated and independent of each other, so it is possible to identify and isolate the particular microservice in which you have found a security flaw without affecting the entire software.</p>



<p><strong>Re-usability</strong></p>



<p>The design of the microservices architecture makes it possible to re-use components. This ensures faster deployment and encourages developers to organise code around business cases rather than around projects, so components can be re-used in other, similar business cases. The product thus remains adaptable across various business cases instead of being limited to a single project.</p>



<p><strong>Easily scalable</strong></p>



<p>Scaling individual modules up or down is far easier than scaling the entire software structure. This makes the system more adaptable, as it can respond to changes more quickly than a monolithic system. This is especially helpful when you need to work with many different kinds of platforms and devices. With microservices, adding new components to the architecture is quick and easy.</p>



<p><strong>Flexibility of technology</strong></p>



<p>The microservices architecture allows you to write each microservice in a different technology, so you can select the best technology stack for each particular microservice. Moreover, different microservices can still communicate easily with each other.</p>



<p><strong>Easier for the developers</strong></p>



<p>Due to its modular structure, the code in a microservices system is easier to understand, and new developers can be onboarded and become productive members of the team faster.</p>



<p>Another advantage of microservices architecture is that it helps in situations where the developers aren&#8217;t sure about the devices the application will encounter in the future.</p>



<p>If the app is not compatible with a particular device, developers can quickly provide upgrades that make the application compatible with that specific device. Microservices architecture also pairs well with containers such as Docker.</p>
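<p>As a sketch of that pairing, each microservice can ship with its own container image. The service name, file layout, and commands below are hypothetical; a real Dockerfile would match your service&#8217;s actual language and entry point.</p>

```dockerfile
# Hypothetical image for a single Python microservice.
FROM python:3.12-slim

WORKDIR /app

# Each service carries only its own dependencies.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy just this service's code, not the whole system.
COPY pricing_service/ ./pricing_service/

# The service exposes one well-defined interface on one port.
EXPOSE 8080
CMD ["python", "-m", "pricing_service"]
```

Because each service has its own image, it can be rebuilt and redeployed without touching any other module.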



<p><strong>Faster deployment</strong></p>



<p>In the microservices architecture, each service is independent of the others. This allows your developers to code each microservice quickly. As teams can work on each microservice independently, it eliminates the scenario where one team is waiting for another team to finish its task before it can begin its own.</p>



<p>As each service can be tested independently, testers can start testing one microservice while the developers are working on another. This helps reduce the project&#8217;s time to market. Microservices allow for automatic deployment and easier integration of the code. Continuous delivery is also possible with microservices architecture.</p>



<p><strong>HIPAA and GDPR compliant</strong></p>



<p>Microservices lend themselves to HIPAA and GDPR compliance by nature. Sensitive data such as personal health information can be isolated easily, and the developers can control access to this data.</p>



<p>For health software, this drastically reduces the time to market of an application built on the microservices architecture, because you won&#8217;t have to make additional provisions to make the product HIPAA compliant.</p>






<h3 class="wp-block-heading" id="build-globally">Build globally</h3>



<p>Independent, cross-functional, global teams have become a reality with microservices. It can prove to be a challenge if you want to leverage the expertise of a worldwide team in a monolith system.</p>



<p>Instead, you could give these teams the freedom of choosing their microservice to work upon and then, in the end, connect the dots to finish the project.&nbsp;</p>



<p>Companies that have successfully implemented the microservices architecture:</p>



<p><strong>Spotify</strong></p>



<p>Spotify knew that to stay relevant in this fast-paced digital world, they would need to become nimble-footed. Spotify caters to millions of users every month, and the processes running behind the scenes to keep the show running are incredibly complex.</p>



<p>The specific problem that Spotify faced was that of scaling. Spotify decided to use microservices architecture to address this problem. They needed a system in which the deployment of individual components was possible. Microservices architecture gave them this freedom.</p>



<p>Spotify created autonomous teams to solve this problem within its organisation. This resulted in multiple teams, consisting of hundreds of developers, spread across two continents. They were able to solve the problem by leveraging the power of microservices architecture.</p>



<p><strong>Uber</strong></p>



<p>Uber had a monolithic structure in the beginning, but as the service grew, so did the demands of the customers, and with them the complexity of the process. This called for an architecture that could respond quickly to those growing demands.</p>



<p>Uber wanted to use different programming languages and frameworks to grow its application. The microservices architecture helped the company in achieving this feat. More than 1300 microservices power the taxi aggregator today.</p>



<h3 class="wp-block-heading" id="things-to-consider-before-implementing-the-microservices-architecture-in-your-enterprise">Things to consider before implementing the microservices architecture in your enterprise</h3>



<p>The microservices architecture can prove extremely beneficial for large companies that have vast and complicated processes to manage. For small businesses and startups, however, it may be more practical to stick with the traditional monolithic structure.</p>



<p>You will need a robust database management system to implement the microservices architecture. The best way forward is to dissect the database into small sets, with each microservice getting its own database and domain data. Doing this helps you deploy the microservices individually.</p>
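<p>A common way to express this database-per-service split is in a Compose file. The service names, images, and connection strings below are invented for illustration; real deployments would also manage credentials and persistent volumes.</p>

```yaml
# Hypothetical docker-compose sketch: each microservice owns its database,
# so each service can be deployed and migrated independently.
services:
  orders:
    image: example/orders:latest
    environment:
      # Hypothetical connection string; real ones carry credentials.
      DATABASE_URL: postgres://orders_db:5432/orders
    depends_on: [orders_db]

  orders_db:
    image: postgres:16
    environment:
      POSTGRES_DB: orders

  billing:
    image: example/billing:latest
    environment:
      DATABASE_URL: postgres://billing_db:5432/billing
    depends_on: [billing_db]

  billing_db:
    image: postgres:16
    environment:
      POSTGRES_DB: billing
```

Note that the orders service never reads the billing database directly; if it needs billing data, it asks the billing service over its interface.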



<p>While in a monolithic structure you use only one programming language, the microservices architecture allows you to use more than one. It is therefore essential to employ a team of people who are well-versed in a diverse set of programming skills. Consider discussing the tech stack with your team beforehand to avoid problems afterwards.</p>



<p>Ideally, you should divide your big team into many smaller teams. This would allow them to work more independently.</p>



<h3 class="wp-block-heading" id="conclusion">Conclusion</h3>



<p>While there are many advantages to deploying a microservices-based architecture, managing different teams, frameworks, programming languages, and data storage solutions can prove to be a real headache, especially if you are doing this for the first time.</p>



<p>It would be wise to engage an experienced software/app development company that can guide you in implementing the system properly, so that you derive maximum benefit from it.</p>
<p>The post <a href="https://www.aiuniverse.xyz/microservices-architecture-is-helping-organisations-in-achieving-new-heights/">Microservices architecture is helping organisations in achieving new heights</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/microservices-architecture-is-helping-organisations-in-achieving-new-heights/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Paging Dr. Robot: Artificial intelligence moves into care</title>
		<link>https://www.aiuniverse.xyz/paging-dr-robot-artificial-intelligence-moves-into-care/</link>
					<comments>https://www.aiuniverse.xyz/paging-dr-robot-artificial-intelligence-moves-into-care/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 25 Nov 2019 05:12:28 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[computer programs]]></category>
		<category><![CDATA[Robots]]></category>
		<category><![CDATA[Software technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5379</guid>

					<description><![CDATA[<p>Source: go.com The next time you get sick, your care may involve a form of the technology people use to navigate road trips or pick the right vacuum cleaner online. Artificial intelligence is spreading into health care, often as software or a computer program capable of learning from large amounts of data and making predictions to <a class="read-more-link" href="https://www.aiuniverse.xyz/paging-dr-robot-artificial-intelligence-moves-into-care/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/paging-dr-robot-artificial-intelligence-moves-into-care/">Paging Dr. Robot: Artificial intelligence moves into care</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: go.com</p>



<p>The next time you get sick, your care may involve a form of the technology people use to navigate road trips or pick the right vacuum cleaner online.</p>



<p>Artificial intelligence is spreading into health care, often as software or a computer program capable of learning from large amounts of data and making predictions to guide care or help patients.</p>



<p>It already detects an eye disease tied to diabetes and does other behind-the-scenes work like helping doctors interpret MRI scans and other imaging tests for some forms of cancer.</p>



<p>Now, parts of the health system are starting to use it directly with patients. During some clinic and telemedicine appointments, AI-powered software asks patients initial questions about their symptoms that physicians or nurses normally pose.</p>



<p>And an AI program featuring a talking image of the Greek philosopher Aristotle is starting to help University of Southern California students cope with stress.</p>



<p>Researchers say this push into medicine is at an early stage, but they expect the technology to grow by helping people stay healthy, assisting doctors with tasks and doing more behind-the-scenes work. They also think patients will get used to AI in their care just like they’ve gotten accustomed to using the technology when they travel or shop.</p>



<p>But they say there are limits. Even the most advanced software has yet to master important parts of care like a doctor’s ability to feel compassion or use common sense.</p>



<p>“Our mission isn’t to replace human beings where only human beings can do the job,” said University of Southern California research professor Albert Rizzo.</p>



<p>Rizzo and his team have been working on a program that uses AI and a virtual reality character named “Ellie” that was originally designed to determine whether veterans returning from a deployment might need therapy.</p>



<p>Ellie appears on computer monitors and leads a person through initial questions. Ellie makes eye contact, nods and uses hand gestures like a human therapist. It even pauses if the person gives a short answer, to push them to say more.</p>



<p>“After the first or second question, you kind of forget that it&#8217;s a robot,&#8221; said Cheyenne Quilter, a West Point cadet helping to test the program.</p>



<p>Ellie does not diagnose or treat. Instead, human therapists used recordings of its sessions to help determine what the patient might need.</p>



<p>“This is not AI trying to be your therapist,” said another researcher, Gale Lucas. “This is AI trying to predict who is most likely to be suffering.”</p>



<p>The team that developed Ellie also has put together a newer AI-based program to help students manage stress and stay healthy.</p>



<p>Ask Ari is making its debut at USC this semester to give students easy access to advice on dealing with loneliness, getting better sleep or handling other complications that crop up in college life.</p>



<p>Ari does not replace a therapist, but its designers say it will connect students through their phones or laptops to reliable help whenever they need it.</p>



<p>USC senior Jason Lewis didn’t think the program would have much for him when he helped test it because he wasn’t seeking counseling. But he found that Ari covered many topics he could relate to, including information on how social media affects people.</p>



<p>“Everybody thinks they are alone in their thoughts and problems,” he said. “Ari definitely counters that isolation.”</p>



<p>Aside from addressing mental health needs, artificial intelligence also is at work in more common forms of medicine.</p>



<p>The tech company AdviNOW Medical and 98point6, which provides treatment through secure text messaging, both use artificial intelligence to question patients at the beginning of an appointment.</p>



<p>AdviNOW CEO James Bates said their AI program decides what questions to ask and what information it needs. It passes that information and a suggested diagnosis to a physician who then treats the patient remotely through telemedicine.</p>



<p>The company currently uses the technology in a handful of Safeway and Albertsons grocery store clinics in Arizona and Idaho. But it expects to expand to about 1,000 clinics by the end of next year.</p>



<p>Eventually, the company wants to have AI diagnose and treat some minor illnesses, Bates said.</p>



<p>Researchers say much of AI’s potential for medicine lies in what it can do behind the scenes by examining large amounts of data or images to spot problems or predict how a disease will develop, sometimes quicker than a doctor.</p>



<p>Future uses might include programs like one that hospitals currently use to tell doctors which patients are more likely to get sepsis, said Darren Dworkin, chief information officer at California’s Cedars-Sinai medical center. Those warnings can help doctors prevent the deadly illness or treat it quickly.</p>



<p>&#8220;It’s basically that little tap on the shoulder that we all want to get of, ‘Hey, perhaps you should look over here,’” Dworkin said.</p>



<p>Dr. Eric Topol predicts in his book “Deep Medicine” that artificial intelligence will change medicine, in part by freeing doctors to spend more time with patients. But he also notes that the technology will not take over care.</p>



<p>Even the most advanced program cannot replicate empathy, Topol said. Patients stick to their treatment and prescriptions more and do better if they know their doctor is pulling for them.</p>



<p>Artificial intelligence also can’t process everything a doctor considers when deciding on treatment, noted Harvard Medical School’s Dr. Isaac Kohane. That might include a patient’s tolerance for pain or the desire to live a few more months to attend a child’s wedding or graduation.</p>



<p>“Good doctors are the ones who understand us and our goals as human beings,” he said.</p>
<p>The post <a href="https://www.aiuniverse.xyz/paging-dr-robot-artificial-intelligence-moves-into-care/">Paging Dr. Robot: Artificial intelligence moves into care</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/paging-dr-robot-artificial-intelligence-moves-into-care/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Alphabet X’s new Everyday Robot project wants to build robots that can learn from the world around them</title>
		<link>https://www.aiuniverse.xyz/alphabet-xs-new-everyday-robot-project-wants-to-build-robots-that-can-learn-from-the-world-around-them/</link>
					<comments>https://www.aiuniverse.xyz/alphabet-xs-new-everyday-robot-project-wants-to-build-robots-that-can-learn-from-the-world-around-them/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 23 Nov 2019 05:46:19 +0000</pubDate>
				<category><![CDATA[Data Robot]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Robots learn]]></category>
		<category><![CDATA[Software technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5355</guid>

					<description><![CDATA[<p>Source: theverge.com. Today, Alphabet’s X moonshot division (formerly known as Google X) unveiled the Everyday Robot project, whose goal is to develop a “general-purpose learning robot.” The idea is that its robots could use cameras and complex machine learning algorithms to see and learn from the world around them without needing to be coded for every individual <a class="read-more-link" href="https://www.aiuniverse.xyz/alphabet-xs-new-everyday-robot-project-wants-to-build-robots-that-can-learn-from-the-world-around-them/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/alphabet-xs-new-everyday-robot-project-wants-to-build-robots-that-can-learn-from-the-world-around-them/">Alphabet X’s new Everyday Robot project wants to build robots that can learn from the world around them</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: theverge.com</p>

<p>Today, Alphabet’s X moonshot division (formerly known as Google X) unveiled the Everyday Robot project, whose goal is to develop a “general-purpose learning robot.” The idea is that its robots could use cameras and complex machine learning algorithms to see and learn from the world around them without needing to be coded for every individual movement.</p>



<p>The team is testing robots that can help out in workplace environments, though right now, these early robots are focused on learning how to sort trash. Here’s what one of them looks like — it reminds me of a very tall, one-armed Wall-E (ironic, given what the robots are tasked to do):</p>



<p>Here’s a GIF of a robot actually sorting a recyclable can from a compost pile to a recycling pile. This is wild — check out how the arm actually grasps the can:</p>



<p>The concept of grasping something comes pretty easily to most humans, but it’s a very challenging thing to teach a robot, and Everyday Robot’s robots get their practice in both the physical world and the virtual world. In a tour of X’s offices, <em>Wired</em> described how a “playpen” of nearly 30 of the robots (supervised by humans) spend their daytime hours sorting trash into trays for compost, landfill, and recycling. At night, Everyday Robot has virtual robots practice grabbing things in simulated buildings, according to <em>Wired</em>. That simulated data is then combined with the real-world data, which is given to the robots in a system update every week or two.</p>
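<p>The mixing step described above can be sketched in a few lines of Python. This is a heavily simplified illustration: the fraction of simulated data is an invented knob, since X hasn’t published how it weights the two sources before each update.</p>

```python
import random

def build_training_set(real_samples, sim_samples, sim_fraction=0.5, seed=0):
    """Blend real-world and simulated grasp attempts into one training set.

    sim_fraction is a made-up knob: the target share of simulated samples
    in the final mix. The article only says the two sources are combined
    before each system update, not how they are weighted.
    """
    rng = random.Random(seed)
    # How many simulated samples reach the target fraction, given the real ones.
    n_sim = int(len(real_samples) * sim_fraction / (1.0 - sim_fraction))
    n_sim = min(n_sim, len(sim_samples))
    # Keep every real sample; subsample the (usually far larger) simulated pool.
    mixed = list(real_samples) + rng.sample(list(sim_samples), n_sim)
    rng.shuffle(mixed)
    return mixed

if __name__ == "__main__":
    real = [("real", i) for i in range(10)]
    sim = [("sim", i) for i in range(100)]
    print(len(build_training_set(real, sim)))
```

The intuition is that simulation supplies volume cheaply while the scarce real-world attempts keep the data grounded in actual sensor noise and physics.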



<p>With all that practice, X says the robots are actually getting pretty good at sorting, apparently putting less than 5 percent of trash in the wrong place (X’s humans put 20 percent of trash in the wrong pile, according to X).</p>



<p>That doesn’t mean they’re remotely ready to replace human janitors, though. <em>Wired</em> observed one robot grasping thin air instead of the bowl in front of it, then attempting to put the “bowl” down. Another lost one of its “fingers” during the demo. Engineers also told <em>Wired</em> that, at one point, some robots weren’t moving through a building because some types of light caused their sensors to hallucinate holes in the floor.</p>



<p>There are whole startups dedicated to the problem of teaching a robot how to grasp, such as Embodied Intelligence and the nonprofit OpenAI. And Google, also owned by Alphabet, has done research into grasping — check out this 2016 video of some Google-made robot arms trying to grab differently-sized objects:</p>



<p>But progress is being made beyond the work X and Google are doing. For example, Boston Dynamics (formerly owned by Google) released this video in 2018 of its SpotMini robot grabbing a doorknob to open a door for a friend:</p>



<p>And Google research from this March showed off a robot that could pick up objects and, over time, learn the best way to throw a specific shape:</p>



<p>Despite all this research, Google and Alphabet have a troubled history with robotics. Google’s last serious attempt at robotics work started in 2013 in a division led by Android co-founder Andy Rubin. Though that division made some high-profile acquisitions, including Boston Dynamics, nothing concrete came from it, and Rubin departed from Google in 2014 following allegations of sexual harassment. Google is apparently dipping its toes back into robotics, though, based on a report from March of this year, and its new robots are also learning how to grab, but it seems Google’s work is different from Everyday Robot’s.</p>



<p>Everyday Robot lead Hans Peter Brondmo told <em>Wired</em> that he hopes to one day make a robot that can assist the elderly. But he also acknowledged something like that might be a few years out — so for now, it seems the robots will keep getting better at sorting trash.</p>
<p>The post <a href="https://www.aiuniverse.xyz/alphabet-xs-new-everyday-robot-project-wants-to-build-robots-that-can-learn-from-the-world-around-them/">Alphabet X’s new Everyday Robot project wants to build robots that can learn from the world around them</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/alphabet-xs-new-everyday-robot-project-wants-to-build-robots-that-can-learn-from-the-world-around-them/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Why philosophers believe we’ve reached peak human intelligence</title>
		<link>https://www.aiuniverse.xyz/why-philosophers-believe-weve-reached-peak-human-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/why-philosophers-believe-weve-reached-peak-human-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 18 Nov 2019 05:55:39 +0000</pubDate>
				<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[Global challenges]]></category>
		<category><![CDATA[IT skills]]></category>
		<category><![CDATA[Software technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5236</guid>

					<description><![CDATA[<p>Source:-thenextweb.com Despite huge advances in science over the past century, our understanding of nature is still far from complete. Not only have scientists failed to find the Holy Grail of physics – unifying the very large (general relativity) with the very small (quantum mechanics) – they still don’t know what the vast majority of the universe is made up of. The sought <a class="read-more-link" href="https://www.aiuniverse.xyz/why-philosophers-believe-weve-reached-peak-human-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/why-philosophers-believe-weve-reached-peak-human-intelligence/">Why philosophers believe we’ve reached peak human intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: thenextweb.com<br></p>



<p>Despite huge advances in science over the past century, our understanding of nature is still far from complete. Not only have scientists failed to find the Holy Grail of physics – unifying the very large (general relativity) with the very small (quantum mechanics) – they still don’t know what the vast majority of the universe is made up of. The sought-after Theory of Everything continues to elude us. And there are other outstanding puzzles, too, such as how consciousness arises from mere matter.</p>



<p>Will science ever be able to provide all the answers? Human brains are the product of blind and unguided evolution. They were designed to solve practical problems impinging on our survival and reproduction, not to unravel the fabric of the universe. This realization has led some philosophers to embrace a curious form of pessimism, arguing there are bound to be things we will never understand. Human science will therefore one day hit a hard limit – and may already have done so.</p>



<p>Some questions may be doomed to remain what the American linguist and philosopher Noam Chomsky called “mysteries”. If you think that humans alone have unlimited cognitive powers – setting us apart from all other animals – you have not fully digested Darwin’s insight that <em>Homo sapiens</em> is very much part of the natural world.</p>



<p>But does this argument really hold up? Consider that human brains did not evolve to discover their own origins either. And yet somehow we managed to do just that. Perhaps the pessimists are missing something.</p>



<h2 class="wp-block-heading">Mysterian arguments</h2>



<p>“Mysterian” thinkers give a prominent role to biological arguments and analogies. In his 1983 landmark book <em>The Modularity of Mind</em>, the late philosopher Jerry Fodor claimed that there are bound to be “thoughts that we are unequipped to think”.</p>



<p>Similarly, the philosopher Colin McGinn has argued in a series of books and articles that all minds suffer from “cognitive closure” with respect to certain problems. Just as dogs or cats will never understand prime numbers, human brains must be closed off from some of the world’s wonders. McGinn suspects that the reason why philosophical conundrums such as the mind/body problem – how physical processes in our brain give rise to consciousness – prove to be intractable is that their true solutions are simply inaccessible to the human mind.</p>



<p>If McGinn is right that our brains are simply not equipped to solve certain problems, there is no point in even trying, as they will continue to baffle and bewilder us. McGinn himself is convinced that there is, in fact, a perfectly natural solution to the mind–body problem, but that human brains will never find it.</p>



<p>Even the psychologist Steven Pinker, someone who is often accused of scientific hubris himself, is sympathetic to the argument of the mysterians. If our ancestors had no need to understand the wider cosmos in order to spread their genes, he argues, why would natural selection have given us the brainpower to do so?</p>



<h2 class="wp-block-heading">Mind-boggling theories</h2>



<p>Mysterians typically present the question of cognitive limits in stark, black-or-white terms: either we can solve a problem, or it will forever defy us. Either we have cognitive access or we suffer from closure. At some point, human inquiry will suddenly slam into a metaphorical brick wall, after which we will be forever condemned to stare in blank incomprehension.</p>



<p>Another possibility, however, which mysterians often overlook, is one of slowly diminishing returns. Reaching the limits of inquiry might feel less like hitting a wall than getting bogged down in a quagmire. We keep slowing down, even as we exert more and more effort, and yet there is no discrete point beyond which any further progress at all becomes impossible.</p>



<p>There is another ambiguity in the thesis of the mysterians, which my colleague Michael Vlerick and I have pointed out in an academic paper. Are the mysterians claiming that we will never find the true scientific theory of some aspect of reality, or alternatively, that we may well find this theory but will never truly comprehend it?</p>



<p>In the science fiction series The Hitchhiker’s Guide to The Galaxy, an alien civilization builds a massive supercomputer to calculate the Answer to the Ultimate Question of Life, the Universe and Everything. When the computer finally announces that the answer is “42”, no one has a clue what this means (in fact, they go on to construct an even bigger supercomputer to figure out precisely this).</p>



<p>Is a question still a “mystery” if you have arrived at the correct answer, but you have no idea what it means or cannot wrap your head around it? Mysterians often conflate those two possibilities.</p>



<p>In some places, McGinn suggests that the mind–body problem is inaccessible to human science, presumably meaning that we will never find the true scientific theory describing the mind–body nexus. At other moments, however, he writes that the problem will always remain “numbingly difficult to make sense of” for human beings, and that “the head spins in theoretical disarray” when we try to think about it.</p>



<p>This suggests that we may well arrive at the true scientific theory, but it will have a 42-like quality to it. But then again, some people would argue that this is already true of a theory like quantum mechanics. Even the quantum physicist Richard Feynman admitted, “I think I can safely say that nobody understands quantum mechanics.”</p>



<p>Would the mysterians say that we humans are “cognitively closed” to the quantum world? According to quantum mechanics, particles can be in two places at once, or randomly pop out of empty space. While this is extremely hard to make sense of, quantum theory leads to incredibly accurate predictions. The phenomena of “quantum weirdness” have been confirmed by several experimental tests, and scientists are now also creating applications based on the theory.</p>



<p>Mysterians also tend to forget how mind-boggling some earlier scientific theories and concepts were when initially proposed. Nothing in our cognitive make-up prepared us for relativity theory, evolutionary biology or heliocentrism.</p>



<p>As the philosopher Robert McCauley writes: “When first advanced, the suggestions that the Earth moves, that microscopic organisms can kill human beings, and that solid objects are mostly empty space were no less contrary to intuition and common sense than the most counterintuitive consequences of quantum mechanics have proved for us in the twentieth century.” McCauley’s astute observation provides reason for optimism, not pessimism.</p>



<h2 class="wp-block-heading">Mind extensions</h2>



<p>But can our puny brains really answer all conceivable questions and understand all problems? This depends on whether we are talking about bare, unaided brains or not. There are a lot of things you can’t do with your naked brain. But&nbsp;<em>Homo sapiens</em>&nbsp;is a tool-making species, and this includes a range of cognitive tools.</p>



<p>For example, our unaided sense organs cannot detect UV-light, ultrasound waves, X-rays or gravitational waves. But if you’re equipped with some fancy technology you&nbsp;<em>can</em>&nbsp;detect all those things. To overcome our perceptual limitations, scientists have developed a suite of tools and techniques: microscopes, X-ray film, Geiger counters, radio satellite detectors and so forth.</p>



<p>All these devices extend the reach of our minds by “translating” physical processes into some format that our sense organs can digest. So are we perceptually “closed” to UV light? In one sense, yes. But not if you take into account all our technological equipment and measuring devices.</p>



<p>In a similar way, we use physical objects (such as paper and pencil) to vastly increase the memory capacity of our naked brains. According to the British philosopher Andy Clark, our minds quite literally extend beyond our skins and skulls, in the form of notebooks, computer screens, maps and file drawers.</p>



<p>Mathematics is another fantastic mind-extension technology, which enables us to represent concepts that we couldn’t think of with our bare brains. For instance, no scientist could hope to form a mental representation of all the complex interlocking processes that make up our climate system. That’s exactly why we have constructed mathematical models and computers to do the heavy lifting for us.</p>



<h2 class="wp-block-heading">Cumulative knowledge</h2>



<p>Most importantly, we can extend our own minds to those of our fellow human beings. What makes our species unique is that we are capable of culture, in particular cumulative cultural knowledge. A population of human brains is much smarter than any individual brain in isolation.</p>



<p>And the collaborative enterprise par excellence is science. It goes without saying that no single scientist would be capable of unravelling the mysteries of the cosmos on her own. But collectively, they do. As Isaac Newton wrote, he could see further by “standing on the shoulders of giants”. By collaborating with their peers, scientists can extend the scope of their understanding, achieving much more than any of them would be capable of individually.</p>



<p>Today, fewer and fewer people understand what is going on at the cutting edge of theoretical physics – even physicists. The unification of quantum mechanics and relativity theory will undoubtedly be exceptionally daunting, or else scientists would have nailed it long ago.</p>



<p>The same is true for our understanding of how the human brain gives rise to consciousness, meaning and intentionality. But is there any good reason to suppose that these problems will forever remain out of reach? Or that our sense of bafflement when thinking of them will never diminish?</p>



<p>In a public debate I moderated a few years ago, the philosopher Daniel Dennett pointed out a very simple objection to the mysterians’ analogies with the minds of other animals: other animals cannot even understand the questions. Not only will a dog never figure out if there’s a largest prime, but it will never even understand the question. By contrast, human beings can pose questions to each other and to themselves, reflect on these questions, and in doing so come up with ever better and more refined versions.</p>



<p>Mysterians are inviting us to imagine the existence of a class of questions that are themselves perfectly comprehensible to humans, but the answers to which will forever remain out of reach. Is this notion really plausible (or even coherent)?</p>



<h2 class="wp-block-heading">Alien anthropologists</h2>



<p>To see how these arguments come together, let’s do a thought experiment. Imagine that some extraterrestrial “anthropologists” had visited our planet around 40,000 years ago to prepare a scientific report about the cognitive potential of our species. Would this strange, naked ape ever find out about the structure of its solar system, the curvature of space-time or even its own evolutionary origins?</p>



<p>At that moment in time, when our ancestors were living in small bands of hunter-gatherers, such an outcome may have seemed quite unlikely. Although humans possessed quite extensive knowledge about the animals and plants in their immediate environment, and knew enough about the physics of everyday objects to know their way around and come up with some clever tools, there was nothing resembling scientific activity.</p>



<p>There was no writing, no mathematics, no artificial devices for extending the range of our sense organs. As a consequence, almost all of the beliefs held by these people about the broader structure of the world were completely wrong. Human beings didn’t have a clue about the true causes of natural disaster, disease, heavenly bodies, the turn of the seasons or almost any other natural phenomenon.</p>



<p>Our extraterrestrial anthropologist might have reported the following:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>Evolution has equipped this upright, walking ape with primitive sense organs to pick up some information that is locally relevant to them, such as vibrations in the air (caused by nearby objects and persons) and electromagnetic waves within the 400-700 nanometer range, as well as certain larger molecules dispersed in their atmosphere.</p></blockquote>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>However, these creatures are completely oblivious to anything that falls outside their narrow perceptual range. Moreover, they can’t even see most of the single-cell life forms in their own environment, because these are simply too small for their eyes to detect. Likewise, their brains have evolved to think about the behavior of medium-sized objects (mostly solid) under conditions of low gravity.</p><p>None of these earthlings has ever escaped the gravitational field of their planet to experience weightlessness, or been artificially accelerated so as to experience stronger gravitational forces. They can’t even conceive of space-time curvature, since evolution has hard-wired zero-curvature geometry of space into their puny brains.</p><p>In conclusion, we’re sorry to report that most of the cosmos is simply beyond their ken.</p></blockquote>



<p>But those extraterrestrials would have been dead wrong. Biologically, we are no different than we were 40,000 years ago, but now we know about bacteria and viruses, DNA and molecules, supernovas and black holes, the full range of the electromagnetic spectrum and a wide array of other strange things.</p>



<p>We also know about non-Euclidean geometry and space-time curvature, courtesy of Einstein’s general theory of relativity. Our minds have “reached out” to objects millions of light years away from our planet, and also to extremely tiny objects far below the perceptual limits of our sense organs. By using various tricks and tools, humans have vastly extended their grasp on the world.</p>



<h2 class="wp-block-heading">The verdict: biology is not destiny</h2>



<p>The thought experiment above should be a counsel against pessimism about human knowledge. Who knows what other mind-extending devices we will hit upon to overcome our biological limitations? Biology is not destiny. If you look at what we have already accomplished in the span of a few centuries, any rash pronouncements about cognitive closure seem highly premature.</p>



<p>Mysterians often pay lip service to the values of “humility” and “modesty”, but on closer examination, their position is far less restrained than it appears. Take McGinn’s confident pronouncement that the mind–body problem is “an ultimate mystery” that we will “never unravel”. In making such a claim, McGinn assumes knowledge of three things: the nature of the mind–body problem itself, the structure of the human mind, and the reason why never the twain shall meet. But McGinn offers only a superficial overview of the science of human cognition, and pays little or no attention to the various devices for mind extension.</p>



<p>I think it’s time to turn the tables on the mysterians. If you claim that some problem will forever elude human understanding, you have to show in some detail why no possible combination of mind extension devices will bring us any closer to a solution. That is a taller order than most mysterians have acknowledged.</p>



<p>Moreover, by spelling out exactly why some problems will remain mysterious, mysterians risk being hoisted by their own petard. As Dennett wrote in his latest book: “As soon as you frame a question that you claim we will never be able to answer, you set in motion the very process that might well prove you wrong: you raise a topic of investigation.”</p>



<p>In one of his infamous memos on Iraq, former US secretary of defense Donald Rumsfeld made a distinction between two forms of ignorance: the “known unknowns” and the “unknown unknowns”. In the first category belong the things that we know we don’t know. We can frame the right questions, but we haven’t found the answers yet. And then there are the things that “we don’t know we don’t know”. For these unknown unknowns, we can’t even frame the questions yet.</p>



<p>It is quite true that we can never rule out the possibility that there are such unknown unknowns, and that some of them will forever remain unknown, because for some (unknown) reason human intelligence is not up to the task.</p>



<p>But the important thing to note about these unknown unknowns is that nothing can be said about them. To presume from the outset that some unknown unknowns will always remain unknown, as mysterians do, is not modesty – it’s arrogance.</p>
<p>The post <a href="https://www.aiuniverse.xyz/why-philosophers-believe-weve-reached-peak-human-intelligence/">Why philosophers believe we’ve reached peak human intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/why-philosophers-believe-weve-reached-peak-human-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Firms Turn to Big Data to Find Deals</title>
		<link>https://www.aiuniverse.xyz/firms-turn-to-big-data-to-find-deals/</link>
					<comments>https://www.aiuniverse.xyz/firms-turn-to-big-data-to-find-deals/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 01 Oct 2018 05:46:52 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Data Science]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[business intelligence]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[DealCloud]]></category>
		<category><![CDATA[Software technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2941</guid>

					<description><![CDATA[<p>Source- penews.com Private-equity firms have long relied on human connections to find deals, sending partners to meet thousands of executives every year and turning to their professional networks for investment ideas. But they are increasingly applying data science and algorithms to drive their investments. They’re using software to scrutinise companies’ strengths and weaknesses to spot <a class="read-more-link" href="https://www.aiuniverse.xyz/firms-turn-to-big-data-to-find-deals/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/firms-turn-to-big-data-to-find-deals/">Firms Turn to Big Data to Find Deals</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
<content:encoded><![CDATA[<p>Source: <a href="http://penews.com" target="_blank" rel="noopener">penews.com</a></p>
<div class="module">
<div class="zonedModule" data-module-id="12" data-module-name="article.app/lib/module/articleBody" data-module-zone="article_body">
<div id="pen-article-wrap" class="article-wrap" data-sbid="PN5000002808">
<p>Private-equity firms have long relied on human connections to find deals, sending partners to meet thousands of executives every year and turning to their professional networks for investment ideas.</p>
<p>But they are increasingly applying data science and algorithms to drive their investments. They’re using software to scrutinise companies’ strengths and weaknesses to spot potential investments. They are deciding what to pay for businesses by analysing their past bids. And they’re looking to use programs to figure out how companies will perform down the road.</p>
<div class="paywall">
<p>“We as an industry spend a lot of time manually gathering data and manually doing predictions. And of course that’s better done by technology,” says Olof Hernell, chief digital officer at Swedish private-equity firm EQT.</p>
<p><strong>Starting out</strong><br />
Private-equity firms are late to the party in their use of high-tech analysis. Venture-capital firms, which have deep roots in the tech business and Silicon Valley, have taken big strides in incorporating data science and predictive analytics into investment decisions.</p>
<p>Yet about 75% of private-equity firms surveyed in 2017 said they struggled to gauge how using data science could affect the value of portfolio companies, according to Ernst &amp; Young LLP.</p>
<p>Now that capital has poured into private equity and competition for investments has intensified, these firms are starting to turn to data analytics to gain an edge in finding deals or ruling them out. About 94% of firms expect to use more predictive analytics within the next two years, according to Ernst &amp; Young.</p>
<p>Some of the biggest firms in the field are getting into analytics. Publicly traded Blackstone Group, for instance, disclosed on a February earnings call that it has built its own data platform to help inform its investment decision-making.</p>
<p>Using data science has already proved to be useful for other firms. Midmarket buyout firm Falfurrias Capital Partners developed its internal data-science platform, DealCloud, in 2010 before spinning it out as a separate company and selling it to software provider Intapp Inc.</p>
<p>Falfurrias, of Charlotte, N.C., uses data analytics for industry research or to find smaller businesses that it can acquire and merge into large companies it already owns.</p>
<p>For instance, after Falfurrias officials met executives of SixAxis LLC in 2016, the private-equity firm used DealCloud to track the manufacturing-technology company’s performance. Falfurrias opted to acquire the business last year.</p>
<p>When Falfurrias-backed Marquis Software Solutions Inc. bought DocuMatrix Inc. in 2017, Falfurrias used DealCloud to analyse the financial-software industry and measure DocuMatrix’s strengths against those of its competitors. Falfurrias had already spotted DocuMatrix as an acquisition target, but DealCloud enabled the firm to better grasp its business model and its fit with Marquis.</p>
<p>In some cases, Falfurrias inputs data on its past bids and uses the analytics programme to decide how much it should pay for assets, the firm said. “Relationships are a big part of [private equity] and investment banking, but if you can overlay that with data, that becomes so much more powerful,” says Rob Cummings, chief technology officer and managing director at Falfurrias.</p>
<p><strong>Broader applications</strong><br />
Mr. Cummings sees broader applications for the technology. The firm eventually wants to apply the data to predict company performance and valuations to aid in selecting investment targets, he says.</p>
<p>EQT first launched its Motherbrain software platform roughly three years ago within its EQT Ventures division, a venture-capital unit that invests in young technology-driven companies, primarily in Europe. Today, nearly every investment EQT Ventures makes is at least partly sourced by Motherbrain. EQT’s private-equity arm began to use the software more recently.</p>
<p>The platform combines information EQT has gathered about thousands of companies on its own with data from a range of traditional sources, including industry research reports and company websites to find patterns among successful companies.</p>
<p>The software then uses an algorithm to identify businesses with desirable characteristics—such as a solid trajectory of growth and an ability to establish itself as a market leader—that executives can then contact. It also allows the firm to prioritise its pipeline of potential investment targets, Mr. Hernell says.</p>
<p>“Out of all the companies you could potentially look at, it’s pretty likely that we would like this company based on our [learning] and decisions we have made on the thousands of companies that we have already assessed,” he said.</p>
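<p>EQT has not published Motherbrain’s internals, but the ranking step described above (score candidates on traits such as growth trajectory and market leadership, then prioritise the pipeline) can be illustrated with a toy sketch. The company fields, weights, and numbers below are all invented for illustration.</p>

```python
from dataclasses import dataclass

@dataclass
class Company:
    name: str
    revenue_growth: float   # year-over-year, e.g. 0.4 = 40% growth
    market_share: float     # 0..1 within the company's segment

def desirability(c: Company, growth_weight: float = 0.6, share_weight: float = 0.4) -> float:
    # Weighted score over the two traits the article mentions:
    # a solid growth trajectory and an ability to lead its market.
    return growth_weight * c.revenue_growth + share_weight * c.market_share

def prioritise(pipeline):
    # Rank the deal pipeline so executives contact the best-scoring fits first.
    return sorted(pipeline, key=desirability, reverse=True)

pipeline = [
    Company("SlowCo", 0.05, 0.10),
    Company("FastCo", 0.80, 0.30),
    Company("LeaderCo", 0.20, 0.60),
]
ranked = prioritise(pipeline)
```

<p>A real system would learn these weights from the thousands of past assessments the article mentions rather than hard-coding them; the sketch only shows how scoring turns scattered company data into an ordered pipeline.</p>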
<p>Still, widespread adoption of the technology across the private-equity industry may require a shift in mind-set. Some private-equity professionals say they see the value of integrating data analytics into their workflow, but question the necessity of investing in the technological infrastructure when they’ve had success by using their instinct and connections.</p>
<p>“This is a journey that we’re just beginning, and while we’ve got a team, it’ll take more investment in the team,” Tony James, executive vice chairman of Blackstone, said during the company’s February earnings call. “It’ll take somewhat of a cultural change,” Mr. James said. “It’ll take education around our people.”</p>
<p>Some private-equity investors say that finding the right balance between human judgement and machine learning will take time.</p>
<p>“I’ve never viewed it as an either-or,” says Eric Bradlow, a professor at the Wharton School of the University of Pennsylvania who founded data-research centre Wharton Customer Analytics Initiative in 2006.</p>
<p>“Analytics is a form of business intelligence,” he said.</p>
</div>
</div>
</div>
</div>
<p>The post <a href="https://www.aiuniverse.xyz/firms-turn-to-big-data-to-find-deals/">Firms Turn to Big Data to Find Deals</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/firms-turn-to-big-data-to-find-deals/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
	</channel>
</rss>
