<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Machine Learning models Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/machine-learning-models/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/machine-learning-models/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Mon, 25 Nov 2019 05:23:03 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Robot debates humans about the dangers of artificial intelligence</title>
		<link>https://www.aiuniverse.xyz/robot-debates-humans-about-the-dangers-of-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/robot-debates-humans-about-the-dangers-of-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 25 Nov 2019 05:23:02 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[IBM]]></category>
		<category><![CDATA[Machine Learning models]]></category>
		<category><![CDATA[robot]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5382</guid>

					<description><![CDATA[<p>Source: newscientist.com An artificial intelligence has debated with humans about the dangers of AI – narrowly convincing audience members that AI will do more good than harm. <a class="read-more-link" href="https://www.aiuniverse.xyz/robot-debates-humans-about-the-dangers-of-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/robot-debates-humans-about-the-dangers-of-artificial-intelligence/">Robot debates humans about the dangers of artificial intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: newscientist.com</p>



<p>An artificial intelligence has debated with humans about the dangers of AI – narrowly convincing audience members that AI will do more good than harm.</p>



<p>Project Debater, a robot developed by IBM, debated on both sides of the argument, with two human teammates for each side helping it out. Speaking in a female American voice to a crowd at the University of Cambridge Union on Thursday evening, the AI gave each side’s opening statements, using arguments drawn from more than 1100 human submissions made ahead of time.</p>



<p>On the proposition side, arguing that AI will bring more harm than good, Project Debater’s opening remarks were darkly ironic. “AI can cause a lot of harm,” it said. “AI will not be able to make a decision that is the morally correct one, because morality is unique to humans.”</p>



<p>“AI companies still have too little expertise on how to properly assess datasets and filter out bias,” it added. “AI will take human bias and will fixate it for generations.”</p>



<p>The AI used an application known as “speech by crowd” to generate its arguments, analysing arguments people sent in online. Project Debater then sorted the submissions into key themes, as well as identifying redundancy: submissions that made the same point using different words.</p>
<p><em>[Image: Project Debater summarised arguments put forward by humans]</em></p>
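<p>The “identify redundancy” step can be sketched in a few lines. The following is a purely illustrative stand-in (Project Debater’s actual pipeline is not public): it groups submissions whose word overlap, measured as Jaccard similarity, exceeds a threshold. The sample submissions and the 0.5 threshold are made up.</p>

```python
# Illustrative sketch of grouping near-duplicate submissions, i.e. the
# "same point in different words" redundancy step described above.

def jaccard(a: str, b: str) -> float:
    """Similarity between two submissions as overlap of their word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def group_redundant(submissions, threshold=0.5):
    """Greedily cluster submissions whose similarity exceeds the threshold."""
    groups = []
    for s in submissions:
        for g in groups:
            if jaccard(s, g[0]) >= threshold:
                g.append(s)   # same point, different words
                break
        else:
            groups.append([s])  # a genuinely new point
    return groups

subs = [
    "AI will take jobs away from humans",
    "ai will take many jobs away from humans",
    "morality is unique to humans",
]
print(group_redundant(subs))
```

<p>A production system would use something stronger than word overlap (sentence embeddings, say), but the grouping logic has the same shape.</p>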






<p>The AI was coherent in its arguments but had a few slip-ups. Sometimes it repeated itself – while talking about the ability of AI to perform mundane and repetitive tasks, for example – and it didn’t provide detailed examples to support its claims.</p>



<p>While debating on the opposition side, which was advocating for the overall benefits of AI, Project Debater argued that AI would create new jobs in certain sectors and “bring a lot more efficiency to the workplace”.</p>



<p>But then it made a point that was counter to its argument: “AI capabilities caring for patients or robots teaching school children – there is no longer a demand for humans in those fields either.”</p>



<p>The opposition team narrowly won, taking 51.22 per cent of the audience vote.</p>



<p>Project Debater argued with humans for the first time last year, and in February lost in a one-on-one against champion debater Harish Natarajan, who also spoke at Cambridge as the third speaker for the opposition team.</p>



<p>IBM has plans to use the speech-by-crowd AI as a tool for collecting feedback from large numbers of people. For instance, it could be used by governments seeking opinions about policies from constituents, or by companies wanting input from employees, said IBM engineer Noam Slonim.</p>



<p>“This technology can help to establish an interesting and effective communication channel between the decision maker and the people that are going to be impacted by the decision,” he said.</p>
<p>The post <a href="https://www.aiuniverse.xyz/robot-debates-humans-about-the-dangers-of-artificial-intelligence/">Robot debates humans about the dangers of artificial intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/robot-debates-humans-about-the-dangers-of-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Data scientist remains at top of ‘most wanted’ lists</title>
		<link>https://www.aiuniverse.xyz/data-scientist-remains-at-top-of-most-wanted-lists/</link>
					<comments>https://www.aiuniverse.xyz/data-scientist-remains-at-top-of-most-wanted-lists/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 21 Jul 2017 07:34:47 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[chief technology officer]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[Data scientist]]></category>
		<category><![CDATA[Machine Learning models]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=201</guid>

					<description><![CDATA[<p>Source &#8211; information-management.com Reports to: Depending on the organizational structure of the business in which you as a data scientist find yourself, you may be reporting into a <a class="read-more-link" href="https://www.aiuniverse.xyz/data-scientist-remains-at-top-of-most-wanted-lists/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/data-scientist-remains-at-top-of-most-wanted-lists/">Data scientist remains at top of ‘most wanted’ lists</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211;<strong> information-management.com</strong></p>
<p><b>Reports to:</b> Depending on the organizational structure of the business in which you as a data scientist find yourself, you may be reporting into a lead data scientist, a principal, a chief technology officer, a chief data officer or, in some start-up organizations, perhaps the CEO.</p>
<p>As with your responsibilities, the organizational structure around you will be dictated by the needs of that company, and there is no real universal pattern when talking about who data scientists would report into. You may work individually, as part of a team of other data scientists, or in many examples, as part of a wider software engineering team, again depending on the aim of that company or data science function.</p>
<div class="mod-enhancement e-full-width">
<figure class="image"><img decoding="async" src="https://assets.sourcemedia.com/dims4/default/ff75b8a/2147483647/resize/680x%3E/quality/90/?url=https%3A%2F%2Fassets.sourcemedia.com%2Fce%2F9a%2F36cd3f2c4015a33c18ad171e2421%2Ftech-worker-two.jpg" alt="" /><figcaption>
<h5></h5>
</figcaption></figure>
</div>
<p><b>Demand for this role:</b> Having been widely considered the “number one technology job” and “America’s most sexy occupation” for the past two years, the data scientist really is at the forefront of technology, and with over three times more advertised opportunities than qualified candidates, there is quite simply no other skill set in such high demand.</p>
<p><b>Top industries hiring for this job:</b> Data science is revolutionizing every industry, from finance to healthcare, media to advertising, the start-up world to global corporates and everything in between, and it’s no surprise: the value that data science can bring to any business is immeasurable, and it’s certainly an exciting area of technology to be involved with.</p>
<p><b>Responsibilities with this job:</b> A typical data scientist combines the ability to build and engineer machine learning models with the application of advanced mathematics and statistics. It is this combination of engineering and statistics that separates the data scientist from a software engineer and a statistician respectively.</p>
<p><b>Required background for this job:</b> Within the data science space, there is often a specific set of requirements, both academic and technology-wise, that most data scientists will universally have and use on a day-to-day basis. Academia is key in this area, with most companies looking for a minimum of a Master’s degree in a quantitative field, such as but not limited to computer science, physics, mathematics and statistics.</p>
<p>Many employers will only consider candidates with a PhD, though this hiring trend is slowly fading and a PhD is becoming less essential. Regarding technologies, Python and R are far and away the most popular and in-demand technologies for top data scientists. Many organizations also look for strong C++ skills as part of a candidate’s portfolio, while exposure to big data technologies such as Hive, Hadoop and Spark is always a plus but not always necessary.</p>
<p><b>Skills required for this job (technical, business and personal):</b> On the ‘softer’ side, successful data scientists need to be passionate and forward-thinking, and an interest in research is often a deciding factor for businesses hiring these types of candidates.</p>
<p>Data scientists should always be looking for new ways of approaching tasks or business issues and exploring emerging technologies. Many organizations will look for code examples, such as GitHub or StackOverflow profiles or publications, as well as an updated resume, so a strong online profile and a record of projects will add huge weight to any application for positions in this space.</p>
<p><b>Compensation potential for this job:</b> Starting salaries for a fresh PhD or Master’s degree candidate can fetch $110,000 to $120,000 per year in New York City, and salaries of $200,000-plus are not unheard of for strong data scientists with between five and eight years’ experience.</p>
<p><b>Success in this role defined by:</b> Data science can be applied to multiple industries in a wide range of ways for different purposes, so a data scientist’s role or responsibilities will differ immensely depending on the industry. Take investment banking, for example: a data scientist may be hired to build machine learning models to predict potential investment targets for large financial reward, while a data scientist in the pharmaceutical space may be tasked with predicting successful new drug discoveries to fight disease, which in turn would be different from a data scientist predicting the success of a marketing campaign for an AdTech business.</p>
<p><b>Advancement opportunities for this job:</b> The potential of data science across every industry is unprecedented, and the role of candidates in this space can differ drastically and reap rewards in multiple different ways. The general role of a data scientist will, to some degree, be similar in each industry, i.e. building machine learning models for predictive analytics. However, the way in which that model is applied to each business will be hugely dependent on the industry and aim of the organization.</p>
<p>The post <a href="https://www.aiuniverse.xyz/data-scientist-remains-at-top-of-most-wanted-lists/">Data scientist remains at top of ‘most wanted’ lists</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/data-scientist-remains-at-top-of-most-wanted-lists/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>The hidden horse power driving Machine Learning models</title>
		<link>https://www.aiuniverse.xyz/the-hidden-horse-power-driving-machine-learning-models/</link>
					<comments>https://www.aiuniverse.xyz/the-hidden-horse-power-driving-machine-learning-models/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 18 Jul 2017 07:53:35 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[cloud]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[machine learning methods]]></category>
		<category><![CDATA[Machine Learning models]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=157</guid>

					<description><![CDATA[<p>Source &#8211; theregister.co.uk Machine Learning is becoming the only real available method to perform many modern computational tasks in near real time. Machine Vision, speech recognition and natural <a class="read-more-link" href="https://www.aiuniverse.xyz/the-hidden-horse-power-driving-machine-learning-models/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-hidden-horse-power-driving-machine-learning-models/">The hidden horse power driving Machine Learning models</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>theregister.co.uk</strong></p>
<p>Machine learning is becoming the only realistically available method to perform many modern computational tasks in near real time. Machine vision, speech recognition and natural language processing have all proved difficult to crack without ML techniques.</p>
<p>When it comes to hardware, the tasks themselves do not need a great deal of computational power, but training the machine does – not to mention an awful lot of data. In the machine learning world, the more data you have, the more accurate your ML model can be. Of course, the more data you have, the longer the training process will take.</p>
<h3 class="crosshead">Teaching a machine to fish</h3>
<p>Take for example the Imagenet 2012 dataset, a set of test images that can be used for evaluating machine learning methods. This data set is a mere 138GB that I’m told will take one day to download to my machine. Once I have the data, it can be fed into the training phase of a machine learning algorithm &#8211; typically an iterative process where the model is trained with data, tested, the parameters tweaked and then the model is trained again.</p>
<p>Each iteration is termed an &#8220;epoch&#8221; in machine learning terms. For an idea of how long this can take, let’s turn to the Tensorflow program I built for learning movie recommendations. In around 100 epochs it will typically learn fairly good movie recommendations. Development was carried out on a dataset of 100,000 ratings, and training on that can realistically be carried out on a MacBook Pro in less than an hour. Once you move up to the full 24 million ratings, however, the training time per epoch moves to more than 140 seconds, and the full run takes around nine days on the MacBook Pro. This really isn’t viable.</p>
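<p>To make &#8220;epoch&#8221; concrete, here is a toy, from-scratch sketch of the kind of model involved – matrix factorisation trained by stochastic gradient descent – where one epoch is a full pass over the ratings, so per-epoch time grows with dataset size. This is not the author’s Tensorflow program; the synthetic ratings, latent dimension and learning rate are all made up for illustration.</p>

```python
import random

# One "epoch" = one SGD pass over every (user, item, rating) triple.
# Per-epoch cost is proportional to the number of ratings, which is why
# going from 100,000 to 24 million ratings blows up the training time.

random.seed(0)
K = 4  # latent factors per user/item (made-up choice)
ratings = [(u, i, 3.0 + (u + i) % 3)        # synthetic ratings in {3,4,5}
           for u in range(20) for i in range(10)]
P = {u: [random.gauss(0, 0.1) for _ in range(K)] for u in range(20)}
Q = {i: [random.gauss(0, 0.1) for _ in range(K)] for i in range(10)}
lr = 0.01

def epoch():
    """One full pass over the ratings; returns the mean squared error."""
    sse = 0.0
    for u, i, r in ratings:
        pred = sum(pu * qi for pu, qi in zip(P[u], Q[i]))
        err = r - pred
        sse += err * err
        for k in range(K):  # gradient step on both factor vectors
            P[u][k], Q[i][k] = (P[u][k] + lr * err * Q[i][k],
                                Q[i][k] + lr * err * P[u][k])
    return sse / len(ratings)

errors = [epoch() for _ in range(100)]
print(f"MSE epoch 1: {errors[0]:.3f}, epoch 100: {errors[-1]:.3f}")
```

<p>The error falls epoch by epoch, which is exactly the train-test-tweak loop described above, just without the tweaking.</p>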
<h3 class="crosshead">There&#8217;s a, er, cloud on the horizon&#8230;</h3>
<p>Something needs to be done. Maybe we could move this problem into the cloud and let the big boys with their big machines take over. The problem is moving your data into the cloud. For universities and the likes of Google, this isn’t really a problem, providing you’ve got access to end-to-end fast networks. Universities in Britain are all connected over the Janet network, whose backbone runs at 100Gbps, more than enough to shift large datasets around. Google, of course, has its own dark net, but what if we want to move data out of our walled garden and onto a public cloud ML system?</p>
<p>This was just the problem we faced a few years back at Dundee University when trying to use Microsoft’s Azure to process Mass Spectrometer data. These files were fairly big &#8211; a few gigabytes in size &#8211; but we were hoping to process lots of them in near real time. Sadly, we just couldn’t get the data into the cloud fast enough.</p>
<blockquote class="centredquote"><p>Companies are starting to offer hardware that can be situated close to the data production (in terms of network speed) for machine learning. These appliances employ GPU to speed up the math needed of machine learning.</p></blockquote>
<p>This is why Amazon released Snowball: essentially a box of hard disks delivered to your door, which you fill with data (at 10Gbps) and return to be loaded into AWS. It is remarkably cheap &#8211; around $200 per job, with 10 days to fill the disks with your data &#8211; but it is not real time.</p>
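<p>Rough transfer-time arithmetic shows why shipping disks can beat the network. The ~13 Mbps home-link figure below is an assumption reverse-engineered from the one-day Imagenet download quoted earlier; the 10Gbps figure is the Snowball load rate mentioned above.</p>

```python
# Back-of-the-envelope transfer times for the 138 GB Imagenet 2012 set.

def transfer_seconds(gigabytes: float, megabits_per_sec: float) -> float:
    """Time to move `gigabytes` of data over a link of the given speed."""
    return gigabytes * 8_000 / megabits_per_sec  # 1 GB = 8,000 megabits

home = transfer_seconds(138, 13)          # assumed ~13 Mbps home link
snowball = transfer_seconds(138, 10_000)  # 10 Gbps Snowball load rate
print(f"home link: {home / 86_400:.1f} days, 10 Gbps: {snowball:.0f} s")
```

<p>At 10Gbps the same dataset loads in under two minutes; the slow part of Snowball is the courier, not the copy.</p>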
<h3 class="crosshead">Hardware</h3>
<p>It is for this reason that companies are starting to offer hardware that can be situated close to the data production (in terms of network speed) for machine learning. These appliances employ GPUs to speed up the maths needed for machine learning. Essentially, the mathematics performed to produce the fast action in a video game is similar to that done in machine learning (and other fields); you just want to do lots of sums in parallel on fairly small chunks of data.</p>
<p>There are a number of manufacturers of GPU cards – Nvidia, AMD and Intel’s Xeon – but it is only really Nvidia that has grasped the ML nettle and made it easy to do deep learning on its cards.</p>
<p>So what sort of speed-up can you get? I used an Nvidia Titan X and set it up to run the film recommendation engine with the large dataset. The training time per epoch fell from 140 seconds to around 11, making training this large dataset a realistic proposition. A Titan GPU card will set you back around £900, plus the cost of the machine to run it in. You will still have the problem of getting the data onto the machine, and of finding a way to stream the data through the card efficiently.</p>
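<p>The quoted per-epoch timings imply a speed-up of roughly 13x, which is enough to shrink the nine-day full-dataset run to well under a day. This is pure arithmetic on the figures quoted above:</p>

```python
# 140 s/epoch on the MacBook Pro vs ~11 s/epoch on the Titan X,
# applied to the nine-day full-dataset run mentioned earlier.

cpu_epoch_s, gpu_epoch_s = 140, 11
speedup = cpu_epoch_s / gpu_epoch_s   # ~12.7x per epoch
gpu_hours = 9 * 24 / speedup          # nine days scaled down by the speedup
print(f"speedup {speedup:.1f}x: ~9 days -> ~{gpu_hours:.0f} hours")
```
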
<p>So if one GPU is good, multiple GPUs must be better. Nvidia certainly thinks so, and it is pushing the DGX-1 for Deep Learning.</p>
<p>The DGX-1 is the only machine of its type around at the moment. Sure, you can build your own machine with five GPU cards, but if you want to get close to its performance, go for the DGX-1: its custom bus allows data to be transferred to the GPU cards at impressive speeds.</p>
<p>The DGX-1 I got a shot at was lent to the University of Dundee, where I’m a senior lecturer, for testing by our machine vision group.</p>
<p>It’s an impressive looking machine &#8211; and a very loud one, once all its GPU cores are up and running. To get an idea of its speed, a researcher loaded up the Imagenet 2012 dataset and trained a Resnet50 machine learning model on it. With one GPU the machine could process 210 images a second; with two GPUs, 404; and with five GPUs, 934.</p>
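<p>Those throughput figures can be turned into a scaling-efficiency check: how close each configuration gets to perfect linear scaling over the single-GPU baseline.</p>

```python
# Scaling efficiency of the Resnet50 throughput numbers quoted above
# (210 / 404 / 934 images per second on 1, 2 and 5 GPUs).

throughput = {1: 210, 2: 404, 5: 934}
base = throughput[1]
efficiency = {g: t / (g * base) for g, t in throughput.items()}
for g in throughput:
    print(f"{g} GPU(s): {throughput[g]} img/s, {efficiency[g]:.0%} of linear")
```

<p>Two GPUs reach about 96 per cent of perfect scaling; five GPUs drop to about 89 per cent, a typical sign of data-feeding and synchronisation overhead.</p>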
<h3 class="crosshead">All plain sailing? Not quite</h3>
<p>This kind of speed comes at a price, of course, and the machine isn’t cheap &#8211; expect to pay more than £100,000 for a system. Second, you’ll need to use Nvidia’s customised version of Google’s Tensorflow to get the best performance, which could mean a delay in getting the latest features from the original open-source Tensorflow code. In addition, loading images through the standard dataloader causes severe CPU bottlenecks; the dataset needs to be optimised &#8211; a task that can itself take several hours.</p>
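<p>The dataloader bottleneck is the classic pipelining problem: the trainer stalls while the CPU reads and decodes images. A minimal sketch of the usual fix – prefetching batches on a background thread so loading overlaps compute – is below; the sleep durations are made-up stand-ins for real load and train times, not measurements from the DGX-1.</p>

```python
import queue
import threading
import time

# Overlap "loading" with "training" by prefetching on a background thread,
# instead of letting the training loop wait on the loader each step.

def loader(n_batches: int, out_q: queue.Queue) -> None:
    for b in range(n_batches):
        time.sleep(0.010)          # pretend to read and decode a batch
        out_q.put(b)
    out_q.put(None)                # sentinel: no more data

def train(n_batches: int = 20, prefetch: int = 4) -> int:
    q = queue.Queue(maxsize=prefetch)   # bounded queue caps memory use
    threading.Thread(target=loader, args=(n_batches, q), daemon=True).start()
    seen = 0
    while q.get() is not None:     # loader refills while we "train"
        time.sleep(0.005)          # pretend to run one training step
        seen += 1
    return seen

start = time.time()
n = train()
print(f"trained on {n} batches in {time.time() - start:.2f}s")
```

<p>Run serially, 20 batches would cost 20 × (10 + 5) ms; pipelined, the total approaches the slower of the two stages. Frameworks bake the same idea into their input pipelines (e.g. Tensorflow’s <code>tf.data</code> prefetching).</p>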
<p>There’s one other problem that could deter some: every machine learning job on the DGX-1 needs to be packaged into a Docker container &#8211; not a problem for devops engineers, but the learning curve can be off-putting for researchers who aren’t. If we return to the movie recommendation engine, I was hoping to run it on the DGX-1 to get an idea of how it could be sped up. Sadly, it’s not that simple: I’ll admit that making this code run in parallel was beyond me in the short time I had on the system. Others, too, have reported problems taking code and making it run across multiple GPUs, with some reporting no increase in speed either.</p>
<h3 class="crosshead">Power at a price</h3>
<p>The DGX-1 is an impressive machine delivering a lot of raw processing power, but at its price it won’t be for everyone. Certainly, you can get a lot of poke for your money using just a single GPU card, and that might be enough to get your ML AI up and running, providing you’ve got the time to wait for training to complete. On the upside, you won’t need one of these machines to run the model once it’s been trained; you might only need your mobile phone and a copy of Tensorflow for Mobile or Apple’s Core ML.</p>
<p>The market for ML appliances is in its infancy: impressive machines, lots of raw power but at a prohibitive price. You probably won’t even need one of these machines once your model has been trained, so to really get your money’s worth, you need to build and test a lot of models at present.</p>
<p>If – and it’s a big “if” – OEMs get on board, I’d expect a greater number of machines to show up using multiple GPU cards optimised for ML libraries. What will happen is difficult to predict; over in the world of relational databases, the market for powerful, hardware-and-software optimised appliances, while attractive, never really broke out into the mass market.</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-hidden-horse-power-driving-machine-learning-models/">The hidden horse power driving Machine Learning models</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-hidden-horse-power-driving-machine-learning-models/feed/</wfw:commentRss>
			<slash:comments>7</slash:comments>
		
		
			</item>
	</channel>
</rss>
