<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Google Cloud AutoML Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/category/google-cloud-automl/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/category/google-cloud-automl/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 17 Apr 2020 10:07:28 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Object Stores Starting to Look Like Databases</title>
		<link>https://www.aiuniverse.xyz/object-stores-starting-to-look-like-databases/</link>
					<comments>https://www.aiuniverse.xyz/object-stores-starting-to-look-like-databases/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 17 Apr 2020 10:06:32 +0000</pubDate>
				<category><![CDATA[Google Cloud AutoML]]></category>
		<category><![CDATA[data analysts]]></category>
		<category><![CDATA[Databases]]></category>
		<category><![CDATA[Google Cloud]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Microsoft]]></category>
		<category><![CDATA[software]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8239</guid>

					<description><![CDATA[<p>Source: Don’t look now, but object stores – those vast repositories of data sitting behind an S3 API – are beginning to resemble databases. They’re obviously still <a class="read-more-link" href="https://www.aiuniverse.xyz/object-stores-starting-to-look-like-databases/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/object-stores-starting-to-look-like-databases/">Object Stores Starting to Look Like Databases</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: </p>



<p>Don’t look now, but object stores – those vast repositories of data sitting behind an S3 API – are beginning to resemble databases. They’re obviously still separate categories today, but as the next-generation data architecture takes shape to solve emerging real-time data processing and machine learning challenges, the lines separating things like object stores, databases, and streaming data frameworks will begin to blur.</p>



<p>Object stores have become the primary repository for the vast amounts of less-structured data generated today. Organizations clearly are using object-based data lakes in the cloud and on-premises to store unstructured data, like images and video. But they’re also using them to store many other types of data the world is generating, like sensor and log data from mobile and IoT devices.</p>



<p>The object store is becoming a general purpose data repository, and along the way it’s getting closer to the most popular data workloads, including SQL-based analytics and machine learning. The folks at object storage software vendor Cloudian are moving their wares in that direction too, according to Cloudian CTO Gary Ogasawara.</p>



<p>“We’re moving more and more to that,” Ogasawara tells Datanami. “If you can combine the best of both worlds – have the huge capacity of an object store and the advanced query capability of an SQL-type database – that would be the ideal. That’s what people are really asking for.”</p>



<h3 class="wp-block-heading">Past Is Prologue</h3>



<p>We’ve seen this film before. When Apache Hadoop was the hot storage repository for big data (really, less-structured data), one of the first big community efforts was to develop a relational database for it. That way, data analysts with existing SQL skills – as well as BI applications expecting SQL data – would be able to leverage it without extensive retraining. And besides, after running less-structured data through MapReduce jobs, you needed a place to put the structured data. A database is the logical place.</p>



<p>This led to the creation of Apache Hive out of Facebook, and the community followed with a host of other SQL-on-Hadoop engines (or relational databases, if you like), including Apache Impala, Presto, and Spark SQL, among others. Of course, Hadoop’s momentum fizzled over the past few years, in part due to the rise of S3 from Amazon Web Services and other cloud-based object storage systems, notably Azure Blob Storage from Microsoft and Google Cloud Storage, which are generally more user-friendly than Hadoop, if not always cheaper.</p>



<p>In the cloud, users are presented with a wide range of specialty storage repositories and processing engines for SQL and machine learning. On the SQL front, you have Amazon Redshift, Azure SQL Data Warehouse, and Google BigQuery. On top of these “native” offerings, the big data community has adapted many existing and popular analytics databases, including Teradata, Vertica, and others, to work with S3 and other object stores with an S3-compatible API.</p>



<p>The same goes for machine learning workloads. Once the data is in S3 (or Blob Storage or Google Cloud Storage), it’s a relatively simple matter to use that data to build and train machine learning models in SageMaker, Azure Machine Learning, or Google Cloud AutoML. With the rise of the cloud, every member of the big data and machine learning community has moved to support the cloud, and with it object storage systems.</p>



<p>As the cloud’s momentum grows, S3 has become the de facto data access standard for the next generation of applications, from SQL analytics and machine learning to more traditional apps. For many new applications, data is simply expected to be stored in an object storage system, and developers expect to be able to access that data over the S3 API.</p>



<h3 class="wp-block-heading">A Hybrid Architecture</h3>



<p>But of course, not all new applications will live on the cloud with ready access to petabytes of data and gigaflops of computing power. In fact, with the rise of 5G networks and the explosion of smart devices on the Internet of Things (IoT), the physical world is the next frontier for computing, and that’s changing the dynamics for data architects who are trying to foresee new trends.</p>



<p>At Cloudian, Ogasawara and his team are working on adapting the company’s HyperStore object storage architecture to fit into the emerging edge-and-hub computing model. One of the examples he uses is the case of an autonomous car. With cameras, LIDAR, and other sensors, each self-driving car generates terabytes of data every day, and petabytes per year.</p>



<p>“That is all being generated at the edge,” he says. “Even with a 5G network, you will never be able to transmit all that data to somewhere else for analysis. You have to push that storage and processing as close to the edge as possible.”</p>



<p>Cloudian is currently working on developing a version of HyperStore that sits on the edge. In the self-driving car example, the local version of HyperStore would run right on the car and assist with storing and processing data coming off the sensors in real time. This computing environment would constitute a fast “inner loop,” Ogasawara says.</p>



<p>“But then you have a slower outer loop that’s also collecting data, and that includes the hub where the large, vast data lake resides in object storage,” he continues. “Here you can do more extensive training of ML models, for example, and then push that kind of metadata out to the edge, where it’s essentially a compiled version of your model that can be used very quickly.”</p>



<p>In the old days, object stores resembled relatively simple (and nearly infinitely scalable) key-value stores. But to support future use cases — like self-driving cars as well as weather modeling and genomics — the object store needs to learn new tricks, like how to stream data in and intelligently filter it so that only a subset of the most important data is forwarded from the edge to the hub.</p>
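<p>The “intelligently filter at the edge” idea can be sketched in a few lines. The function below is purely illustrative – the threshold, sampling rate, and data shapes are assumptions for the example, not part of any Cloudian API:</p>

```python
# Hypothetical sketch: filter raw edge sensor readings so only the
# "interesting" subset is forwarded from the edge to the hub.
# Threshold and sampling rate are illustrative assumptions.

def filter_for_hub(readings, baseline, threshold=3.0):
    """Keep readings that deviate from the baseline by more than
    `threshold`, plus a sparse periodic sample for drift checks."""
    forwarded = []
    for i, value in enumerate(readings):
        is_anomaly = abs(value - baseline) > threshold
        is_periodic_sample = i % 100 == 0  # keep every 100th reading
        if is_anomaly or is_periodic_sample:
            forwarded.append((i, value))
    return forwarded

readings = [10.0] * 250
readings[42] = 25.0  # an outlier the hub should see
subset = filter_for_hub(readings, baseline=10.0)
# only indices 0, 100, 200 (periodic) and 42 (anomaly) are forwarded
```

<p>Here 250 raw readings shrink to four forwarded tuples – the kind of reduction that makes edge-to-hub transmission feasible.</p>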



<p>To that end, Cloudian is working on a new project that will incorporate analytics capabilities. Working under the name Hyperstore Analytics Platform, the project would incorporate frameworks like Spark or TensorFlow to assist with the intelligent streaming and processing of data. A beta was expected by the end of the year (at least, that was the timeline Ogasawara shared in early March, before the COVID-19 lockdown).</p>



<h3 class="wp-block-heading">Object’s Evolution</h3>



<p>Cloudian is not the only object storage vendor looking at how to evolve its product to adapt to emerging data challenges. In fact, it’s not just object storage vendors who are trying to tackle the problem.</p>



<p>The folks at Confluent have adapted their Kafka-based stream processing technologies (which excel at processing event data) to work more like a database, which is good at managing stateful data. MinIO has SQL extensions that allow its object store to function like a database. NewSQL database vendor MemSQL has long had hooks for Kafka that allow it to process large amounts of real-time data. The in-memory data grid (IMDG) vendors are doing similar things for processing new event data within the context of historic, stateful data. And let’s not even get into how the event meshes are solving this problem.</p>
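<p>To make the “object store that answers SQL” idea concrete, here is a hedged sketch of building a request in the shape of the S3 SelectObjectContent call – the mechanism S3-compatible stores such as MinIO expose for running SQL against a single object. Constructing the dict is illustrative only; actually executing the query requires an S3-compatible endpoint and client, and the bucket, key, and query are made-up examples:</p>

```python
# Sketch of "SQL on an object store", modeled loosely on the shape of
# the S3 SelectObjectContent request. Building the dict is illustrative;
# running it needs a real S3-compatible client and endpoint.

def build_select_request(bucket, key, expression):
    return {
        "Bucket": bucket,
        "Key": key,
        "ExpressionType": "SQL",
        "Expression": expression,
        "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
        "OutputSerialization": {"JSON": {}},
    }

# Hypothetical bucket/object names for illustration:
req = build_select_request(
    "sensor-archive",
    "logs/2020/04/readings.csv",
    "SELECT s.device_id, s.temp FROM S3Object s WHERE s.temp > '90'",
)
```

<p>The appeal is that only the matching rows come back over the wire, instead of the whole multi-gigabyte object – the “huge capacity plus advanced query” combination Ogasawara describes above.</p>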



<p>According to Ogasawara, adapting Cloudian’s HyperStore offering is a logical way to tackle today’s emerging data challenges. “You’ve done very well at building this storage infrastructure,” he says. “Now, how do you make the data usable and consumable? It’s really about providing better access APIs to get to that data, and almost making the object storage more intelligent.”</p>



<p>Object stores are moving beyond their initial use case, which was reading, writing, and deleting data at massive scale. Now customers are pushing object storage vendors to support more advanced workflows, including complex machine learning workflows. That will most likely require an extension to the S3 API (something that Cloudian has brought up with AWS, but without much success).</p>



<p>“How do you look into those objects? Those types of APIs need more and more [capabilities],” Ogasawara says. “And even letting AI or machine learning-type workflows, doing things like a sequence of operations — those types of language constructs, everyone is starting to look at and trying to figure out how do we make it easier for users and customers to make that data analysis possible.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/object-stores-starting-to-look-like-databases/">Object Stores Starting to Look Like Databases</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/object-stores-starting-to-look-like-databases/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Digital transformation: 6 ways to democratize data skills</title>
		<link>https://www.aiuniverse.xyz/digital-transformation-6-ways-to-democratize-data-skills/</link>
					<comments>https://www.aiuniverse.xyz/digital-transformation-6-ways-to-democratize-data-skills/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 15 Apr 2020 12:35:10 +0000</pubDate>
				<category><![CDATA[Google Cloud AutoML]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[data skills]]></category>
		<category><![CDATA[Digital Transformation]]></category>
		<category><![CDATA[Google Cloud]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8189</guid>

					<description><![CDATA[<p>Source: enterprisersproject.com Digital transformation and analytics are nearly inseparable. “At the core of any successful digital transformation is the ability to leverage the company’s data assets to drive <a class="read-more-link" href="https://www.aiuniverse.xyz/digital-transformation-6-ways-to-democratize-data-skills/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/digital-transformation-6-ways-to-democratize-data-skills/">Digital transformation: 6 ways to democratize data skills</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: enterprisersproject.com</p>



<p>Digital transformation and analytics are nearly inseparable. “At the core of any successful digital transformation is the ability to leverage the company’s data assets to drive superior customer experiences, products and services as well as operating model efficiencies,” says Scott Snyder, a Digital and Innovation Partner with Heidrick &amp; Struggles, and co-author of “Goliath’s Revenge: How Established Companies Turn the Tables on Digital Disruptors.” </p>



<p>Companies typically need data science know-how&nbsp;in order to connect data to analytics or algorithms and deliver digital insight. “Without a critical mass of these data science and analytics skills, companies will struggle to keep up with both customer expectations and new innovation opportunities,” Snyder says.</p>



<p>The gap between supply of and demand for data science skills is a problem IT leaders know well. On the one hand, data is growing at an exponential rate. “It’s widely reported that 90 percent of the world’s data has been generated in the last two years, and with data doubling every 1.2 years on average versus processing speed only doubling every one to 1.5 years, companies must become more efficient at analyzing data to keep up,” says Snyder.</p>
<p>The post <a href="https://www.aiuniverse.xyz/digital-transformation-6-ways-to-democratize-data-skills/">Digital transformation: 6 ways to democratize data skills</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/digital-transformation-6-ways-to-democratize-data-skills/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google Announces Cloud AI Platform Pipelines to Simplify Machine Learning Development</title>
		<link>https://www.aiuniverse.xyz/google-announces-cloud-ai-platform-pipelines-to-simplify-machine-learning-development/</link>
					<comments>https://www.aiuniverse.xyz/google-announces-cloud-ai-platform-pipelines-to-simplify-machine-learning-development/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 30 Mar 2020 07:50:04 +0000</pubDate>
				<category><![CDATA[Google Cloud AutoML]]></category>
		<category><![CDATA[AI Platform]]></category>
		<category><![CDATA[cloud AI]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7820</guid>

					<description><![CDATA[<p>Source: infoq.com In a recent blog post, Google announced the beta of Cloud AI Platform Pipelines, which provides users with a way to deploy robust, repeatable machine learning pipelines along <a class="read-more-link" href="https://www.aiuniverse.xyz/google-announces-cloud-ai-platform-pipelines-to-simplify-machine-learning-development/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-announces-cloud-ai-platform-pipelines-to-simplify-machine-learning-development/">Google Announces Cloud AI Platform Pipelines to Simplify Machine Learning Development</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: infoq.com</p>



<p>In a recent blog post, Google announced the beta of Cloud AI Platform Pipelines, which provides users with a way to deploy robust, repeatable machine learning pipelines along with monitoring, auditing, version tracking, and reproducibility. </p>



<p>With Cloud AI Pipelines, Google can help organizations adopt the practice of Machine Learning Operations, also known as MLOps – a term for applying DevOps practices to help users automate, manage, and audit ML workflows. Typically, these practices involve data preparation and analysis, training, evaluation, deployment, and more. </p>



<p>Google product manager Anusha Ramesh and staff developer advocate Amy Unruh wrote in the blog post: </p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>When you&#8217;re just prototyping a machine learning (ML) model in a notebook, it can seem fairly straightforward. But when you need to start paying attention to the other pieces required to make an ML workflow sustainable and scalable, things become more complex.</p></blockquote>



<p>Moreover, when complexity grows, building a repeatable and auditable process becomes more laborious.</p>



<p>Cloud AI Platform Pipelines &#8211; which runs on a Google Kubernetes Engine (GKE) cluster and is accessible via the Cloud AI Platform dashboard – has two major parts: </p>



<ul class="wp-block-list"><li>The infrastructure for deploying and running structured AI workflows integrated with GCP services such as BigQuery, Dataflow, AI Platform Training and Serving, and Cloud Functions; and</li><li>The pipeline tools for building, debugging, and sharing pipelines and components.</li></ul>



<p>With Cloud AI Platform Pipelines, users can specify a pipeline using either the Kubeflow Pipelines (KFP) software development kit (SDK) or by customizing the TensorFlow Extended (TFX) Pipeline template with the TFX SDK. The latter currently consists of libraries, components, and some binaries, and it is up to the developer to pick the right level of abstraction for the task at hand. Furthermore, the TFX SDK includes a library, ML Metadata (MLMD), for recording and retrieving metadata associated with the workflows; this library can also run independently. </p>



<p>Google recommends using the KFP SDK for fully custom pipelines or pipelines that use prebuilt KFP components, and the TFX SDK and its templates for end-to-end ML pipelines based on TensorFlow. Google stated in the blog post that, over time, these two SDK experiences would merge. In the end, the SDK compiles the pipeline and submits it to the Pipelines REST API; the AI Pipelines REST API server stores and schedules the pipeline for execution.</p>



<p>Argo, an open-source container-native workflow engine for orchestrating parallel jobs on Kubernetes, runs the pipelines; the service includes additional microservices to record metadata, handle component IO, and schedule pipeline runs. The Argo workflow engine executes each pipeline step in an individual isolated pod in a GKE cluster – allowing each pipeline component to leverage Google Cloud services such as Dataflow, AI Platform Training and Prediction, BigQuery, and others. Furthermore, pipelines can contain steps that perform sizeable GPU and TPU computation in the cluster, directly leveraging features like autoscaling and node auto-provisioning.</p>
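<p>Stripped of retries, pod isolation, and artifact passing, the core job of a workflow engine like Argo is to run each step once all of its upstream dependencies have finished. The toy scheduler below illustrates that idea with a hypothetical four-step ML pipeline; it is a sketch of the concept, not Argo’s actual algorithm:</p>

```python
# Toy illustration of dependency-ordered pipeline execution.
# The step names and pipeline are hypothetical; a real engine like Argo
# additionally runs each step in an isolated pod and handles retries.

def run_pipeline(steps, deps):
    """steps: {name: callable}; deps: {name: [upstream names]}.
    Runs each step after its dependencies; returns execution order."""
    done, order = set(), []
    while len(done) < len(steps):
        ready = [s for s in steps if s not in done
                 and all(d in done for d in deps.get(s, []))]
        if not ready:
            raise ValueError("cycle or missing dependency")
        for s in sorted(ready):  # deterministic order for ties
            steps[s]()
            done.add(s)
            order.append(s)
    return order

log = []
pipeline = {
    "prepare":  lambda: log.append("prepare"),
    "train":    lambda: log.append("train"),
    "evaluate": lambda: log.append("evaluate"),
    "deploy":   lambda: log.append("deploy"),
}
deps = {"train": ["prepare"], "evaluate": ["train"], "deploy": ["evaluate"]}
order = run_pipeline(pipeline, deps)
# runs prepare -> train -> evaluate -> deploy
```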



<p>AI Platform Pipeline runs include automatic metadata tracking using MLMD, logging the artifacts used in each pipeline step, the pipeline parameters, and the linkage across input/output artifacts, as well as the pipeline steps that created and consumed them.</p>
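<p>The kind of bookkeeping this metadata tracking performs can be illustrated with a minimal store that records, for each step run, its parameters and input/output artifacts, and then answers lineage questions. This mimics the idea behind MLMD, not its actual API; the step and artifact names are made up:</p>

```python
# Minimal sketch of pipeline metadata tracking: record each step run
# with its parameters and artifacts, then query lineage.
# Mimics the idea behind ML Metadata (MLMD), not its real API.

class MetadataStore:
    def __init__(self):
        self.runs = []

    def record(self, step, params, inputs, outputs):
        self.runs.append(
            {"step": step, "params": params,
             "inputs": inputs, "outputs": outputs}
        )

    def producers_of(self, artifact):
        """Which steps created this artifact?"""
        return [r["step"] for r in self.runs if artifact in r["outputs"]]

    def consumers_of(self, artifact):
        """Which steps consumed it?"""
        return [r["step"] for r in self.runs if artifact in r["inputs"]]

store = MetadataStore()
store.record("train", {"lr": 0.01}, inputs=["dataset-v3"], outputs=["model-v7"])
store.record("evaluate", {}, inputs=["model-v7"], outputs=["metrics-v7"])
# store.producers_of("model-v7") -> ["train"]
# store.consumers_of("model-v7") -> ["evaluate"]
```

<p>With that linkage recorded, auditing and reproducibility questions (“which data and parameters produced this model?”) become simple queries rather than detective work.</p>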



<p>With Cloud AI Platform Pipelines, according to the blog post, customers will get:</p>



<ul class="wp-block-list"><li>Push-button installation via the Google Cloud Console</li><li>Enterprise features for running ML workloads, including pipeline versioning, automatic metadata tracking of artifacts and executions, Cloud Logging, visualization tools, and more </li><li>Seamless integration with Google Cloud managed services like BigQuery, Dataflow, AI Platform Training and Serving, Cloud Functions, and many others </li><li>Many prebuilt pipeline components (pipeline steps) for ML workflows, with easy construction of your own custom components</li></ul>



<p>The support for Kubeflow will allow a straightforward migration to other cloud platforms, as a respondent on a Hacker News thread about Cloud AI Platform Pipelines stated:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>Cloud AI Platform Pipelines appear to use Kubeflow Pipelines on the backend, which is open-source and runs on Kubernetes. The Kubeflow team has invested a lot of time on making it simple to deploy across a variety of public clouds, such as AWS, and Azure. If Google were to kill it, you could easily run it on any other hosted Kubernetes service.</p></blockquote>



<p>The release of AI Cloud Pipelines shows Google&#8217;s further expansion of its Machine-Learning-as-a-Service (MLaaS) portfolio, which consists of several other ML-centric services such as Cloud AutoML, Kubeflow, and AI Platform Prediction. The expansion is necessary to allow Google to further capitalize on the growing demand for ML-based cloud services in a market that analysts expect to reach USD 8.48 billion by 2025, and to compete with other large public cloud vendors offering similar services, such as Amazon with SageMaker and Microsoft with Azure Machine Learning.</p>



<p>Currently, Google plans to add more features to AI Cloud Pipelines, including:</p>



<ul class="wp-block-list"><li>Easy cluster upgrades</li><li>More templates for authoring ML workflows</li><li>More straightforward UI-based setup of off-cluster storage of backend data</li><li>Workload identity, to support transparent access to GCP services, and</li><li>Multi-user isolation – allowing each person accessing the Pipelines cluster to control who can access their pipelines and other resources.</li></ul>



<p>Lastly, more information on Google&#8217;s Cloud AI Pipeline is available in the getting started documentation.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-announces-cloud-ai-platform-pipelines-to-simplify-machine-learning-development/">Google Announces Cloud AI Platform Pipelines to Simplify Machine Learning Development</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-announces-cloud-ai-platform-pipelines-to-simplify-machine-learning-development/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>FOUR ESSENTIAL STRATEGIES TO AVOID HPC CLOUD LOCK-IN</title>
		<link>https://www.aiuniverse.xyz/four-essential-strategies-to-avoid-hpc-cloud-lock-in/</link>
					<comments>https://www.aiuniverse.xyz/four-essential-strategies-to-avoid-hpc-cloud-lock-in/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 24 Mar 2020 07:54:24 +0000</pubDate>
				<category><![CDATA[Google Cloud AutoML]]></category>
		<category><![CDATA[HPC CLOUD]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[researchers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7680</guid>

					<description><![CDATA[<p>Source: nextplatform.com (Sponsored Content) HPC workloads are rapidly moving to the cloud. Market sizing from HPC analyst firm Hyperion Research shows a dramatic 60 percent rise in cloud spending from <a class="read-more-link" href="https://www.aiuniverse.xyz/four-essential-strategies-to-avoid-hpc-cloud-lock-in/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/four-essential-strategies-to-avoid-hpc-cloud-lock-in/">FOUR ESSENTIAL STRATEGIES TO AVOID HPC CLOUD LOCK-IN</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: nextplatform.com</p>



<p>(Sponsored Content) HPC workloads are rapidly moving to the cloud. Market sizing from HPC analyst firm Hyperion Research shows a dramatic 60 percent rise in cloud spending, from just under $2.5 billion in 2018 to approximately $4 billion in 2019, and projects HPC cloud revenue will reach $7.4 billion in 2023, a 24.6 percent compound annual growth rate.</p>



<p>While leading cloud providers offer similar services and fee structures, the risk of lock-in is real. HPC users fear losing control over their fastest-growing infrastructure budget line item. A few simple strategies can help organizations stay nimble and avoid cloud lock-in.</p>



<p>Use containers along with custom machine instances. To provide portability between on-premises environments and clouds, users commonly create VMs that encapsulate HPC applications. Standards such as VMware’s VMDK, Microsoft’s VHD, and the Open Virtualization Format (OVF) have made it easy to package and ship VMs to your favorite cloud, where they can be imported as managed images such as AMIs. While this is a big improvement over installing software directly on cloud instances, VMs can be unwieldy, and procedures for importing images vary by cloud provider. An alternative solution is to create a smaller set of virtualized base images containing essential components like the OS, libraries, and a preferred container runtime such as Singularity or Docker. Application-specific containers can then be pulled from a container registry, allowing the same machine image to be used for multiple applications. This will help you stay portable across clouds and on-premises, deploy environments faster, and significantly reduce the work involved in preparing and maintaining machine images.</p>



<p>Stay as “down stack” as possible. While most HPC users tend to consume cloud services at the IaaS level, cloud providers offer increasingly impressive PaaS and SaaS offerings. For example, a customer deploying a machine learning environment may be tempted to turn to offerings such as Amazon SageMaker, Microsoft Azure Machine Learning Studio or Google Cloud AutoML. HPC users may be similarly tempted to look to cloud-specific batch services, elastic file systems, native container services, or functions. While these cloud services are capable and convenient, there is a price to be paid in terms of portability. Users can easily find themselves locked into cloud-specific software ecosystems. Also, when PaaS or SaaS offerings are deployed, each service generally consumes separate IaaS infrastructure, so even value-added services that are “free” (meaning that users pay only for infrastructure) tend to drive up costs. An alternative approach is to deploy containerized services on IaaS offerings where each instance can run multiple software components. This will take a little more effort, but it will help you move applications more easily between on-prem and cloud environments, reduce costs, and ensure that you can repatriate workloads back in-house should the need arise.</p>



<p>Beware of data gravity. In HPC applications, data is the elephant in the room. HPC operators constantly struggle with whether it is better to “bring the compute to the data” or “bring the data to the compute.” The issues are complex and depend on factors such as where the data originates, the costs and time required for transmission, short-term and long-term storage costs, and the type and level of access required (file, object, archival, etc.). If you’re storing significant amounts of data in the cloud, keep data egress costs in mind, and look for solutions that can help automate data movement. You’ll want to be able to automatically tear down storage services that are no longer needed, migrate data to lower-cost storage tiers, or automatically retrieve data to on-prem storage – especially in hybrid or multi-cloud environments.</p>
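<p>Automating the “migrate to lower-cost tiers” step is typically done with bucket lifecycle rules. The sketch below builds a policy document in the general shape S3-compatible stores accept for lifecycle configuration; the prefix, storage class, and day counts are placeholder assumptions for illustration, and applying the policy would require a real client and endpoint:</p>

```python
# Illustrative only: a lifecycle policy in the general shape of an S3
# bucket lifecycle configuration (Transition/Expiration rules).
# Prefix, storage class, and day counts are placeholder assumptions.

def build_lifecycle_policy(days_to_cold_tier=30, days_to_expiry=365):
    return {
        "Rules": [
            {
                "ID": "tier-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "results/"},
                "Transitions": [
                    {"Days": days_to_cold_tier, "StorageClass": "GLACIER"}
                ],
                "Expiration": {"Days": days_to_expiry},
            }
        ]
    }

policy = build_lifecycle_policy()
```

<p>Expressing tiering as declarative rules like these, rather than as provider-specific scripts, is one way to keep the data-movement logic portable across clouds.</p>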



<p>Have a fall-back plan. Cloud computing offers many advantages for HPC users – instant access to state-of-the-art infrastructure, the ability to burst capacity as needed, and the ability to scale capacity rapidly for faster results on a variable-cost infrastructure. These capabilities come at a price, however. Seasoned HPC professionals routinely tell us that running HPC applications in the cloud on a sustained basis can be several times more expensive than operating on-premises HPC clusters – especially when cloud infrastructure isn’t well-managed. This is why many HPC users tend to run hybrid clouds or deploy workloads to the cloud selectively. As you build bridges to one or more clouds, make sure that the bridge is bi-directional. Being able to fall back if necessary and run workloads in-house when capacity is available is perhaps even more important than staying portable across clouds.</p>
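<p>A back-of-the-envelope model makes the sustained-usage point concrete. All prices below are made-up placeholders, not quotes from any provider:</p>

```python
# Toy break-even model: at what sustained utilization does an owned
# cluster beat renting cloud nodes? All prices are made-up placeholders.

def monthly_cost_cloud(nodes, hours_used, price_per_node_hour):
    return nodes * hours_used * price_per_node_hour

def monthly_cost_onprem(nodes, capex_per_node, amort_months,
                        opex_per_node_month):
    # Amortize purchase cost over its lifetime, plus monthly running cost.
    return nodes * (capex_per_node / amort_months + opex_per_node_month)

cloud = monthly_cost_cloud(nodes=100, hours_used=720,
                           price_per_node_hour=3.0)
onprem = monthly_cost_onprem(nodes=100, capex_per_node=30000,
                             amort_months=36, opex_per_node_month=400)
# cloud = 216000.0 per month; onprem is about 123,333 per month
```

<p>At full utilization the owned cluster wins; at low utilization the cloud’s pay-per-use model wins – which is exactly why a bi-directional bridge matters.</p>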



<p>Regardless of how you deploy high performance applications today, chances are good that more cloud computing is in your future. Following the four strategies above can help organizations ensure a smoother transition to the cloud, maintain flexibility, and avoid the risk of lock-in and cost surprises.</p>
<p>The post <a href="https://www.aiuniverse.xyz/four-essential-strategies-to-avoid-hpc-cloud-lock-in/">FOUR ESSENTIAL STRATEGIES TO AVOID HPC CLOUD LOCK-IN</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/four-essential-strategies-to-avoid-hpc-cloud-lock-in/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google launches Cloud AI Platform Pipelines in beta to simplify machine learning development</title>
		<link>https://www.aiuniverse.xyz/google-launches-cloud-ai-platform-pipelines-in-beta-to-simplify-machine-learning-development-2/</link>
					<comments>https://www.aiuniverse.xyz/google-launches-cloud-ai-platform-pipelines-in-beta-to-simplify-machine-learning-development-2/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 13 Mar 2020 09:32:25 +0000</pubDate>
				<category><![CDATA[Google Cloud AutoML]]></category>
		<category><![CDATA[cloud AI]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[platform]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7410</guid>

					<description><![CDATA[<p>Source: venturebeat.com Google today announced the beta launch of Cloud AI Platform Pipelines, a service designed to deploy robust, repeatable AI pipelines along with monitoring, auditing, version tracking, <a class="read-more-link" href="https://www.aiuniverse.xyz/google-launches-cloud-ai-platform-pipelines-in-beta-to-simplify-machine-learning-development-2/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-launches-cloud-ai-platform-pipelines-in-beta-to-simplify-machine-learning-development-2/">Google launches Cloud AI Platform Pipelines in beta to simplify machine learning development</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: venturebeat.com</p>



<p>Google today announced the beta launch of Cloud AI Platform Pipelines, a service designed to deploy robust, repeatable AI pipelines along with monitoring, auditing, version tracking, and reproducibility in the cloud. Google’s pitching it as a way to deliver an “easy to install” secure execution environment for machine learning workflows, which could reduce the amount of time enterprises spend bringing products to production.</p>



<p>“When you’re just prototyping a machine learning model in a notebook, it can seem fairly straightforward. But when you need to start paying attention to the other pieces required to make a [machine learning] workflow sustainable and scalable, things become more complex,” wrote Google product manager Anusha Ramesh and staff developer advocate Amy Unruh in a blog post. “A machine learning workflow can involve many steps with dependencies on each other, from data preparation and analysis, to training, to evaluation, to deployment, and more. It’s hard to compose and track these processes in an ad-hoc manner — for example, in a set of notebooks or scripts — and things like auditing and reproducibility become increasingly problematic.”</p>
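<p>The ordering problem described in the quote, workflow steps with dependencies on each other, can be sketched with a toy dependency resolver. The step names below are hypothetical and are not part of any Google SDK; this is only a minimal illustration of the bookkeeping a pipeline service automates:</p>

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical ML workflow steps, each mapped to the steps it depends on.
steps = {
    "data_prep": set(),
    "analysis": {"data_prep"},
    "training": {"data_prep"},
    "evaluation": {"analysis", "training"},
    "deployment": {"evaluation"},
}

# A pipeline runner must execute steps in an order that respects dependencies.
order = list(TopologicalSorter(steps).static_order())
print(order)
```

<p>This always yields data_prep first and deployment last; a managed pipeline service layers retries, auditing, and artifact tracking on top of exactly this kind of ordering.</p>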



<p>AI Platform Pipelines has two major parts: (1) the infrastructure for deploying and running structured AI workflows that are integrated with Google Cloud Platform services and (2) the pipeline tools for building, debugging, and sharing pipelines and components. The service runs on a Google Kubernetes Engine cluster that’s automatically created as part of the installation process, and it’s accessible via the Cloud AI Platform dashboard. With AI Platform Pipelines, developers specify a pipeline using the Kubeflow Pipelines software development kit (SDK), or by customizing the TensorFlow Extended (TFX) pipeline template with the TFX SDK. The SDK compiles the pipeline and submits it to the Pipelines REST API server, which stores and schedules it for execution.</p>



<p>AI Platform Pipelines uses the open source Argo workflow engine to run the pipeline and has additional microservices to record metadata, handle component I/O, and schedule pipeline runs. Pipeline steps are executed as individual isolated pods in a cluster, and each component can leverage Google Cloud services such as Dataflow, AI Platform Training and Prediction, BigQuery, and others. Meanwhile, the pipelines can contain steps that perform GPU and tensor processing unit (TPU) computation in the cluster, directly leveraging features like autoscaling and node auto-provisioning.</p>



<p>AI Platform Pipeline runs include automatic metadata tracking using ML Metadata, a library for recording and retrieving metadata associated with machine learning developer and data scientist workflows. Automatic metadata tracking logs the artifacts used in each pipeline step, pipeline parameters, and the linkage across the input/output artifacts, as well as the pipeline steps that created and consumed them.</p>
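<p>As a minimal sketch of what this kind of metadata tracking records, here is a hypothetical in-memory store, not the real ML Metadata library, that logs which pipeline step produced and consumed each artifact:</p>

```python
from collections import defaultdict

class LineageStore:
    """Toy stand-in for a metadata store: maps artifacts to producing/consuming steps."""

    def __init__(self):
        self.produced_by = {}                 # artifact name -> step that wrote it
        self.consumed_by = defaultdict(list)  # artifact name -> steps that read it

    def record(self, step, inputs, outputs):
        """Log one pipeline step run with its input and output artifacts."""
        for artifact in inputs:
            self.consumed_by[artifact].append(step)
        for artifact in outputs:
            self.produced_by[artifact] = step

store = LineageStore()
store.record("data_prep", inputs=["raw.csv"], outputs=["clean.csv"])
store.record("training", inputs=["clean.csv"], outputs=["model.pkl"])

print(store.produced_by["model.pkl"])  # the step that created the model
print(store.consumed_by["clean.csv"])  # the steps that read the cleaned data
```

<p>Linking every artifact to the steps that created and consumed it is what makes auditing and reproducibility possible after the fact.</p>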



<p>In addition, AI Platform Pipelines supports pipeline versioning, which allows developers to upload multiple versions of the same pipeline and group them in the UI, as well as automatic artifact and lineage tracking. Native artifact tracking covers things like models, data statistics, and model evaluation metrics, while lineage tracking shows the history and versions of your models, data, and more.</p>



<p>Google says that in the near future, AI Platform Pipelines will gain multi-user isolation, which will let each person accessing the Pipelines cluster control who can access their pipelines and other resources. Other forthcoming features include workload identity to support transparent access to Google Cloud Services; a UI-based setup of off-cluster storage of backend data, including metadata, server data, job history, and metrics; simpler cluster upgrades; and more templates for authoring workflows.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-launches-cloud-ai-platform-pipelines-in-beta-to-simplify-machine-learning-development-2/">Google launches Cloud AI Platform Pipelines in beta to simplify machine learning development</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-launches-cloud-ai-platform-pipelines-in-beta-to-simplify-machine-learning-development-2/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google partner Servian builds AI tool for Fox Sports</title>
		<link>https://www.aiuniverse.xyz/google-partner-servian-builds-ai-tool-for-fox-sports/</link>
					<comments>https://www.aiuniverse.xyz/google-partner-servian-builds-ai-tool-for-fox-sports/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 20 Jan 2020 11:00:04 +0000</pubDate>
				<category><![CDATA[Google Cloud AutoML]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[cloud]]></category>
		<category><![CDATA[Google Cloud]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[ML]]></category>
		<category><![CDATA[Services]]></category>
		<category><![CDATA[software]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6247</guid>

					<description><![CDATA[<p>Source: crn.com.au Google partner Servian has revealed that it was behind “Monty”, an AI tool Fox Sports used to predict when wickets would fall during cricket matches, <a class="read-more-link" href="https://www.aiuniverse.xyz/google-partner-servian-builds-ai-tool-for-fox-sports/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-partner-servian-builds-ai-tool-for-fox-sports/">Google partner Servian builds AI tool for Fox Sports</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: crn.com.au</p>



<p>Google partner Servian has revealed that it was behind “Monty”, an AI tool Fox Sports used to predict when wickets would fall during cricket matches, and has already delivered an upgrade to the tool.</p>



<p>Monty was developed in 2019 and works by using historical data from previous matches as well as live data, such as player behaviour and pitch conditions, to determine the chance of a wicket. Predictions made by the tool were used live during Fox Sports’ coverage of cricket and proved accurate: Monty was live but not broadcasting for an Australia vs. Pakistan match in November 2019 and successfully predicted the first wicket of the game – although the technicality of a no-ball meant the batter was not out. Monty then predicted the actual first wicket in the same innings.</p>



<p>Monty matters because Fox Sports does not have exclusive rights to Test cricket; the matches are also available on free-to-air television, so Fox needs ways to engage fans. Monty gives viewers a reason to choose Fox Sports’ coverage over the alternatives, and it also makes the broadcaster’s apps more attractive.</p>



<p>The bot is built on Google Cloud and took just four weeks to create, with Servian working alongside Google and Fox Sports.</p>



<p>Servian’s upgrade, said Google Practice Lead Andrew Pym, “managed to expand the number of machine learning features from 65 to 86, which allowed for more analysis and higher accuracy. The most significant addition was the bowler and batsman combinations.”</p>



<p>“We continued the use of Google Cloud’s serverless tools including BigQuery and AutoML Tables, but added an event-driven approach to the serverless infrastructure,” Pym added. “This supported the rapid development of multiple models for multiple games, which will be important for iterating and expanding Monty moving forward.”</p>



<p>Future upgrades will be able to support the development of advanced features that describe the many nuances of cricket. Refreshes will also take into account new players.</p>



<p>Servian is chuffed to have delivered Monty, and proud to be one of a small group of Australian partners to have attained the Google Cloud Machine Learning Partner Specialisation.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-partner-servian-builds-ai-tool-for-fox-sports/">Google partner Servian builds AI tool for Fox Sports</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-partner-servian-builds-ai-tool-for-fox-sports/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI year in review: Opportunities grow, but ethics loom large</title>
		<link>https://www.aiuniverse.xyz/ai-year-in-review-opportunities-grow-but-ethics-loom-large/</link>
					<comments>https://www.aiuniverse.xyz/ai-year-in-review-opportunities-grow-but-ethics-loom-large/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 26 Dec 2019 07:38:37 +0000</pubDate>
				<category><![CDATA[Google Cloud AutoML]]></category>
		<category><![CDATA[AI year in review: Opportunities grow]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AutoML]]></category>
		<category><![CDATA[but ethics loom large]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5814</guid>

					<description><![CDATA[<p>Source: venturebeat.com Artificial intelligence garnered a lot of attention from the usual players — governments, tech giants, and academics — throughout 2019. But it was also a <a class="read-more-link" href="https://www.aiuniverse.xyz/ai-year-in-review-opportunities-grow-but-ethics-loom-large/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-year-in-review-opportunities-grow-but-ethics-loom-large/">AI year in review: Opportunities grow, but ethics loom large</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: venturebeat.com</p>



<p>Artificial intelligence garnered a lot of attention from the usual players — governments, tech giants, and academics — throughout 2019. But it was also a big year for business AI, with even more growth expected ahead. In a March KPMG survey, more than half of business executives said their company would implement enterprise-scale AI within two years. That is partly what drives PwC estimates that AI will deliver $15.7 trillion to the global economy by 2030.</p>



<p>Impressive leaps in 2019 have enabled new business applications for virtual assistants, such as Salesforce’s updated Einstein Voice Assistant for sales and customer service apps and IBM’s intelligent agent Watson Assistant. Meanwhile, governmental deployment of AI around the world has led to abuses and concomitant regulation. But along with concerns about power in AI comes the technology’s potential to help make everyday life a little better.</p>



<h2 class="wp-block-heading">Automated AI for the enterprise</h2>



<p>The enterprise cloud market heated up with increased implementation of automated machine learning (AutoML) that allows customers to apply AI to use cases such as marketing, customer service, and risk management. The biggest players in cloud computing — Google, Microsoft, and Amazon — spotlighted AI tools and automation in their annual tech showcases. Microsoft summed it up at Ignite 2019 with its tagline for Azure Cognitive Search: “Use AI to solve business problems.”</p>



<p>At the Google Cloud Next conference in April, Google announced new AutoML classes, premade Retail and Contact Center AI services, and the collaborative model-making tool AI Platform. In December, Amazon launched a blizzard of AI-powered enterprise tools at its re:Invent 2019 conference.</p>



<p>One of the more intriguing tools from Google Cloud Next was AutoML Natural Language, released widely in December, which analyzes text from a range of document and formatting types to feed sentiment analysis, legal document parsing, and publications management. Amazon rolled out a similar tool for AWS, called Textract, in April. Microsoft, meanwhile, played up the subject-matter virtual agents, sentiment analysis, and business process automation available on its business-focused Power Platform.</p>



<h2 class="wp-block-heading">AI at the edge</h2>



<p>At the other end of the network — the edge — software advances like federated and multimodal learning are enabling artificial intelligence on smartphones and other devices, with the promise of greater control and better privacy protections compared to AI processed in the cloud. In June, Apple introduced Core ML 3, which allows iOS devices to train machine learning models on-device for the first time. Google incorporated federated learning into its TensorFlow development environment back in 2017, and the effort is bearing fruit: In October, Google promoted the many AI touches on the Pixel 4 smartphone, from speech recognition to greatly improved camera features.</p>



<p>Hardware is also becoming more efficient, with “real AI” powered by mobile chips. Examples abound: Arm is building up its product line to power machine learning and AI in a wide selection of devices. Intel promoted Keem Bay, a vision processing unit that brings inferencing tasks to edge devices. Google offered Coral AI, a range of boards and kits for neural network machine learning that work on the edge. And Nvidia released the Jetson Xavier NX to power AI for drones, cars, and other mobile edge devices.</p>



<p>In addition, a new focus on power efficiency could help reduce the environmental (and financial) impact of running all those AI systems. Google created a controller that keeps its experimental quantum processor cool enough to function while using just 2 milliwatts of power. On the consumer side, Facebook announced DeepFovea, an AI technique that improves the power draw of VR headsets. And even closer to home, Sense released a line of AI devices to monitor and reduce household energy use, while Evolve Energy’s AI helps solar and wind power customers find the best prices and save energy.</p>



<h2 class="wp-block-heading">Shipping and shopping</h2>



<p>Besides consumer energy monitors like the above, 2019 saw huge advances in areas like autonomous cars and the internet of things (IoT). AI also made inroads into such everyday tasks as grocery shopping.</p>



<p>Self-driving cars from the likes of Uber, Lyft, Alphabet’s Waymo, Tesla, and Argo are the pretty face of autonomous vehicles, and consumer sentiment reports suggest the public is warming to the idea. But commercial trucking was where the money was in 2019. Carmaker Volvo is so confident in the viability of its smart trucks that it’s going to break out its driverless financials starting in 2020, although it faces competition from the likes of TuSimple, which is testing delivery for the U.S. Postal Service; Daimler, which is testing autonomous trucks in Virginia; and Starsky Robotics, which relies on remote teleoperators to run its test fleet.</p>



<p>Competition in the AI assistant market is still greatest between Amazon and Google, a rivalry that has spurred the performance and capabilities of voice recognition and personal assistants. Amazon and Microsoft launched the Voice Interoperability Initiative in September, along with a slew of partners — absent Apple, Samsung, and Google — an effort that seeks to allow devices to run more than one assistant. There’s good reason for Microsoft to join, since it’s become clear its Cortana is not beating Amazon’s Alexa or Google Assistant anytime soon. Samsung’s Bixby also stepped back in acknowledgment of its market position. As for which voice assistant is best, Google Assistant keeps coming out on top in accuracy tests, although in a May 2019 test by Tom’s Hardware, Alexa and Siri were not far behind.</p>



<p>AI went mainstream for grocery shopping in 2019. Walmart is using AI to improve online grocery ordering, employing machine intelligence to figure out what consumers are likely to need, and it’s begun using driverless vans to ferry goods between stores in Arkansas. Meanwhile, Microsoft helped grocery chain Kroger create cashierless stores using smart shelves and other intelligent technology, while the Giant Eagle chain turned to Grabango for its own AI trial.</p>



<h2 class="wp-block-heading">Governmental give-and-take</h2>



<p>Politicians and corporations clashed throughout 2019 over the appropriate use and oversight of AI. Famously, one of Democratic senator and presidential candidate Elizabeth Warren’s campaign promises is to break up big tech businesses like Google and Amazon. “With fewer competitors entering the market, the big tech companies do not have to compete as aggressively in key areas like protecting our privacy,” Warren’s campaign blog states. And her rivals brought up AI specifically in the Democratic debates.</p>



<p>Beyond the general concern of private-market encroachments upon personal freedoms, governments from the Massachusetts town of Somerville to the United Kingdom are examining how the public sector should use AI technology — and coming to differing conclusions.</p>



<p>While China has been using facial recognition to regulate cell phone accounts and allegedly to round up the Uighur minority population, most U.S. governments that examine the technology do so to limit or ban its use. In California especially, San Francisco and other cities are enacting bans on the use of facial recognition by public entities, particularly police departments.</p>



<p>Detroit has no such ban, and indeed its police chief, James Craig, is enthusiastic about facial recognition’s potential to fight crime. This led to an August 2019 Twitter beef for the ages between Chief Craig and Detroit’s U.S. Congressional Representative, Rashida Tlaib (D-MI), that ended in an awkward demonstration of the technology that frustrated both sides.</p>



<p>The tide could turn if President Trump wins re-election in 2020, as his administration takes a more collaborative approach to AI. If the eventual Democratic nominee is Senator Bernie Sanders (D-VT), and he wins, Chief Craig might be out of luck. But legislation to regulate facial recognition has been surprisingly nonpartisan so far.</p>



<h2 class="wp-block-heading">Smart cities</h2>



<p>Despite privacy concerns over government use of AI, the technology can improve everyday life, especially if proper care is taken to consider the ethics.</p>



<p>The amount of data being created by smart cars, roadside cameras, public transit, and other sensors is overwhelming, but by feeding it into AI systems, companies like Waycare are helping cities predict and improve traffic flow. StreetLight Data takes a different approach: By tapping into cellphone location data, it can track and predict traffic for vehicles, bicycles, and pedestrians. London is using Waze to tackle traffic congestion in the city center and reduce air pollution. Elsewhere, Alphabet division Sidewalk Labs is helping Toronto push the envelope of smart city technology, fed by weather and usage patterns, to create a high-tech innovation district.</p>



<p>Norway has emerged as a hotbed for startups leveraging AI to build better cities. Oslo-based Spacemaker’s software allows city planners to estimate the effect of each planning decision and optimize for a range of goals, courtesy of machine learning. And in August, the city of Trondheim unveiled Powerhouse, a smart office building designed to generate more energy than it consumes and apply the excess to powering other smart city tech, such as road monitoring.</p>



<p>As 2019’s projects come to fruition in 2020 and beyond, it will be interesting to watch how AI develops in the real world. Ethical oversight will be needed to make sure the technology continues to serve humanity, rather than the other way around.</p>



<p>The post <a href="https://www.aiuniverse.xyz/ai-year-in-review-opportunities-grow-but-ethics-loom-large/">AI year in review: Opportunities grow, but ethics loom large</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ai-year-in-review-opportunities-grow-but-ethics-loom-large/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google launches AutoML Natural Language with improved text classification and model training</title>
		<link>https://www.aiuniverse.xyz/google-launches-automl-natural-language-with-improved-text-classification-and-model-training/</link>
					<comments>https://www.aiuniverse.xyz/google-launches-automl-natural-language-with-improved-text-classification-and-model-training/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 14 Dec 2019 09:42:20 +0000</pubDate>
				<category><![CDATA[Google Cloud AutoML]]></category>
		<category><![CDATA[AutoML]]></category>
		<category><![CDATA[classification]]></category>
		<category><![CDATA[cloud]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[model training]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5639</guid>

					<description><![CDATA[<p>Source: venturebeat.com Earlier this year, Google took the wraps off of AutoML Natural Language, an extension of its Cloud AutoML machine learning platform to the natural language <a class="read-more-link" href="https://www.aiuniverse.xyz/google-launches-automl-natural-language-with-improved-text-classification-and-model-training/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-launches-automl-natural-language-with-improved-text-classification-and-model-training/">Google launches AutoML Natural Language with improved text classification and model training</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: venturebeat.com</p>



<p>Earlier this year, Google took the wraps off of AutoML Natural Language, an extension of its Cloud AutoML machine learning platform to the natural language processing domain. After a months-long beta, AutoML Natural Language today launched in general availability for customers globally, with support for tasks like classification, sentiment analysis, and entity extraction, as well as a range of file formats, including native and scanned PDFs.</p>



<p>By way of refresher, AutoML Natural Language taps machine learning to reveal the structure and meaning of text from emails, chat logs, social media posts, and more. It can extract information about people, places, and events from uploaded or pasted text and from documents in Google Cloud Storage, and it allows users to train their own custom AI models to classify, detect, and analyze things like sentiment, entities, content, and syntax. It furthermore offers custom entity extraction, which enables the identification of domain-specific entities within documents that don’t appear in standard language models.</p>



<p>AutoML Natural Language supports over 5,000 classification labels and allows training on up to 1 million documents of up to 10MB each, which Google says makes it an excellent fit for “complex” use cases like comprehending legal files or document segmentation for organizations with large content taxonomies. It has been improved in the months since its reveal, specifically in the areas of text and document entity extraction — Google says that AutoML Natural Language now considers additional context (such as the spatial structure and layout information of a document) for model training and prediction to improve the recognition of text in invoices, receipts, resumes, and contracts.</p>
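<p>Those quotas suggest a simple pre-flight check before uploading a training corpus. The limits below are the ones quoted in the article; the function name and document representation are hypothetical, not part of the AutoML API:</p>

```python
MAX_LABELS = 5_000
MAX_DOCUMENTS = 1_000_000
MAX_DOC_BYTES = 10 * 1024 * 1024  # 10 MB

def check_corpus(docs):
    """docs: list of (size_in_bytes, label) pairs. Returns a list of quota violations."""
    problems = []
    if len(docs) > MAX_DOCUMENTS:
        problems.append(f"too many documents: {len(docs)}")
    labels = {label for _, label in docs}
    if len(labels) > MAX_LABELS:
        problems.append(f"too many labels: {len(labels)}")
    for i, (size, _) in enumerate(docs):
        if size > MAX_DOC_BYTES:
            problems.append(f"document {i} exceeds 10 MB")
    return problems

print(check_corpus([(1024, "invoice"), (2048, "receipt")]))  # prints []
```

<p>Validating locally before upload avoids burning training time on a dataset the service would reject.</p>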



<p>Additionally, Google says that AutoML Natural Language is now FedRAMP-authorized at the Moderate level, meaning it has been vetted according to U.S. government specifications for data where the impact of loss is limited or serious. It says that this — along with newly introduced functionality that lets customers create a data set, train a model, and make predictions while keeping the data and related machine learning processing within a single server region — makes it easier for federal agencies to take advantage.</p>



<p>Already, Hearst is using AutoML Natural Language to help organize content across its domestic and international magazines, and Japanese publisher Nikkei Group is leveraging AutoML Translation to publish articles in different languages. Chicory, a third early adopter, tapped it to develop custom digital shopping and marketing solutions for grocery retailers like Kroger, Amazon, and Instacart.</p>



<p>The ultimate goal is to give organizations, researchers, and businesses that require custom machine learning models a simple, no-frills way to train them, explained product manager for natural language Lewis Liu in a blog post. “Natural language processing is a valuable tool used to reveal the structure and meaning of text,” he said. “We’re continuously improving the quality of our models in partnership with Google AI research through better fine-tuning techniques and larger model search spaces. We’re also introducing more advanced features to help AutoML Natural Language understand documents better.”</p>



<p>Notably, the launch of AutoML Natural Language follows on the heels of AWS Textract, Amazon’s machine learning service for text and data extraction, which debuted in May. Microsoft offers a comparable service in Azure Text Analytics.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-launches-automl-natural-language-with-improved-text-classification-and-model-training/">Google launches AutoML Natural Language with improved text classification and model training</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-launches-automl-natural-language-with-improved-text-classification-and-model-training/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google Announces Updates to AutoML Vision Edge, AutoML Video, and the Video Intelligence API</title>
		<link>https://www.aiuniverse.xyz/google-announces-updates-to-automl-vision-edge-automl-video-and-the-video-intelligence-api/</link>
					<comments>https://www.aiuniverse.xyz/google-announces-updates-to-automl-vision-edge-automl-video-and-the-video-intelligence-api/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 24 Oct 2019 13:03:34 +0000</pubDate>
				<category><![CDATA[Google Cloud AutoML]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AutoML]]></category>
		<category><![CDATA[Developers]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Google Cloud]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4850</guid>

					<description><![CDATA[<p>Source: infoq.com In a recent blog post, Google announced enhancements to a part of its Vision AI portfolio &#8211; AutoML Vision Edge, AutoML Video, and the Video <a class="read-more-link" href="https://www.aiuniverse.xyz/google-announces-updates-to-automl-vision-edge-automl-video-and-the-video-intelligence-api/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-announces-updates-to-automl-vision-edge-automl-video-and-the-video-intelligence-api/">Google Announces Updates to AutoML Vision Edge, AutoML Video, and the Video Intelligence API</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: infoq.com</p>



<p>In a recent blog post, Google announced enhancements to part of its Vision AI portfolio: AutoML Vision Edge, AutoML Video, and the Video Intelligence API each received updates to enhance their capabilities.</p>



<p>AutoML Vision Edge and AutoML Video were both introduced earlier this year, in April, as part of Google’s AI Platform, while the Video Intelligence API dates back a few years prior, with a public beta release in June 2017. The enhancements provide customers with more features, as Google product managers Vishy Tirumalashetty and Andrew Schwartz state in the blog post:</p>



<p>We’re constantly inspired by all the ways our customers use Google Cloud AI for image and video understanding—everything from eBay&#8217;s use of image search to improve their shopping experience to AES leveraging AutoML Vision to accelerate a greener energy future and help make their employees safer.</p>



<p>With AutoML Vision Edge, developers can train, build, and deploy ML models at the edge. The service launched with image classification and can now also perform object detection, and developers can run both on edge devices, including those using ARM, NVIDIA GPUs, or other chipsets and running operating systems such as Android and iOS. Object detection is useful, according to the blog post, for use cases such as identifying pieces of an outfit in a shopping app, detecting defects on a fast-moving conveyor belt, or assessing inventory on a retail shelf.</p>
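<p>Object-detection models generally emit candidate boxes with confidence scores, and the application keeps only high-scoring, non-overlapping ones. The sketch below shows generic score thresholding plus intersection-over-union suppression; it is not the AutoML Vision Edge output format, which the post does not specify:</p>

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def filter_detections(detections, score_thresh=0.5, iou_thresh=0.5):
    """detections: list of (box, score). Keep confident boxes that don't overlap kept ones."""
    kept = []
    for box, score in sorted(detections, key=lambda d: -d[1]):
        if score < score_thresh:
            continue
        if all(iou(box, k) < iou_thresh for k, _ in kept):
            kept.append((box, score))
    return kept
```

<p>This greedy non-maximum suppression is the standard post-processing step for the shelf-inventory and defect-detection scenarios the post mentions.</p>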



<p>The other updates are in AutoML Video and the Video Intelligence API. AutoML Video is a toolset designed to make it easier for users to train video-parsing AI models – and now it can track the movement of multiple items between frames through object detection. With object detection in AutoML Video, developers can create applications for tracking management, robotic navigation, and so on.</p>



<p>Furthermore, the Video Intelligence API, a part of AutoML Video, offers developers pre-trained machine learning models that automatically recognize a vast number of objects, scenes, and actions in stored and streaming video. This API now has a Video Intelligence Logo Recognition feature, providing detection and recognition of logos from more than 100,000 popular businesses and organizations in stored and streaming clips &#8211; which, according to the blog post, is useful for brand safety, ad placement, and sports sponsorship use cases.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-announces-updates-to-automl-vision-edge-automl-video-and-the-video-intelligence-api/">Google Announces Updates to AutoML Vision Edge, AutoML Video, and the Video Intelligence API</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-announces-updates-to-automl-vision-edge-automl-video-and-the-video-intelligence-api/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Why do we need citizen data scientists?</title>
		<link>https://www.aiuniverse.xyz/why-do-we-need-citizen-data-scientists/</link>
					<comments>https://www.aiuniverse.xyz/why-do-we-need-citizen-data-scientists/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 18 Jul 2019 12:06:53 +0000</pubDate>
				<category><![CDATA[Google Cloud AutoML]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[citizen]]></category>
		<category><![CDATA[data]]></category>
		<category><![CDATA[DataRobo]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[scientists]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4052</guid>

					<description><![CDATA[<p>Source: pesmedia.com With the industry in the midst of a skills shortage, businesses are struggling to fill the gap. Here Ramya Sriram, digital content manager of online platform <a class="read-more-link" href="https://www.aiuniverse.xyz/why-do-we-need-citizen-data-scientists/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/why-do-we-need-citizen-data-scientists/">Why do we need citizen data scientists?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: pesmedia.com</p>



<p>With the industry in the midst of a skills shortage, businesses are struggling to fill the gap. Here <strong>Ramya Sriram</strong>, digital content manager of online platform for freelance scientists Kolabtree, discusses the rise of the citizen data scientist.</p>



<p>The relentless growth of big data shows no sign of slowing down &#8211; so much so that if businesses want to glean actionable insights from their data, they must now turn to specialised predictive analytics.</p>



<p>Machine learning and artificial intelligence (AI) are also helping companies to make informed decisions and to predict customer behaviour. However, because there are not enough data scientists available to meet demand, companies are having to bridge the gap themselves.</p>



<p>The lack of data scientists is not the only issue holding businesses back. It is expensive to hire a full-time data scientist, which means start-ups and small and medium-sized enterprises (SMEs) often cannot afford to bring highly trained people into their in-house teams. This is a particular problem if the company doesn&#8217;t need a full-time data scientist but requires help with a one-off project, such as writing an algorithm, building a recommendation engine or designing a predictive model.</p>



<p><strong>The rise of the citizen data scientist</strong></p>



<p>Gartner coined the term citizen data scientist to describe a person who generates models that leverage predictive or prescriptive analytics, but whose primary job function is outside the field of statistics and analytics.</p>



<p>Employees can be upskilled, using only a small amount of business resources, to carry out some of the tasks previously done only by a data scientist, statistician or mathematician. Citizen data scientists rely on visualisation and other automated tools, such as DataRobot and Google Cloud AutoML, which make it easier for them to write algorithms and build models.</p>
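<p>At their core, automated tools like these run a &#8220;try several candidate models, keep the one that scores best on held-out data&#8221; loop on the user&#8217;s behalf. The sketch below is a rough, hypothetical illustration of that idea in plain Python with toy data &#8211; it is not the API of DataRobot, Google Cloud AutoML, or any other vendor, which automate far more (feature engineering, tuning, deployment).</p>
<pre><code>
# Hypothetical sketch of the model-selection loop that AutoML-style
# tools automate: fit several candidates, score each on validation
# data, and return the best. Stdlib Python only; toy data.

def mean_model(train_y):
    """Baseline: always predict the training mean."""
    mean = sum(train_y) / len(train_y)
    return lambda x: mean

def linear_model(train_x, train_y):
    """Least-squares fit of y = a*x + b on a single feature."""
    n = len(train_x)
    mx = sum(train_x) / n
    my = sum(train_y) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(train_x, train_y))
    var = sum((x - mx) ** 2 for x in train_x)
    a = cov / var if var else 0.0
    b = my - a * mx
    return lambda x: a * x + b

def mse(model, xs, ys):
    """Mean squared error of a model on a dataset."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def auto_select(train_x, train_y, val_x, val_y):
    """Fit each candidate, score on validation data, return the best."""
    candidates = {
        "mean": mean_model(train_y),
        "linear": linear_model(train_x, train_y),
    }
    return min(candidates.items(), key=lambda kv: mse(kv[1], val_x, val_y))

# Toy data with a clear linear trend: the linear model should win.
train_x, train_y = [1, 2, 3, 4], [2.1, 4.0, 6.2, 7.9]
val_x, val_y = [5, 6], [10.1, 11.8]
name, best = auto_select(train_x, train_y, val_x, val_y)
print(name)  # "linear"
</code></pre>
<p>A citizen data scientist never sees this loop &#8211; the tool searches over far richer model families automatically &#8211; but the selection principle is the same.</p>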



<p>While they do not have all the specialised skills of a traditional data scientist, citizen data scientists bring their own, industry-specific knowledge to the analysis. Whether the employee works in sales, marketing, finance, human resources or research, they will have a detailed understanding of the challenges facing their department and their industry. This domain expertise will come in useful when deriving insights in their field.</p>



<p><strong>Does this mean the end of the traditional data scientist?</strong></p>



<p>Citizen data scientists do not replace traditional data scientists; they complement them. Companies will still require highly qualified data scientists, particularly for complex and specialised analysis, or for large volumes of data that require specific tools and a high degree of accuracy. By working together, citizen data scientists and traditional data scientists can strengthen insights and ensure accuracy.</p>



<p>So, what can businesses do if they need a data scientist but don&#8217;t have the resources to hire one full time? Thankfully, there is another option. Companies can hire a freelance data scientist to access specialised data science skills on demand. A freelance data scientist can help a company with a one-off project, giving it access to the required skills independent of geographical location.</p>



<p>Citizen data scientists present a good solution for companies building models and using predictive analytics, because they reduce the strain on data scientists and minimise resource use. However, businesses still require the specialist skills of traditional data scientists. To access those skills, consider hiring a freelance data scientist, who can help with your project at a manageable, affordable cost.</p>
<p>The post <a href="https://www.aiuniverse.xyz/why-do-we-need-citizen-data-scientists/">Why do we need citizen data scientists?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/why-do-we-need-citizen-data-scientists/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
