<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Developers Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/developers/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/developers/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Mon, 07 Jun 2021 05:10:53 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>RPA DEVELOPERS AND DATA SCIENTISTS: THE IDEAL TEAM!</title>
		<link>https://www.aiuniverse.xyz/rpa-developers-and-data-scientists-the-ideal-team/</link>
					<comments>https://www.aiuniverse.xyz/rpa-developers-and-data-scientists-the-ideal-team/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 07 Jun 2021 05:10:51 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[data scientists]]></category>
		<category><![CDATA[Developers]]></category>
		<category><![CDATA[IDEAL]]></category>
		<category><![CDATA[RPA]]></category>
		<category><![CDATA[TEAM]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14051</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ Data scientists&#160;and&#160;RPA developers&#160;should collaborate to make a perfect team. If RPA developers work with data scientists, they can find more creative solutions to complex business problems than either group could working separately. Robotic process automation&#160;(RPA) is a cost-effective way of automating basic, repetitive tasks that humans would otherwise perform, with the help of various hardware and software systems that can perform on <a class="read-more-link" href="https://www.aiuniverse.xyz/rpa-developers-and-data-scientists-the-ideal-team/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/rpa-developers-and-data-scientists-the-ideal-team/">RPA DEVELOPERS AND DATA SCIENTISTS: THE IDEAL TEAM!</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h2 class="wp-block-heading"><strong>Data scientists</strong>&nbsp;and&nbsp;<strong>RPA developers</strong>&nbsp;should collaborate to make a perfect team.</h2>



<p>If RPA developers work with data scientists, they can find more creative solutions to complex business problems than either group could working separately.</p>



<p>Robotic process automation&nbsp;(RPA) is a cost-effective way of automating basic, repetitive tasks that humans would otherwise perform, using various hardware and software systems that work across different applications. RPA also takes over the manual processing of data, gathering more information for the company. Applying data analysis to this RPA-generated data can help businesses gain a deeper understanding of improvement opportunities and of different business structures and models, and help them meet customer demands faster.</p>



<p>RPA and data science have always shared a mutually beneficial relationship: RPA tools are built on the insights drawn from data analysis, and the predictive models of data science are used to enhance the capabilities of those tools.</p>



<p>The further advancement of&nbsp;robotic process automation&nbsp;into the realm of&nbsp;data science&nbsp;will be a remarkable transformation for business enterprises, since it will let them gather more data in a cost-effective and non-invasive manner.</p>



<p>The skills&nbsp;RPA developers&nbsp;and&nbsp;data scientists&nbsp;possess are different, but they complement each other. To understand why the two should collaborate, let us look at their roles and responsibilities.</p>



<h4 class="wp-block-heading"><strong>Role of an RPA developer</strong></h4>



<p>The primary responsibility of an RPA developer is designing, innovating, and implementing new RPA systems.&nbsp;Other responsibilities include:</p>



<ul class="wp-block-list"><li>Enable high-quality automation using quality assurance (QA) processes and prevent potential complexities.</li><li>Design business processes for automation.</li><li>Develop process documentation that refines business processes by highlighting mistakes and successes alike.</li><li>Provide instructions and guidance for process design.</li></ul>



<h4 class="wp-block-heading"><strong>Role of a</strong>&nbsp;<strong>Data Scientist</strong></h4>



<p>A data scientist analyzes and handles vast amounts of information to find patterns, customer behavior, trends, and potential risks in the market.&nbsp;Other responsibilities are:</p>



<ul class="wp-block-list"><li>Apply data science techniques such as machine learning, artificial intelligence, and statistical models to extract insights from company data.</li><li>Understand and select the right models and algorithms for different business tasks.</li><li>Cooperate with engineering and product development teams to produce solutions and strategies for complex business problems.</li><li>Develop predictive models and machine learning algorithms.</li></ul>



<h4 class="wp-block-heading"><strong>How do the two teams complement each other?</strong></h4>



<p>The skill set a data scientist possesses differs from an RPA developer&#8217;s. The two also have different temperaments, since their workflows and timelines are very different. When the workflows diverge, so do the mindsets, and that affects communication between the two teams.</p>



<p>But RPA developers can build more sophisticated processes working with the data science team than they can alone. Business leaders should understand the potential gains and encourage RPA developers to communicate with data scientists.</p>



<p>A forward-thinking business organization will not trade one valuable team off against the other; it will align them. RPA can automate parts of the data science workflow, such as generating candidate models and selecting the most suitable one for a unique business task. In turn, that automation lets data scientists invest more time in other important tasks and develop creative models that provide analytical solutions to critical business problems. Bottom line: combining these two teams will not only enhance productivity but also amplify business growth.</p>
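<p>As a hedged illustration of that model-selection step (the candidate models, names, and scoring below are hypothetical, not taken from the article), an automated pipeline might score each candidate against validation data and keep the best:</p>

```python
# Illustrative sketch: automated model selection, the kind of repetitive
# step an RPA-style pipeline could run for the data science team.
# The candidate "models" and data are toy stand-ins.

def mean_absolute_error(predict, data):
    """Average absolute error of a model over (input, expected) pairs."""
    return sum(abs(predict(x) - y) for x, y in data) / len(data)

def select_best_model(candidates, validation_data):
    """Return the (name, model) pair with the lowest validation error."""
    return min(
        candidates.items(),
        key=lambda item: mean_absolute_error(item[1], validation_data),
    )

# Two toy candidate models for some business metric.
candidates = {
    "linear": lambda x: 2 * x,
    "constant": lambda x: 10,
}
validation = [(1, 2), (2, 4), (3, 6)]  # the linear model fits perfectly

best_name, _ = select_best_model(candidates, validation)
print(best_name)  # -> linear
```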
<p>The post <a href="https://www.aiuniverse.xyz/rpa-developers-and-data-scientists-the-ideal-team/">RPA DEVELOPERS AND DATA SCIENTISTS: THE IDEAL TEAM!</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/rpa-developers-and-data-scientists-the-ideal-team/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>FICO Xpress Insight Empowers 8+ Million Python Developers to Foster Collaboration Between Data Scientists and Business Users, Drastically Accelerating Project Deployment</title>
		<link>https://www.aiuniverse.xyz/fico-xpress-insight-empowers-8-million-python-developers-to-foster-collaboration-between-data-scientists-and-business-users-drastically-accelerating-project-deployment/</link>
					<comments>https://www.aiuniverse.xyz/fico-xpress-insight-empowers-8-million-python-developers-to-foster-collaboration-between-data-scientists-and-business-users-drastically-accelerating-project-deployment/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 02 Mar 2021 11:18:04 +0000</pubDate>
				<category><![CDATA[Python]]></category>
		<category><![CDATA[8+ Million]]></category>
		<category><![CDATA[Collaboration]]></category>
		<category><![CDATA[Developers]]></category>
		<category><![CDATA[Empowers]]></category>
		<category><![CDATA[FICO Xpress]]></category>
		<category><![CDATA[scientists]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13172</guid>

					<description><![CDATA[<p>Source &#8211; https://www.prnewswire.com/ Using FICO Xpress Insight, Python Developers Can Help Business Leaders Make More Informed, Data-Driven Decisions Highlights: The addition of native Python support to FICO® Xpress Insight enables Python&#8217;s 8.2 million users to empower business professionals with easy-to-use applications that can execute sophisticated analytic models. With FICO® Xpress Insight business users and analysts can work with any <a class="read-more-link" href="https://www.aiuniverse.xyz/fico-xpress-insight-empowers-8-million-python-developers-to-foster-collaboration-between-data-scientists-and-business-users-drastically-accelerating-project-deployment/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/fico-xpress-insight-empowers-8-million-python-developers-to-foster-collaboration-between-data-scientists-and-business-users-drastically-accelerating-project-deployment/">FICO Xpress Insight Empowers 8+ Million Python Developers to Foster Collaboration Between Data Scientists and Business Users, Drastically Accelerating Project Deployment</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.prnewswire.com/</p>



<p>Using FICO Xpress Insight, Python Developers Can Help Business Leaders Make More Informed, Data-Driven Decisions</p>



<p><strong>Highlights:</strong></p>



<ul class="wp-block-list"><li>The addition of native Python support to FICO® Xpress Insight enables Python&#8217;s 8.2 million users to empower business professionals with easy-to-use applications that can execute sophisticated analytic models.</li><li>With FICO® Xpress Insight business users and analysts can work with any advanced analytic model in business terms to perform simulations, compare scenarios and visualize outcomes to make better informed decisions.</li><li>Python developers can now build and operationalize Python based models all within a single framework, reducing time to deployment by orders of magnitude.</li></ul>



<p>FICO, a global analytics leader, today announced it has added native Python support to FICO® Xpress Insight. Xpress Insight enables data scientists to quickly build and deploy any advanced analytic or optimization model as a powerful business application. </p>



<p>Across industries, data scientists create powerful models to solve complex business problems. Yet, according to most industry analysts, more than half of data science projects are never fully deployed. FICO Xpress Insight helps translate between the data scientist and the line-of-business user by taking highly complex analytic or optimization models and turning them into simple point-and-click applications that help those users make real business decisions.</p>



<p>&#8220;It&#8217;s a huge waste of time and resources when the models don&#8217;t reach the intended business users,&#8221; said&nbsp;<strong>Bill Waid, vice president and general manager of FICO® Decision Management Suite</strong>. &#8220;The real value of highly complex analytic models comes when they&#8217;re in the hands of the business users and become an integral part of their decision making process.&#8221;</p>



<p>The newest update to Xpress Insight enables the 8.2 million developers that use the popular coding language Python to rapidly build business applications that can execute sophisticated analytic models. Getting analytic applications into the hands of business users allows business leaders to make more informed decisions, perform simulations, compare scenarios and visualize outcomes.</p>



<p>&#8220;Python is one of the most productive tools for our digital and data science initiatives. Having Python supported in Xpress Insight helps us smoothly build end-to-end applications from data preparation, to modeling, to visualization,&#8221; said&nbsp;<strong>Sitao Zhang, data scientist, Supply Chain Digital and Data Science, Johnson and Johnson</strong>. &#8220;This new feature has not only made our applications more coherent during programming and development, but it also helps us deliver a more user-friendly experience to the business professionals using the applications to drive results.&#8221;</p>



<p>FICO® Xpress Insight is part of the FICO® Platform, a decisioning foundation critical for enterprises&#8217; digital transformation. The platform is designed to eliminate data siloes and enable interoperability between enterprise applications. By connecting data-derived insights from disparate business units, enterprises can respond quickly to customers&#8217; immediate needs and anticipate their future demands, resulting in deeper, more engaging customer experiences.</p>
<p>The post <a href="https://www.aiuniverse.xyz/fico-xpress-insight-empowers-8-million-python-developers-to-foster-collaboration-between-data-scientists-and-business-users-drastically-accelerating-project-deployment/">FICO Xpress Insight Empowers 8+ Million Python Developers to Foster Collaboration Between Data Scientists and Business Users, Drastically Accelerating Project Deployment</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/fico-xpress-insight-empowers-8-million-python-developers-to-foster-collaboration-between-data-scientists-and-business-users-drastically-accelerating-project-deployment/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What makes an effective microservices logging strategy?</title>
		<link>https://www.aiuniverse.xyz/what-makes-an-effective-microservices-logging-strategy/</link>
					<comments>https://www.aiuniverse.xyz/what-makes-an-effective-microservices-logging-strategy/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 24 Dec 2020 06:23:09 +0000</pubDate>
				<category><![CDATA[Microservices]]></category>
		<category><![CDATA[Developers]]></category>
		<category><![CDATA[logging]]></category>
		<category><![CDATA[Strategy]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12475</guid>

					<description><![CDATA[<p>Source: theserverside.com An effective microservices logging strategy can hinge on the size and scale of the system in question. For example, a microservices-oriented architecture composed of 20 microservices is less of a logging burden when compared to one composed of 200 microservices. Developers who hope to introduce a successful microservices logging strategy need to craft <a class="read-more-link" href="https://www.aiuniverse.xyz/what-makes-an-effective-microservices-logging-strategy/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-makes-an-effective-microservices-logging-strategy/">What makes an effective microservices logging strategy?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: theserverside.com</p>



<p>An effective microservices logging strategy can hinge on the size and scale of the system in question. For example, a microservices-oriented architecture composed of 20 microservices is less of a logging burden when compared to one composed of 200 microservices.</p>



<p>Developers who hope to introduce a successful microservices logging strategy need to craft a plan that lays out where the logging takes place and how it affects other areas of the system. Typical logging can stress a system in three ways: I/O, storage and analytic computation on the CPU.</p>



<p>Before a team deploys a microservices logging strategy, it must consider the potential stresses on the system and what they might mean for further development. Let&#8217;s examine ways to alleviate system stresses and some alternatives to traditional microservices logging strategies.</p>



<h3 class="wp-block-heading">Log data on the machine</h3>



<p>The easiest way to introduce logging on a microservices-oriented architecture is to have each microservice collect and store its logging data on the machine where it runs. This is the simplest approach to logging, but it&#8217;s also one fraught with danger.</p>



<p>Storage on a local machine significantly reduces I/O latency because all the activity takes place at a singular location. There are few, if any, trips out to the network. While logging data on the local machine helps improve performance, there is a tradeoff.</p>



<p>Increased storage places a significantly higher burden on the host machine&#8217;s CPU. Higher levels of activity result in more logging, which in turn creates more log data stored in the system and raises CPU utilization levels. A host machine can be maxed out in no time under this scenario.</p>
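<p>A rough sketch of on-machine logging with bounded storage (the service name, file name, and size limits here are illustrative, not prescribed by the article): Python&#8217;s standard library can rotate logs before they exhaust the host&#8217;s disk.</p>

```python
# Minimal sketch: each microservice writes to a local, size-capped log.
# Rotation bounds local storage use; the CPU cost of logging remains
# on the host machine, as the article notes.
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("order-service")  # hypothetical service name
logger.setLevel(logging.INFO)

# Keep at most ~1 MB per file plus 3 backups: bounded local storage.
handler = RotatingFileHandler("order-service.log",
                              maxBytes=1_000_000, backupCount=3)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("order accepted id=%s", 42)
```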



<p>Luckily, there are alternatives to this microservices logging strategy.</p>



<h3 class="wp-block-heading">Logging services</h3>



<p>A logging service can help alleviate the CPU utilization and storage concerns of on-machine logging.</p>



<p>A logging service&#8217;s main benefit is that the storage and processing work move off the system and onto third-party resources. All the microservice needs to do is take a trip out to the network to send its log entries.</p>
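<p>That idea can be sketched with the network transport injected as a callable (the handler class and names are illustrative assumptions; a real deployment might POST each entry to the service&#8217;s ingest endpoint):</p>

```python
# Sketch: a logging handler that ships each entry to a remote logging
# service. The transport is injected so the network call stays swappable
# (e.g. an HTTP POST to a hypothetical ingest URL).
import logging

class RemoteServiceHandler(logging.Handler):
    def __init__(self, send):
        super().__init__()
        self.send = send  # e.g. lambda line: requests.post(url, data=line)

    def emit(self, record):
        # One trip out to the network per log entry.
        self.send(self.format(record))

shipped = []  # stand-in for the remote service's storage
logger = logging.getLogger("payment-service")  # hypothetical service name
logger.setLevel(logging.INFO)
logger.addHandler(RemoteServiceHandler(shipped.append))

logger.info("charge ok")
print(shipped)  # -> ['charge ok']
```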



<p>While this doesn&#8217;t seem like a big deal on smaller architectures, it can be problematic if there are 200 microservices that run in a high-availability, multi-replica environment. In a situation like this, many trips to the network from many origins can cause a bottleneck and bring other network communication to a grinding halt.</p>



<p>In this case, there is another alternative that developer teams should consider.</p>



<h3 class="wp-block-heading">The collector strategy</h3>



<p>A collector strategy essentially shifts how log entries are sent in and out of the network.</p>



<p>Instead of sending each log entry out to a logging service, services send entries to a central collector that resides on a machine elsewhere. In most cases this machine is at least in the same data center as the microservices-oriented architecture. In the best case, it sits on the same data center rack that hosts the other microservices components.</p>



<p>Cloud-hosted microservices users will have to consult with their provider on identifying the best place to host the collector.</p>



<p>The collector does as its name implies: it collects all the log entries emitted from the architecture and then forwards them on to a logging service at a prescribed interval. Once the entries have been forwarded, the collector flushes the old log entries from the system to backup storage.</p>
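<p>The collector&#8217;s buffer-and-forward behavior might be sketched like this (the interval, the injected clock, and all names are illustrative assumptions, not a real collector&#8217;s API):</p>

```python
# Sketch of the collector: buffer incoming entries, forward a batch to
# the logging service once a prescribed interval has elapsed, then clear
# the buffer. The clock is injectable so the timing is easy to test.
import time

class LogCollector:
    def __init__(self, forward, interval_seconds=5.0, clock=time.monotonic):
        self.forward = forward        # ships a batch to the logging service
        self.interval = interval_seconds
        self.clock = clock
        self.buffer = []
        self.last_flush = clock()

    def collect(self, entry):
        self.buffer.append(entry)
        if self.clock() - self.last_flush >= self.interval:
            self.flush()

    def flush(self):
        if self.buffer:
            self.forward(list(self.buffer))
        self.buffer.clear()           # old entries leave the collector
        self.last_flush = self.clock()

batches = []
fake_now = [0.0]  # controllable clock for the demonstration
collector = LogCollector(batches.append, interval_seconds=5.0,
                         clock=lambda: fake_now[0])

collector.collect("a")   # buffered; interval not yet reached
fake_now[0] = 6.0
collector.collect("b")   # interval elapsed: the batch is forwarded
print(batches)  # -> [['a', 'b']]
```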



<p>One major benefit of this microservices logging strategy is that the collector absorbs the network latency incurred when it sends the log entries on to the logging service. Also, because the collector is close to the other components of the microservices-oriented architecture, latency between the architecture and the collector stays low.</p>



<p>However, there are still risks associated with a log collector. For example, if the central log collector fails, all logging activity comes to a standstill.</p>



<p>So, how can developers avoid this risk?</p>



<h3 class="wp-block-heading">Collector clusters</h3>



<p>To remove this single point of failure, developers can create a cluster of collectors that resides behind a common load balancer.</p>



<p>A benefit of the load-balanced log collector strategy is that if one collector fails, the others will remain operational and allow logging to continue. But there is a tradeoff.</p>
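<p>A load balancer normally handles this routing on its own, but the failover idea can be sketched from the client&#8217;s point of view (the collector endpoints here are stand-in callables, not a real API):</p>

```python
# Sketch of failover across a collector cluster: if one collector
# rejects an entry, the next one is tried, so logging continues as
# long as any collector in the cluster is up.

def send_with_failover(entry, collectors):
    """Try each collector in order; return the index that accepted the entry."""
    for i, send in enumerate(collectors):
        try:
            send(entry)
            return i
        except ConnectionError:
            continue  # this collector is down; try the next one
    raise RuntimeError("all collectors are down")

def failing(entry):
    raise ConnectionError("collector unreachable")

received = []  # stand-in for a healthy collector's buffer
used = send_with_failover("disk full on host-7", [failing, received.append])
print(used, received)  # -> 1 ['disk full on host-7']
```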



<p>This strategy requires a team to support a set of collectors on its network, which adds expense for the additional virtual machines. The logging environment also becomes more complex and requires more legwork from the other microservices in the environment.</p>



<p>Overall, the crux of the microservices logging conundrum is scale. If you run a smaller architecture without a lot of logging, keep the logging activity on the microservices hosts themselves. However, if you run an architecture with a lot of microservices, a more sophisticated logging strategy makes more sense, despite some potential drawbacks in cost and storage.</p>
<p>The post <a href="https://www.aiuniverse.xyz/what-makes-an-effective-microservices-logging-strategy/">What makes an effective microservices logging strategy?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-makes-an-effective-microservices-logging-strategy/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The CAP theorem, and how it applies to microservices</title>
		<link>https://www.aiuniverse.xyz/the-cap-theorem-and-how-it-applies-to-microservices/</link>
					<comments>https://www.aiuniverse.xyz/the-cap-theorem-and-how-it-applies-to-microservices/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 11 Dec 2020 05:12:26 +0000</pubDate>
				<category><![CDATA[Microservices]]></category>
		<category><![CDATA[application]]></category>
		<category><![CDATA[Databases]]></category>
		<category><![CDATA[Developers]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Microservice]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12411</guid>

					<description><![CDATA[<p>Source: searchapparchitecture.techtarget.com It&#8217;s not unusual for developers and architects who jump into microservices for the first time to &#8220;want it all&#8221; in terms of performance, uptime and resiliency. After all, these are the goals that drive a software team&#8217;s decision to pursue this type of architecture design. The unfortunate truth is that trying to create <a class="read-more-link" href="https://www.aiuniverse.xyz/the-cap-theorem-and-how-it-applies-to-microservices/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-cap-theorem-and-how-it-applies-to-microservices/">The CAP theorem, and how it applies to microservices</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: searchapparchitecture.techtarget.com</p>



<p>It&#8217;s not unusual for developers and architects who jump into microservices for the first time to &#8220;want it all&#8221; in terms of performance, uptime and resiliency. After all, these are the goals that drive a software team&#8217;s decision to pursue this type of architecture design. The unfortunate truth is that trying to create an application that perfectly embodies all of these traits will eventually steer them to failure.</p>



<p>This phenomenon is summed up in something called the CAP theorem, which states that a distributed system can deliver only two of the three overarching goals of microservices design: consistency, availability and partition tolerance. According to CAP, not only is it impossible to &#8220;have it all&#8221; &#8212; you may even struggle to deliver more than one of these qualities at a time.</p>



<p>When it comes to microservices, the CAP theorem seems to pose an unsolvable problem: which of these three things can you afford to trade away? The essential point is that you don&#8217;t have a choice about making the trade. You&#8217;ll have to face that fact at the design stage, and you&#8217;ll need to think carefully about the type of application you&#8217;re building, as well as its most essential needs.</p>



<p>In this article, we&#8217;ll review the basics of how the CAP theorem applies to microservices, and then examine the concepts and guidelines you can follow when it&#8217;s time to make a decision.</p>



<h3 class="wp-block-heading">CAP theory and microservices</h3>



<p>Let&#8217;s start by reviewing the three qualities CAP specifically refers to:</p>



<ul class="wp-block-list"><li><strong>Consistency</strong> means that all clients see the same data at the same time, no matter the path of their request. This is critical for applications that do frequent updates.</li><li><strong>Availability</strong> means that every request receives a valid response, even if some nodes in the system are down. This is particularly important if an application&#8217;s user population has a low tolerance for outages (such as a retail portal).</li><li><strong>Partition</strong> <strong>tolerance</strong> means that the application will operate even during a network failure that results in lost or delayed messages between services. This comes into play for applications that integrate with a large number of distributed, independent components.</li></ul>
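<p>A toy sketch of the consistency-versus-availability choice these definitions imply (the replica model is purely illustrative, not any particular database): during a partition, a consistency-first replica refuses to answer, while an availability-first replica serves possibly stale data.</p>

```python
# Toy model: a replica that has lost contact with its peers can either
# refuse to answer (CP behavior) or answer with possibly stale data
# (AP behavior). All names here are illustrative.

class Replica:
    def __init__(self, mode):
        self.mode = mode          # "CP" or "AP"
        self.value = "v1"         # last value this replica saw
        self.partitioned = False  # True when peers are unreachable

    def read(self):
        if self.partitioned and self.mode == "CP":
            # Consistency first: better no answer than a stale one.
            raise TimeoutError("cannot confirm latest value during partition")
        # Availability first: always answer, even if the value may be stale.
        return self.value

cp, ap = Replica("CP"), Replica("AP")
cp.partitioned = ap.partitioned = True

print(ap.read())  # -> v1 (possibly stale, but available)
try:
    cp.read()
except TimeoutError as e:
    print("CP replica:", e)
```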



<p>Databases often sit at the center of the CAP problem. Microservices often rely on NoSQL databases, since they&#8217;re designed to scale horizontally and support distributed application processes. And partition tolerance is a &#8220;must have&#8221; in these types of systems, because the networks connecting their distributed components are prone to failure.</p>



<p>You can certainly design these kinds of databases for consistency and partition tolerance, or for availability and partition tolerance. But designing for both consistency and availability just isn&#8217;t an option.</p>



<h2 class="wp-block-heading">The PACELC theorem</h2>



<p>This prohibitive requirement for partition tolerance in distributed systems gave rise to what is known as the PACELC theorem, a sibling to the CAP theorem. The acronym PACELC stands for &#8220;if Partitioned, then Availability or Consistency; Else, Latency or Consistency.&#8221; In other words: if there is a partition, the distributed system must trade availability against consistency; if not, the choice is between latency and consistency.</p>



<p>Designing your applications specifically to avoid partitioning problems in a distributed system will force you to sacrifice either availability or user experience to retain operational consistency. However, the key term here is &#8220;operational&#8221; &#8212; while latency is a primary concern during normal operations, a failure can quickly make availability the overall priority. So, why not create models for both scenarios?</p>



<p>It may help to frame CAP concepts in both &#8220;normal&#8221; and &#8220;fault&#8221; modes, given that faults in a distributed system are essentially inevitable. This lets you create two database and microservices implementation models: one that handles normal operation, and another that kicks in during failures. For example, you can design your database to optimize consistency during a partition failure, and then continue to focus on mitigating latency during normal operation.</p>
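<p>The two-mode idea can be sketched as a read path that prefers a fast local cache in normal operation and falls back to a majority read across replicas when a fault is detected (the names, data, and quorum logic are illustrative assumptions):</p>

```python
# Sketch: latency-first reads in normal mode, consistency-first reads
# in fault mode. The "quorum" here is a simple majority vote across
# whatever replicas remain reachable.
from collections import Counter

def read(key, local_cache, replicas, fault_mode):
    if not fault_mode:
        return local_cache[key]  # normal mode: optimize latency
    # Fault mode: ask every reachable replica and take the majority value.
    votes = Counter(r[key] for r in replicas if key in r)
    return votes.most_common(1)[0][0]

cache = {"stock": 99}                        # may lag behind the replicas
replicas = [{"stock": 100}, {"stock": 100}, {"stock": 99}]

print(read("stock", cache, replicas, fault_mode=False))  # -> 99
print(read("stock", cache, replicas, fault_mode=True))   # -> 100
```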



<h3 class="wp-block-heading">Applying PACELC to microservices</h3>



<p>If we use PACELC rather than &#8220;pure CAP&#8221; to define databases, we can classify them according to how they make the trades.</p>



<ul class="wp-block-list"><li>In PACELC terms, relational database management systems and NoSQL databases that implement ACID (atomicity, consistency, isolation, durability) are designed to assure consistency, classifying them as PC/EC. Typical business applications, like human resources apps and ticketing systems, will likely use this model, particularly if multiple users work through different component instances. Google&#8217;s Bigtable database is a good example of this.</li><li>Databases like MongoDB and in-memory data grids like Hazelcast fit into a PA/EC model, which is best suited for things like e-commerce apps that need high availability even during network or component failures.</li><li>Real-time applications, such as IoT systems, fit into the PC/EL model that databases like PNUTS provide. This is the case in any application where consistency across replications is critical.</li><li>Database systems based on the PA/EL model, such as Dynamo and Cassandra, are best for real-time applications that don&#8217;t experience frequent updates, since consistency will be less of an issue.</li></ul>



<h3 class="wp-block-heading">Know the tradeoffs</h3>



<p>The bottom line is this: It&#8217;s critical to know exactly what you&#8217;re trading in a PACELC-guided application, and to know which scenarios call for which sacrifice. Here are three things to remember when making your decision:</p>



<ul class="wp-block-list"><li><strong>Consistency</strong>&nbsp;is most valuable where many users update the same data elements.</li><li><strong>Availability</strong>&nbsp;is critical for applications involving consumers (who get frustrated easily) and also for some IoT applications.</li><li><strong>Latency</strong>&nbsp;is most likely critical for real-time and&nbsp;<a href="https://internetofthingsagenda.techtarget.com/definition/Internet-of-Things-IoT">IoT</a>&nbsp;applications where processing delays must be kept to a minimum.</li></ul>



<p>Make your database choice wisely. Then, design your microservices workflows and framework to ensure you don&#8217;t compromise your goals.</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-cap-theorem-and-how-it-applies-to-microservices/">The CAP theorem, and how it applies to microservices</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-cap-theorem-and-how-it-applies-to-microservices/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Unlock a new career in Google Cloud with this mastery bundle</title>
		<link>https://www.aiuniverse.xyz/unlock-a-new-career-in-google-cloud-with-this-mastery-bundle/</link>
					<comments>https://www.aiuniverse.xyz/unlock-a-new-career-in-google-cloud-with-this-mastery-bundle/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 14 Oct 2020 06:45:57 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AI applications]]></category>
		<category><![CDATA[AI technology]]></category>
		<category><![CDATA[Developers]]></category>
		<category><![CDATA[Google Cloud]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12210</guid>

					<description><![CDATA[<p>Source: androidguys.com You may not realize this, but you interact with AI technology on a consistent, if not daily basis. And if you do recognize it, chances are good that you take it for granted. Whether it’s a Spotify playlist, an Alexa reply, or one of the myriad cool things Google Assistant does, it’s powered <a class="read-more-link" href="https://www.aiuniverse.xyz/unlock-a-new-career-in-google-cloud-with-this-mastery-bundle/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/unlock-a-new-career-in-google-cloud-with-this-mastery-bundle/">Unlock a new career in Google Cloud with this mastery bundle</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: androidguys.com</p>



<p>You may not realize this, but you interact with AI technology on a consistent, if not daily basis. And if you do recognize it, chances are good that you take it for granted. Whether it’s a Spotify playlist, an Alexa reply, or one of the myriad cool things Google Assistant does, it’s powered by AI and cloud technology.</p>



<p>More and more, companies are turning to cloud technology for AI applications, and that means the demand for developers and architects is steadily rising.</p>



<p>The Google Cloud Platform, one of the largest in the space, is a suite of computing services and tools that power Google’s Search, YouTube, and much more. According to Glassdoor, a GCP Cloud Architect can pull in a starting salary of $120,000 to $160,000. Ready for a piece of that?</p>



<p>Google Cloud computing isn’t exactly something you master overnight. Hell, it could take you weeks just to form a basic understanding of it. It takes time to learn topics like deploying and implementing cloud solutions, software-defined networking, or virtual private clouds.</p>



<p>Fortunately, you can kick-start your education with some online training. Take the Google Cloud Certifications Practice Tests + Courses Bundle, for instance. This comprehensive online training features 43 hours of lectures and other tools to help prepare you for a career in the emerging field.</p>



<p>Sign up, and you’ll get lifetime access to the training, so feel free to really dig in and learn things. Or, if you’re like many of us, drop in and out and spend the rest of the pandemic period fine-tuning your skills.</p>



<p>Considering how incredibly valuable the information in this 7-course bundle is, $29.99 is a small price to pay. The content is worth more than $630 if you were to purchase the courses yourself, but we’d never let you pay that much.</p>



<h3 class="wp-block-heading">Save even more!</h3>



<p>In addition to the savings above, when you buy through AndroidGuys Deals, for every $25 spent, you get $1 credit added to your account. What’s more, should you refer the deal via social media or an email that results in a purchase, you’ll earn $10 credit in your account.</p>
<p>The post <a href="https://www.aiuniverse.xyz/unlock-a-new-career-in-google-cloud-with-this-mastery-bundle/">Unlock a new career in Google Cloud with this mastery bundle</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/unlock-a-new-career-in-google-cloud-with-this-mastery-bundle/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Programming languages: Julia users most likely to defect to Python for data science</title>
		<link>https://www.aiuniverse.xyz/programming-languages-julia-users-most-likely-to-defect-to-python-for-data-science/</link>
					<comments>https://www.aiuniverse.xyz/programming-languages-julia-users-most-likely-to-defect-to-python-for-data-science/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 27 Aug 2020 06:53:42 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[data scientists]]></category>
		<category><![CDATA[Developers]]></category>
		<category><![CDATA[Programming Languages]]></category>
		<category><![CDATA[Python]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11271</guid>

					<description><![CDATA[<p>Source: zdnet.com The open-source project behind Julia, a programming language for data scientists, has revealed which languages users would shift to if they decided no longer to use Julia. Julia, a zippy programming language that has roots at MIT, has published the results of its 2020 annual user survey. The study aims to uncover the <a class="read-more-link" href="https://www.aiuniverse.xyz/programming-languages-julia-users-most-likely-to-defect-to-python-for-data-science/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/programming-languages-julia-users-most-likely-to-defect-to-python-for-data-science/">Programming languages: Julia users most likely to defect to Python for data science</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: zdnet.com</p>



<p>The open-source project behind Julia, a programming language for data scientists, has revealed which languages users would switch to if they decided to stop using Julia.</p>



<p>The project behind Julia, a zippy programming language with roots at MIT, has published the results of its 2020 annual user survey. The survey aims to uncover the preferences of those who are building programs in the language. This year it attracted 2,565 Julia users and developers, up from 1,844 participants in 2019.</p>



<p>Python, a language that&#8217;s developed a strong affinity with data scientists for machine-learning applications, is overwhelmingly the language that Julia developers would turn to if they needed another language.</p>



<p>Regardless of which popularity index you look at, Python is in the top three, and its popularity is being driven by data scientists, a growing demand for machine-learning applications, and a wealth of Python modules that help extend its use in various fields.</p>



<p>But Julia, which developer analyst firm RedMonk has rated as a language to watch, does have decent support behind it too. Besides Julia Computing, the commercial side of the language, there is the Julia Lab at MIT&#8217;s Computer Science and AI Laboratory (CSAIL) and an open-source community gunning for its long-term success.</p>



<p>Last year, 73% of Julia users said they would use Python if they weren&#8217;t using Julia, but this year 76% nominated Python as the other language.</p>



<p>MATLAB, another Julia rival in statistical analysis, saw its share of Julia users as a top alternative language drop from 35% to 31% over the past year, but C++ saw its share on this metric rise from 28% to 31%.</p>



<p>Meanwhile, R, a popular statistical programming language with a dedicated crowd, also declined from 27% to 25%.</p>



<p>Some of these trends look positive for the long-term survival of Julia despite the threat posed by Python as the go-to language for data scientists.</p>



<p>The most frequently used languages after Julia are Python, and then Bash/Shell/PowerShell. And if Julia, which emerged in 2012, didn&#8217;t exist, most Julia users would be using C++, MATLAB, R, C, Fortran, Bash/Shell/PowerShell and Mathematica.</p>



<p>Julia users also revealed what they love and hate about the programming language, which Julia&#8217;s supporters claim is faster than Python and R for big-data analysis using CSV files for tasks like looking at stock-price states and analyzing mortgage risk.</p>



<p>The most-liked features include speed and performance, ease of use, its open-source status, and its ability to solve the two-language problem. Non-technical points in its favor are that it&#8217;s free, has an active community of developers, and is available under an MIT license, while creating packages for Julia is supposedly easy to do.</p>



<p>The negatives that Julia users report are that it&#8217;s too slow to generate a first plot and has slow compile times. There are also complaints that packages aren&#8217;t mature enough – a key point of difference from the Python ecosystem – and that developers can&#8217;t generate self-contained binaries or libraries.</p>



<p>Julia is also suffering from an adoption obstacle due to colleagues and collaborators using other languages. Rust, another modern language that&#8217;s become popular for systems programming, is experiencing similar adoption obstacles with users because the companies they work at don&#8217;t use it.</p>
<p>The post <a href="https://www.aiuniverse.xyz/programming-languages-julia-users-most-likely-to-defect-to-python-for-data-science/">Programming languages: Julia users most likely to defect to Python for data science</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/programming-languages-julia-users-most-likely-to-defect-to-python-for-data-science/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Why GitOps Is Becoming Important For Developers</title>
		<link>https://www.aiuniverse.xyz/why-gitops-is-becoming-important-for-developers/</link>
					<comments>https://www.aiuniverse.xyz/why-gitops-is-becoming-important-for-developers/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 27 Aug 2020 06:25:11 +0000</pubDate>
				<category><![CDATA[Microservices]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[Developers]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[GitOps]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11261</guid>

					<description><![CDATA[<p>Source: analyticsindiamag.com When we look back into the history of web operations, it required a server room with people handling all the hardware. We had to plan well ahead to buy enough hardware to be able to scale applications. There were rules to access the server rooms and other security measures such as firewalls. It <a class="read-more-link" href="https://www.aiuniverse.xyz/why-gitops-is-becoming-important-for-developers/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/why-gitops-is-becoming-important-for-developers/">Why GitOps Is Becoming Important For Developers</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsindiamag.com</p>



<p>When we look back into the history of web operations, it required a server room with people handling all the hardware. We had to plan well ahead to buy enough hardware to be able to scale applications. There were rules to access the server rooms and other security measures such as firewalls. It involved manual deployments and infra management.&nbsp;</p>



<p>Then we saw the rise of DevOps, which became a major buzzword in the industry. Now there is a new term, GitOps, which is taking hold across microservices and container-based platforms.&nbsp;</p>



<p>Before getting into what GitOps is, recall that Git is a distributed version control system in which developers manage the source code of their applications. Text files, certificate files and configuration files can all be maintained inside Git. Developers also use it to collaborate with other members of the team and to manage code efficiently. As for the Ops part, the term comes from DevOps, where it covers releasing, deploying, operating and monitoring code. </p>



<h3 class="wp-block-heading">The Rise Of GitOps</h3>



<p>GitOps is the operational practice that uses Git as a single source of truth. Note that the source control repository on Git becomes the source of truth, not the actual servers or clusters. This is essentially infrastructure as code, meaning all of your infrastructure setup lives in a codebase. It also covers the automation of deployments, rollbacks and more. The Git repo can serve as the version control system, the peer-review system, and the automated deployment process for the production environment.&nbsp;</p>



<p>Using Git itself, developers now run continuous delivery through automated pipelines. Additionally, webhooks from Git can be leveraged to push these configurations into the dev and test environments. Once you merge a pull request onto the main branch, the deployment to production happens. </p>



<h3 class="wp-block-heading">How GitOps Is Maintained</h3>



<p>GitOps allows automating everything using pipelines and deploying that to production once you merge the code into your production branch. It is called GitOps because all the configurations are managed in the Git repository.&nbsp;</p>



<p>Many developers also deploy the infrastructure code as part of their automation process, keeping a separate repository for each application or service. Say you have ten microservices: that means ten GitOps repositories, each holding the infrastructure code for one service. Moreover, GitOps demands that developers keep separate branches for each environment.</p>



<p>Let’s say you have three environments, namely dev, test and prod; there should then be three branches: dev, test and prod. Each branch maps to a different environment in the Kubernetes cluster, and each has an automated pipeline set up for it. Whenever there is a change on a particular branch, the pipeline deploys it to that environment, then identifies, tests and verifies that the environment looks all right.&nbsp;&nbsp;</p>



<p>This way, when developers make a change on the dev branch and the dev pipeline succeeds, they can open a pull request to merge it into the production branch. Once the merge is approved, the change deploys to the production environment. If you want to roll back, you create another pull request that returns the branch to its previous state.&nbsp;</p>
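<p>To make the rollback-by-pull-request idea concrete, here is a minimal, purely illustrative Python sketch. The <code>ConfigRepo</code> class and its methods are invented for this example and are not any real tool’s API; the point is only that in GitOps the repository history is append-only, so a rollback is just another commit that restores an earlier desired state.</p>

```python
# Illustrative sketch only: GitOps treats the Git repo as the single source
# of truth, so a "rollback" is a new commit restoring an earlier desired
# state. ConfigRepo stands in for a real Git repository (hypothetical API).

class ConfigRepo:
    def __init__(self):
        self.history = []               # append-only list of desired states

    def commit(self, desired_state):
        self.history.append(dict(desired_state))

    def head(self):
        return self.history[-1]         # state the cluster should converge to

    def rollback(self):
        # Like "git revert": re-commit the previous state rather than
        # rewriting or deleting history.
        self.commit(self.history[-2])

repo = ConfigRepo()
repo.commit({"image": "app:v1", "replicas": 2})
repo.commit({"image": "app:v2", "replicas": 2})
repo.rollback()                          # the "rollback pull request"
assert repo.head()["image"] == "app:v1"
assert len(repo.history) == 3            # rollback added a commit, removed none
```

<p>A real GitOps agent would then notice the new head commit and converge the cluster to it, exactly as it does for any other change.</p>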



<p>So when a user changes the code in the Git repository, the pipeline builds a container image and pushes it to the container registry, and a config updater then records the new image in the configuration. When you create a pull request to merge into a different branch, the change deploys to the corresponding environment and is then tested and verified.&nbsp;</p>



<p>This way, every time you raise a pull request you know what you are merging, and the pull request is reviewed by somebody against the success criteria of that branch’s automated pipeline. This is how GitOps helps teams solve the automation problem.&nbsp;</p>
<p>The post <a href="https://www.aiuniverse.xyz/why-gitops-is-becoming-important-for-developers/">Why GitOps Is Becoming Important For Developers</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/why-gitops-is-becoming-important-for-developers/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Programming Languages on the Rise: Swift, Go, and… Perl?</title>
		<link>https://www.aiuniverse.xyz/programming-languages-on-the-rise-swift-go-and-perl/</link>
					<comments>https://www.aiuniverse.xyz/programming-languages-on-the-rise-swift-go-and-perl/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 12 Aug 2020 07:40:45 +0000</pubDate>
				<category><![CDATA[Data Mining]]></category>
		<category><![CDATA[data mining]]></category>
		<category><![CDATA[Developers]]></category>
		<category><![CDATA[Programming Languages]]></category>
		<category><![CDATA[Python]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10830</guid>

					<description><![CDATA[<p>Source: insights.dice.com The latest edition of the TIOBE Index, which attempts to gauge the popularity of the world’s programming languages, reveals something fascinating: Go, Swift, Perl and R have gained substantial ground over the past year. But can any of them challenge the older, more-established languages (such as C, Java, and Python) for TIOBE’s top slots? <a class="read-more-link" href="https://www.aiuniverse.xyz/programming-languages-on-the-rise-swift-go-and-perl/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/programming-languages-on-the-rise-swift-go-and-perl/">Programming Languages on the Rise: Swift, Go, and… Perl?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: insights.dice.com</p>



<p>The latest edition of the TIOBE Index, which attempts to gauge the popularity of the world’s programming languages, reveals something fascinating: Go, Swift, Perl and R have gained substantial ground over the past year. But can any of them challenge the older, more-established languages (such as C, Java, and Python) for TIOBE’s top slots?</p>



<p>It’s worth noting that Go, Swift, and R were among the languages that developers generally wanted to learn next, according to HackerRank’s 2020 Developer Skills Report (which surveyed 116,000 developers worldwide). Go also ranked highly on IEEE Spectrum’s recent list of the top programming languages for the web. </p>



<p>The TIOBE Index just reinforces that these are languages to watch. “The programming language R continues to rise and is on schedule to become TIOBE’s programming language of the year 2020,” Paul Jansen, CEO of TIOBE Software, wrote in a note accompanying the data. “It is also interesting to follow the on-going fight for position #10 in the TIOBE index between Go, Swift and SQL. Swift lost 2 positions this month (from #10 to #12). SQL took over and is back in the top 10 this time. Also worth noting is Groovy‘s re-entrance in the TIOBE index top 20 at the expense of Scratch and the fact that Hack entered the top 50 at position #44.” </p>



<p>To generate its rankings, TIOBE crunches data from various aggregators and search engines, including Google, Wikipedia, YouTube, and Amazon. In order for a language to rank, it must be Turing complete, have its own Wikipedia entry, and earn more than 5,000 hits for +”&lt;language> programming” on Google. Critics complain that TIOBE more accurately measures “buzz” than actual language usage, but it’s nonetheless a useful ranking for determining what’s on developers’ (and other technologists’) minds when it comes to programming languages. </p>
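<p>As a rough illustration of those eligibility criteria, here is a hedged Python sketch. The data structure and function name are invented for this example and are not part of TIOBE’s actual methodology; only the three criteria themselves come from the article.</p>

```python
# Sketch of TIOBE's stated eligibility filter, as described above: a language
# must be Turing complete, have its own Wikipedia entry, and earn more than
# 5,000 Google hits for +"<language> programming". Names here are invented.
def tiobe_eligible(lang):
    return (lang["turing_complete"]
            and lang["has_wikipedia_entry"]
            and lang["google_hits"] > 5000)

candidates = [
    {"name": "Go",   "turing_complete": True, "has_wikipedia_entry": True,  "google_hits": 120_000},
    {"name": "Blub", "turing_complete": True, "has_wikipedia_entry": False, "google_hits": 9_000},
]
eligible = [c["name"] for c in candidates if tiobe_eligible(c)]
assert eligible == ["Go"]
```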



<p>R’s rise neatly counters the general narrative that the language, which is mainly used by researchers and data scientists for data-crunching, is slowly imploding. In July, R jumped to eighth place on TIOBE’s list, where it stayed through this month. “There are 2 trends that might boost the R language: 1) the days of commercial statistical languages and packages such as SAS, Stata and SPSS are over,” TIOBE wrote in a note accompanying the data at the time. “Universities and research institutes embrace Python and R for their [statistical] analyses, 2) lots of statistics and data mining need to be done to find a vaccine for the COVID-19 virus.”</p>



<p>TIOBE has also claimed in the past that Perl’s future is in serious doubt, yet this latest update suggests a core of developers aren’t giving the language up. Perhaps Perl’s legacy codebases are behind this endurance. Go and Swift, meanwhile, are pushed by Google and Apple developers, respectively, which gives them a significant leg up over other languages. It might be some time, though, before the dominance of Java, Python, and C is seriously threatened. </p>
<p>The post <a href="https://www.aiuniverse.xyz/programming-languages-on-the-rise-swift-go-and-perl/">Programming Languages on the Rise: Swift, Go, and… Perl?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/programming-languages-on-the-rise-swift-go-and-perl/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Monolithic versus Microservice architecture</title>
		<link>https://www.aiuniverse.xyz/monolithic-versus-microservice-architecture/</link>
					<comments>https://www.aiuniverse.xyz/monolithic-versus-microservice-architecture/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 24 Jul 2020 06:59:07 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[Architecture]]></category>
		<category><![CDATA[Developers]]></category>
		<category><![CDATA[IDE]]></category>
		<category><![CDATA[Loosely-coupled]]></category>
		<category><![CDATA[Microservices]]></category>
		<category><![CDATA[Monolithic Architecture]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10441</guid>

					<description><![CDATA[<p>Source: enterprisetimes.co.uk A monolithic architecture is one built from a single piece of material; therefore, a monolithic application has a single code base with multiple modules that are divided into business features and technical features. Microservices is an architecture used to separate a monolithic application into several independent services. A microservice application consists of a <a class="read-more-link" href="https://www.aiuniverse.xyz/monolithic-versus-microservice-architecture/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/monolithic-versus-microservice-architecture/">Monolithic versus Microservice architecture</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: enterprisetimes.co.uk</p>



<p>A monolithic architecture is one built from a single piece of material; therefore, a monolithic application has a single code base with multiple modules that are divided into business features and technical features.</p>



<p>Microservices is an architecture used to separate a monolithic application into several independent services. A microservice application consists of a collection of services. Each service can have multiple runtime instances and be deployed independently.</p>



<p>Both monolithic and microservice architectures have several advantages and disadvantages relative to each other, particularly when it comes to operational overhead. Let us look at how the two compare.</p>



<h3 class="wp-block-heading">Advantages of monolithic architecture.</h3>



<p>One codebase: a monolithic application is built as one large system, with all of its services and modules maintained in a single codebase.</p>



<ul class="wp-block-list"><li><strong>Application integration:</strong>&nbsp;Monolithic architecture integrates well with services such as messaging and REST APIs, which makes the system easy to use.</li><li><strong>Accessibility:</strong>&nbsp;Monolithic apps make cross-cutting concerns such as performance monitoring, logging, and configuration management easy to handle in one place.</li><li><strong>Technology:</strong>&nbsp;A monolithic application uses the same technology stack throughout.</li><li><strong>Memory:</strong>&nbsp;Components in a monolith typically have a performance advantage, because shared-memory access is faster than inter-process communication (IPC).</li></ul>



<h3 class="wp-block-heading">Disadvantages of monolithic architecture.</h3>



<ul class="wp-block-list"><li>An error in any module of a monolithic application can bring the entire application down.</li><li>Changes to the technology stack can be expensive in both time and cost, although this varies from case to case.</li><li>Security issues may occur because there is no isolation among the various components of the program.</li><li>The large codebase of a monolithic application can be hard to understand.</li></ul>



<h3 class="wp-block-heading">What are microservices?</h3>



<p>Microservices are a way of breaking large software projects into loosely coupled modules, which communicate with one another through application programming interfaces (APIs).</p>

<h3 class="wp-block-heading">Advantages of microservices architecture.</h3>

<ul class="wp-block-list"><li>Microservices can be more beneficial for complex and evolving applications, offering practical solutions for handling a complicated system of different functions and services within one application.</li><li>Only the components that need it have to be scaled, which optimizes resource usage in a microservices-based application.</li><li>Microservices components are loosely coupled, so they are not interdependent and can be tested individually.</li></ul>
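<p>A minimal Python sketch of that loose coupling, with invented names (illustrative only, not any particular framework): in the monolithic style one module reaches directly into another’s data, while in the microservice style each service owns its data and callers go through its narrow interface.</p>

```python
# Monolithic style: modules call each other directly and share state.
def monolith_checkout(cart, inventory):
    for item in cart:
        inventory[item] -= 1          # direct access to another module's data
    return len(cart)

# Microservice style: each service owns its data; callers use its API only.
class InventoryService:
    def __init__(self, stock):
        self._stock = dict(stock)     # private to this service

    def reserve(self, item):          # the service's public interface
        if self._stock.get(item, 0) > 0:
            self._stock[item] -= 1
            return True
        return False

class OrderService:
    def __init__(self, inventory_api):
        self.inventory = inventory_api  # depends on an interface, not on data

    def checkout(self, cart):
        return [item for item in cart if self.inventory.reserve(item)]

inv = InventoryService({"book": 1})
orders = OrderService(inv)
assert orders.checkout(["book", "book"]) == ["book"]  # second reserve refused
```

<p>Because OrderService only sees InventoryService’s interface, the inventory implementation can be replaced, scaled, or tested on its own, which is the loose coupling the bullet points above describe.</p>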



<h3 class="wp-block-heading">Disadvantages of microservices.</h3>



<ul class="wp-block-list"><li>As the application grows, so does the codebase, which may overload your development environment every time it loads the application, reducing developer productivity.</li><li>When the application is packaged as one EAR or WAR, changing its technology stack can be a challenge.</li><li>If any single application function fails, the entire application goes down; likewise, if a particular function starts consuming more processing power than the rest, the entire application&#8217;s performance can become compromised.</li></ul>



<h3 class="wp-block-heading">Conclusion</h3>



<p>This article has illustrated the differences between monolithic and microservices architecture. Having read it, you should understand that a microservices architecture essentially separates a monolithic application into several independent services.</p>
<p>The post <a href="https://www.aiuniverse.xyz/monolithic-versus-microservice-architecture/">Monolithic versus Microservice architecture</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/monolithic-versus-microservice-architecture/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
<title>Consumer IoT – European Commission initiates inquiry into the consumer Internet of Things sector</title>
		<link>https://www.aiuniverse.xyz/consumer-iot-european-commission-initiates-inquiry-into-the-consumer-internet-of-things-secto/</link>
					<comments>https://www.aiuniverse.xyz/consumer-iot-european-commission-initiates-inquiry-into-the-consumer-internet-of-things-secto/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 22 Jul 2020 07:52:06 +0000</pubDate>
				<category><![CDATA[Internet of things]]></category>
		<category><![CDATA[Developers]]></category>
		<category><![CDATA[European Commission]]></category>
		<category><![CDATA[Internet of Things]]></category>
		<category><![CDATA[software]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10382</guid>

					<description><![CDATA[<p>Source: jdsupra.com The European Commission (“Commission”) has launched an antitrust sector inquiry into the Internet of Things (“IoT”) sector for consumer-related products and services within the European Union. The Commission is looking to develop a better understanding of how this fast-moving sector works and some of the potential issues that may arise from a competition <a class="read-more-link" href="https://www.aiuniverse.xyz/consumer-iot-european-commission-initiates-inquiry-into-the-consumer-internet-of-things-secto/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/consumer-iot-european-commission-initiates-inquiry-into-the-consumer-internet-of-things-secto/">Consumer IoT – European Commission initiates inquiry into the consumer Internet of Things sector</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: jdsupra.com</p>



<p>The European Commission (“Commission”) has launched an antitrust sector inquiry into the Internet of Things (“IoT”) sector for consumer-related products and services within the European Union. The Commission is looking to develop a better understanding of how this fast-moving sector works and some of the potential issues that may arise from a competition law perspective. The regulator intends imminently to send requests for information to a range of players in this sector and already plans to publish a preliminary report on its findings in the spring of 2021. As such, the inquiry offers companies in the IoT sector an opportunity to steer the Commission’s approach to competition in this area.</p>



<p>On 16 July 2020, the Commission announced that it has launched an antitrust sector inquiry into the consumer IoT sector – a sector offering consumer-related products and services that allow users to control their surroundings through the internet – for example, through a voice assistant or a mobile device. A sector inquiry, based on Article 17 of Regulation 1/2003, is an investigation into a sector(s) of the economy that the Commission carries out when it has reason to believe that the sector in question is potentially not functioning properly from a competition law perspective. The IoT review follows other sector inquiries conducted in recent years by the Commission (in financial services, energy, pharmaceuticals and, most recently, in e-commerce).</p>



<p><strong>What is the focus of the inquiry?</strong></p>



<p>Although the IoT sector is relatively new, in certain instances it is characterised by strong network effects and economies of scale. As such, the Commission appears to be concerned that certain practices by market players have the potential structurally to distort competition. In particular, the Commission is looking into whether practices involving restrictions on data access and interoperability, forms of self-preferencing and the use of proprietary standards (among companies active in this space) might lead to less competitive markets. In addition, the Commission may be worried that the IoT space is at a so-called “tipping point” (i.e. a point at which an otherwise competitive market is at risk of being consolidated among fewer, stronger players and becoming irreversibly monopolized).</p>



<p>The inquiry will focus on products and services (and companies producing/providing them) such as wearable devices (smart watches or fitness trackers, among others) and smart home consumer devices (such as fridges, washing machines, smart TVs, smart speakers and lighting systems). These often rely on significant amounts of user data creating a risk, according to the Commission, that data-rich companies will be able to control parts of the digital market. The Commission will also gather more information on the services available via smart devices, such as music and video streaming as well as voice assistants, and will investigate whether such services limit the options available to customers.</p>



<p>Margrethe Vestager, Executive Vice-President of the Commission and Commissioner for Competition, confirmed that the Commission will be particularly interested in:</p>



<ul class="wp-block-list"><li>The products sold and how the markets for those products work;</li><li>How data is used, collected and monetised; and</li><li>How products and services in this sector work together, including potential problems with making them interoperable.</li></ul>



<p><strong>Why is this significant for companies active in the IoT space?</strong></p>



<p>The information gained from the inquiry will assist the Commission in understanding the nature, prevalence and effects of potential anti-competitive conduct, if any, in the IoT space. Should the Commission identify specific competition concerns as a result of the inquiry, it could open antitrust investigations (as it has in the past) to ensure that market players comply with Articles 101 and 102 of the Treaty on the Functioning of the European Union (relating to the prohibition of restrictive business practices and abuse of a dominant position respectively).</p>



<p><strong>What’s next?</strong></p>



<p>The Commission has announced that it will send requests for information to over 400 companies within the IoT sector for consumer-related products and services in Europe, Asia and America. Companies of particular interest to the Commission are thought to be smart device manufacturers, software developers and related service providers.</p>



<p>The Commission plans to publish a preliminary report on the replies for consultation in the spring of 2021, with a final report expected to follow in the summer of 2022. Compared to other sector inquiries, which usually take a number of years to complete, this is a much tighter timeline.</p>



<p>As outlined above, the Commission already seems to have a narrow idea of the issues and concerns it is looking to identify and potentially further investigate. By shortly sending requests for information to companies active at all levels of the IoT value chain, the Commission is looking for information to help it shape its approach to competition. This means that companies which are active in the IoT sector can play an important role in informing and steering the Commission’s approach.</p>
<p>The post <a href="https://www.aiuniverse.xyz/consumer-iot-european-commission-initiates-inquiry-into-the-consumer-internet-of-things-secto/">Consumer IoT – European Commission initiates inquiry into the consumer Internet of Things sector</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/consumer-iot-european-commission-initiates-inquiry-into-the-consumer-internet-of-things-secto/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
