<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>eim Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/eim/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/eim/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 21 Feb 2020 05:35:51 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>From Service-Oriented Architecture to Microservices</title>
		<link>https://www.aiuniverse.xyz/from-service-oriented-architecture-to-microservices/</link>
					<comments>https://www.aiuniverse.xyz/from-service-oriented-architecture-to-microservices/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 21 Feb 2020 05:32:49 +0000</pubDate>
				<category><![CDATA[Microservices]]></category>
		<category><![CDATA[Digital Workplace]]></category>
		<category><![CDATA[eim]]></category>
		<category><![CDATA[geetika tandon]]></category>
		<category><![CDATA[information management]]></category>
		<category><![CDATA[SERVICE ORIENTED ARCHITECTURE]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6945</guid>

					<description><![CDATA[<p>Source: aiuniverse.xyz Legacy systems still form the backbone of many enterprises. Yet as the demand for efficiency, scale, reliability and agility grow larger, we&#8217;ve seen an evolution <a class="read-more-link" href="https://www.aiuniverse.xyz/from-service-oriented-architecture-to-microservices/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/from-service-oriented-architecture-to-microservices/">From Service-Oriented Architecture to Microservices</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: aiuniverse.xyz</p>



<p>Legacy systems still form the backbone of many enterprises. Yet as the demand for efficiency, scale, reliability and agility grows, we&#8217;ve seen an evolution in these underlying technologies to meet those needs. Let&#8217;s explore some of these technologies, their history and their evolution to see why such a change was inevitable. In today&#8217;s digital economy, organizations need to operate at a very different speed than was previously acceptable and embrace change in their competitive landscape and products.</p>

<h4 class="wp-block-heading">Service-Oriented Architecture to the Rescue</h4>



<p>Legacy systems, which are the mainstay of many enterprises, weren&#8217;t developed to support the implementation and adoption of new technologies and growing economies working at breakneck speed. Consequently, as the number of digital transformation initiatives increases and the speed of expected delivery intensifies, IT leaders become overwhelmed by the sheer number of requests across the systems. Moreover, existing legacy interfaces, developed in a world of daily batch calls, are not fit for purpose for today’s digital channels that require real-time data.&nbsp;</p>



<p>Enter service-oriented architecture (SOA), with its promise of speeding up project delivery, increasing IT agility and scalability, and reducing integration costs. Gartner analyst Roy Schulte defined service-oriented architecture in 1996 as follows:</p>



<p>“<em>A service-oriented architecture is a style of multi-tier computing that helps organizations share logic and data among multiple applications and usage modes</em>.”</p>



<p>The goal of SOA is to create independent services, each representing a single business activity with a specified outcome, that are self-contained and can be consumed by others through an exposed interface, irrespective of implementation details. However, as SOA was adopted by organizations across the world, SOA governance requirements, large-scale ESB integrations and the need for large service registries made implementations heavy and monolithic.&nbsp;</p>



<p>The original promise of SOA was to speed up project delivery, increase agility and reduce costs. However, SOA adopters found that it increased complexity and introduced bottlenecks. Although teams were able to create faster connections, they also needed to maintain a large ESB implementation which slowed down time to production and didn&#8217;t provide a reasonable return on investment.</p>



<p>Microservices are, in fact, the next step in the evolution of service-oriented architectures. A microservice is:</p>



<ul class="wp-block-list"><li><strong>Functionally Scoped:</strong>&nbsp;Microservices design is based on services and applications that accomplish one narrowly defined business function. A microservice need not necessarily be small; its size depends on the complexity of the business function it accomplishes. However, it will be smaller than an application that contains its functionality as well as other business functions.</li><li><strong>Autonomous:&nbsp;</strong>An essential element of a microservice is that it should be autonomous. That is, it should be able to function on its own, without the need for other services. Other services might be layered on top of it (such as a service that handles user authentication), but it performs its business function independently. It can be developed and tested independently, and it can be deployed independently.&nbsp;</li></ul>
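<p>To make the two properties above concrete, here is a minimal sketch (not from the original article) of a hypothetical, functionally scoped "order total" microservice: it owns exactly one narrowly defined business function, exposes it over HTTP, and runs with no dependencies on other services. All names and the JSON shape (<code>order_total</code>, <code>price</code>, <code>qty</code>) are illustrative assumptions.</p>

```python
# Hypothetical "order-total" microservice: one business function,
# self-contained, exposed over HTTP using only the standard library.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def order_total(items):
    """The single business function this service owns."""
    return sum(i["price"] * i["qty"] for i in items)

class OrderTotalHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON list of line items and return the computed total.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        payload = json.dumps({"total": order_total(json.loads(body))}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep the sketch quiet

# Start the service in-process so the example is self-contained.
server = ThreadingHTTPServer(("127.0.0.1", 0), OrderTotalHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any consumer only needs the exposed interface, not the implementation.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/",
    data=json.dumps([{"price": 2.5, "qty": 4}]).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
server.shutdown()
print(result)  # {'total': 10.0}
```

<p>Because the service is autonomous, this sketch can be developed, tested and deployed on its own; here it is even exercised in-process by a plain HTTP client that knows nothing about its internals.</p>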



<p>More than anything else, a microservices design forces us to rethink the way we plan projects and lead teams. It affects how we think of deliverables, application lifecycles and time to production. It is amenable to a DevSecOps-based approach built on cross-functional scrum teams with a focus on automation, speed and agility. In some ways it is akin to the change in thinking that came with assembly-line production and the Lean philosophies that revolutionized the manufacturing industry in the early 20<sup>th</sup>&nbsp;century. Some of the benefits of using a microservice-based architecture are:</p>



<p><strong>Speed</strong><strong>&nbsp;—&nbsp;</strong>Since a microservice is an autonomous unit, independent scrum teams can develop, test and put it into production independently of other parts of the system. Each unit provides a critical and unique piece of functionality, but no single unit prevents the whole from functioning. Hence services can be created and deployed to production by small scrum teams.&nbsp;</p>



<p><strong>Agility</strong>&nbsp;<strong>—</strong>&nbsp;An agile environment thrives on small units that can be built by scrum teams of six to eight, tested and added to the release pipeline. Microservices not only work but thrive in an agile environment, enabling quick, frequent releases of independent units that can be promoted to production autonomously.&nbsp;</p>



<p><strong>Flexibility —</strong>&nbsp;The autonomy and lack of dependencies in microservices provide a number of advantages: teams can use the language and tools that best fit the problem, they can test, build and deploy functionality without being impeded by other teams and services, and the code base each team must manage is considerably smaller and simpler. Microservices also provide the flexibility to try out a new technology stack on an individual service as needed. There aren&#8217;t as many dependency concerns, and rolling back changes becomes much easier. With less code in play, there is more flexibility.</p>



<p>And last but not least,&nbsp;<strong>Simplicity —</strong>&nbsp;Microservices give us the smallest unit of productivity in a complex ecosystem of IT services within any organization — like the cells within the complex human body. They force organizations to think in terms of their simplest business functions and smallest units of work. In addition, rather than working on an element of a centrally managed project, each team in a microservices architecture is free to innovate within the context of a simple business function, promoting innovation and risk-taking without putting the entire organization at risk.</p>



<p>Thinking about and designing our applications in terms of small independent units is the first step toward building a modern infrastructure that is nimble, agile and scalable. While there will still be technical debt as teams balance timelines and design, it is easier to pay down that debt. In fact, as organizations evolve and modify their requirements, IT departments can work alongside them, replacing services rather than maintaining them.</p>



<h4 class="wp-block-heading">About the Author</h4>



<p>Geetika Tandon is a senior director at Booz Allen Hamilton, a management and technology consulting firm. She was born in Delhi, India, and holds a Bachelor&#8217;s in architecture from Delhi University, a Master&#8217;s in architecture from the University of Southern California and a Master&#8217;s in computer science from the University of California, Santa Barbara.</p>
<p>The post <a href="https://www.aiuniverse.xyz/from-service-oriented-architecture-to-microservices/">From Service-Oriented Architecture to Microservices</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/from-service-oriented-architecture-to-microservices/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google Executive Shines Light on the Path to AI in the Enterprise</title>
		<link>https://www.aiuniverse.xyz/google-executive-shines-light-on-the-path-to-ai-in-the-enterprise/</link>
					<comments>https://www.aiuniverse.xyz/google-executive-shines-light-on-the-path-to-ai-in-the-enterprise/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 25 Sep 2019 11:25:13 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Digital Workplace]]></category>
		<category><![CDATA[eim]]></category>
		<category><![CDATA[emtech]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4579</guid>

					<description><![CDATA[<p>Source: cmswire.com Organizations deploying artificial intelligence (AI) in the enterprise should start with a small use case that solves a specific business problem and ties back to <a class="read-more-link" href="https://www.aiuniverse.xyz/google-executive-shines-light-on-the-path-to-ai-in-the-enterprise/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-executive-shines-light-on-the-path-to-ai-in-the-enterprise/">Google Executive Shines Light on the Path to AI in the Enterprise</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: cmswire.com</p>



<p>Organizations deploying artificial intelligence (AI) in the enterprise should start with a small use case that solves a specific business problem and ties back to the organization’s core values, according to a Google AI executive.</p>



<p>Tracy Frey, director of strategy for Google Cloud AI, shared these thoughts with the crowd at the MIT Technology Review’s EmTech conference last week at the Massachusetts Institute of Technology (MIT) in Cambridge, Mass. </p>



<p>“What I tell companies, and what I think is really important about this space, is that the most important thing is to start with a business problem,” Frey said. “Identify what the problem is that you&#8217;re trying to solve.”</p>



<h4 class="wp-block-heading">You’re Google, You Tell Us&nbsp;</h4>



<p>Frey gave attendees an inside look at how the search giant is living by its promise to be an AI-first company. But she also discussed problems she sees with organizations that want to leverage AI in the enterprise. Namely, the trouble begins at the starting gate: too often, they start in the wrong place.</p>



<p>“There&#8217;s an extraordinary amount of hype about AI in enterprises around the world,” Frey said. “And a lot of the experience that we have in Google Cloud AI is that companies come to us and they say, ‘We really, really, really want AI.’ And we say, ‘Great, we would love to help you. Tell us what problem you&#8217;re trying to solve, so that we know what products we can help you deploy.’ And usually the next thing that companies say is, ‘I don&#8217;t know. You’re Google. You tell us what we should be doing.’”</p>



<h4 class="wp-block-heading">Do You Know Your Organization’s Core Values?</h4>



<p>Naturally, pinning an entire project on a vendor is not healthy. AI projects should begin with knowing your organization’s core values and “cultural pillars,” according to Frey. Ensure you spend time identifying those, and understand how you want your company to operate.&nbsp;</p>



<p>“Because if you don&#8217;t start there, then if you start deploying things like AI and new technologies, you run that risk of everything being called into question,” Frey said. “Build your own principles, or whatever it is the process that speaks to you that feels like the right thing for your organization, and then identify one or a set of business problems. And start working with how AI can solve those business problems.”</p>



<h4 class="wp-block-heading">Talent, Change Management</h4>



<p>Frey likely recognizes she’s blessed to work in a company loaded with data scientists across the world and one that has its own AI Residency Program. She also recognizes that deploying AI in the enterprise is not only about technology and strategy but also about having talent and change management practices in place. Data scientists are out there, but it&#8217;s not exactly easy — nor cheap — to get good ones through your front door. IBM predicted increased demand for 700,000 more data scientists in the US by 2020, but talented data scientists &#8220;remain hard to find and expensive,&#8221; according to a report from IDG.</p>



<p>“AI has been around for a long time, but for the most part, enterprises that have been able to adopt AI are doing so because they have the ability to hire in top talent,” Frey said. “They are going to be likely only working on things that are really unique and customized to them and built in house and completely proprietary.”</p>



<p>That’s partly why it’s a “giant leap of faith” to invest in AI. It’s also “easy to underestimate the amount of change management that organizations should invest in when they are undertaking any AI project.” Given how much remains unknown in the space, organizations without a change-management program will face a wide range of reactions from employees as AI becomes part of their day-to-day work life.</p>



<h4 class="wp-block-heading">AI Needs to Be Built on Trust</h4>



<p>No discussion of AI comes without ethics. Google has its own AI Principles manifesto “because we fundamentally believe that you cannot have successful AI without being responsible and careful,&#8221; Frey said.</p>



<p>According to a Capgemini report, executives in nine out of 10 organizations believe that ethical issues have resulted from the use of AI systems over the last two to three years. Examples include the collection of personal patient data without consent in healthcare, and over-reliance on machine-led decisions without disclosure in banking and insurance.</p>



<p>Trust needs to be the foundation of any new type of technology, Frey said. Without it, there’s a “great risk of stopping progress” in making this incredibly beneficial technology available.</p>



<h4 class="wp-block-heading">Playing AI Defense</h4>



<p>No matter how organizations feel about how AI can advance enterprises, such machine learning deployments can leave organizations vulnerable without playing sound AI defense, according to Tim Grance, senior computer scientist at the National Institute of Standards and Technology (NIST). </p>



<p>NIST, a division of the US Department of Commerce, has its own take on AI technology development. Last month it released a plan for prioritizing federal agency engagement in the development of standards for AI. The plan recommends that the federal government “commit to deeper, consistent, long-term engagement” in activities to help the US &#8220;speed the pace of reliable, robust and trustworthy AI technology development.&#8221; </p>



<p>Organizations must be aware of potential vulnerabilities AI exposes in their enterprise and deploy an “attack and defend mentality.” Recognize, though, that once you fix something, people are going to try to do something else, according to Grance. “If you&#8217;re betting the enterprise on some particular solution especially around AI, you want to address those questions of can people attack the data on which the system is training?” Grance said. “Can they attack our assumptions? Does it give us a real business advantage that we can maintain?”</p>



<p>Grance cites high-quality data, having the right people in place, knowing your business problem and executive buy-in as key pillars of a sound machine learning strategy.</p>



<p>“Everybody thinks about bias and can you protect the system so there are not some unintended side effects that would cause problems,” Grance said. “AI is just another cold-hearted, hard business decision you have to make. Is putting in this much worth it?”</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-executive-shines-light-on-the-path-to-ai-in-the-enterprise/">Google Executive Shines Light on the Path to AI in the Enterprise</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-executive-shines-light-on-the-path-to-ai-in-the-enterprise/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
