<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>MANUFACTURING Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/manufacturing/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/manufacturing/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Tue, 06 Apr 2021 06:11:49 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>ARTIFICIAL INTELLIGENCE IN MANUFACTURING: TIME TO SCALE AND TIME TO ACCURACY</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-in-manufacturing-time-to-scale-and-time-to-accuracy/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-in-manufacturing-time-to-scale-and-time-to-accuracy/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 06 Apr 2021 06:11:48 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[ACCURACY]]></category>
		<category><![CDATA[MANUFACTURING]]></category>
		<category><![CDATA[scale]]></category>
		<category><![CDATA[TIME]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13970</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ Asset-intensive organizations are pursuing digital transformation to attain operational excellence, improve KPIs, and solve concrete issues in the production and supporting process areas. AI-based <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-in-manufacturing-time-to-scale-and-time-to-accuracy/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-in-manufacturing-time-to-scale-and-time-to-accuracy/">ARTIFICIAL INTELLIGENCE IN MANUFACTURING: TIME TO SCALE AND TIME TO ACCURACY</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<p>Asset-intensive organizations are pursuing digital transformation to attain operational excellence, improve KPIs, and solve concrete issues in the production and supporting process areas.</p>



<p>AI-based prediction models are particularly useful tools that can be deployed in complex production environments. Compared to common analytical tools, prediction models can more easily amplify correlations between different parameters in complicated production environments that generate large volumes of structured or unstructured data.</p>
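<p>As an illustration of the kind of relationship such models surface, here is a minimal Python sketch (all sensor names and readings are hypothetical) that ranks how strongly two process parameters track a defect rate:</p>

```python
# A minimal sketch, assuming hypothetical sensor streams: Pearson
# correlation between process parameters and a defect rate, the kind of
# relationship an AI prediction model would surface automatically.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical readings from one production line
temperature = [70, 72, 75, 74, 78, 80]            # degrees C
vibration = [0.20, 0.25, 0.30, 0.28, 0.40, 0.45]  # mm/s RMS
defect_rate = [1.0, 1.1, 1.4, 1.3, 1.9, 2.2]      # percent

print(round(pearson(temperature, defect_rate), 2))
print(round(pearson(vibration, defect_rate), 2))
```

<p>A real prediction model generalizes this idea across hundreds of parameters and nonlinear relationships, which is where it outperforms common analytical tools.</p>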



<p>My regular talks with executives of production-intensive organizations indicate that AI use is steadily rising. This is in line with IDC’s forecast that 70% of G2000 companies will use AI to develop guidance and insights for risk-based operational decision making by 2026. The figure is less than 5% today.</p>



<p>Please — do not be distracted by visions of a powerful “central brain” that can manage the entire organization. Typical, everyday use cases mostly leverage cognitive AI embedded in planning and scheduling tools, as well as in predictive models for quality and maintenance.</p>



<p>What delivers immediate value — and very reasonable ROI — are solutions that leverage AI-powered engines to recognize images, sounds, and numeric signals from vibrations, temperatures, and processes. We currently see most of these use cases in pilots or isolated implementations.</p>



<h3 class="wp-block-heading"><strong>Customized vs. Standardized AI-Powered Solutions</strong></h3>



<p>From a scalability perspective, there are two main groups of digital projects that leverage AI in production areas. Each delivers value. However, they each offer different time to scale and time to accuracy.</p>



<p><strong>Customized Solutions:</strong>&nbsp;AI-powered solutions based on complex learning processes are highly customized. They may leverage neural networks and deep learning for image recognition, or supervised learning to build predictive models.</p>



<p>It takes a relatively long time to fine-tune a solution to provide 90% accuracy. These are usually predictive solutions that model the behavior of material as it goes through a production process (e.g., breakage predictions for a paper belt or steel slab).</p>



<p>Gülsün Akhisaroglu, the global IT director for Hayat Holding, a hygiene and paper products manufacturer, told me: “It took us almost two years to achieve 90% accuracy.”</p>



<p>This sounds like industrial scalability could be a real challenge. However, for this project, the auto-learning mode was applied — substantially accelerating progress toward 99% accuracy.</p>



<p>Even in highly customized models, it may be difficult to find the root causes of problems. To resolve such issues, analysts and material engineers must use intelligent solutions that show when, how, and why problems occurred.</p>



<p>Said CIO Akhisaroglu: “We decided to evaluate deep learning algorithms to discover any meaningful patterns. We selected eight promising algorithms out of the 92 we analyzed.”</p>



<p>Engineers, developers, and data analysts have several digital and hardware tools and solutions at their disposal that are based on contemporary technologies. However, in many cases, these tools and solutions are inadequate. Production environments can be vastly different.</p>



<p>It is not a matter of simply capturing the right parameters and signals to improve the quality of outputs and the final accuracy of the model. Working conditions may vary as well. Different methods of maintaining, adjusting, and operating production assets may significantly impact the quality of model outputs. The journey toward perfection may be winding and rocky.</p>



<p>ROI, of course, must be extremely compelling. My experience tells me that fast solution prototyping is essential. A model’s functionality should be tested quickly, in a maximum of 3–4 weeks. Lead time between the start of development and deployment of a solution (getting accurate and reliable outputs) can take months due to the learning process and model adjustment.</p>



<p>This is why the ideal production type for this kind of deployment is a highly asset-intensive environment — where a single stoppage can cause millions of dollars in damage.</p>



<p><strong>Standardized Solutions:</strong>&nbsp;These are refined, highly scalable solutions based mostly on image-recognition principles. The accuracy of the final output strongly depends on the number of anomaly samples: The more samples, the more accurate the model.</p>



<p>For basic quality control tasks, it may take 4–6 not-OK (“NOK”) samples to train a system driven by a camera positioned on the production line. This is fully sufficient in high-speed production. Theoretically, such a solution may even provide 99.99% accuracy. However, real life shows this rather theoretical value is reached only in simple quality inspection tasks.</p>
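<p>A toy sketch of this few-sample regime (all feature vectors below are hypothetical): a part is flagged NOK when its features lie nearer to one of a handful of known NOK samples than to the OK reference set:</p>

```python
# A toy sketch with hypothetical feature vectors: a part is judged NOK
# when its features sit closer to one of a handful of known NOK samples
# than to any sample in the OK reference set.
from math import dist

OK_SAMPLES = [(0.10, 0.20), (0.15, 0.18), (0.12, 0.22)]
NOK_SAMPLES = [(0.80, 0.90), (0.75, 0.85), (0.90, 0.80), (0.85, 0.95)]

def is_nok(features):
    """Nearest-neighbor check against the two reference sets."""
    d_ok = min(dist(features, s) for s in OK_SAMPLES)
    d_nok = min(dist(features, s) for s in NOK_SAMPLES)
    return d_nok < d_ok

print(is_nok((0.82, 0.88)))  # near the NOK cluster: True
print(is_nok((0.13, 0.20)))  # near the OK cluster: False
```

<p>Production systems extract such features from camera images rather than taking them as given, but the accuracy-versus-sample-count tradeoff works the same way.</p>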



<p>Size and surface integrity play a big role in whether such solutions can be effectively utilized. The smaller and simpler the part, the more effective the control outputs.</p>



<p>Solutions that leverage AI-powered tracking and analysis of each assembly step, including cycle-time analysis, seem very promising. Such solutions can identify production anomalies and bottlenecks, improving throughput by tens of percent.</p>
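<p>The bottleneck side of such cycle-time analysis can be sketched in a few lines of Python (step names and timings are hypothetical):</p>

```python
# A toy sketch with hypothetical timings: flag the assembly step whose
# observed cycle time has drifted furthest above its baseline, the kind
# of bottleneck AI-powered tracking tools surface automatically.
def bottleneck(baseline, observed):
    """Return the step with the largest relative slowdown versus baseline."""
    slowdown = {step: observed[step] / baseline[step] for step in baseline}
    return max(slowdown, key=slowdown.get)

baseline = {"pick": 4.0, "fasten": 9.0, "inspect": 5.0}   # seconds per step
observed = {"pick": 4.2, "fasten": 14.5, "inspect": 5.1}  # today's averages
print(bottleneck(baseline, observed))  # fasten is the bottleneck
```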



<p>They can also significantly speed the discovery of quality issues — in some cases reducing discovery time to minutes. Standardized solutions may easily achieve an ROI target of 1–2 years. Time to Scale and Time to Accuracy may be as little as days — or even hours.</p>



<h3 class="wp-block-heading"><strong>Don’t Waste Time&nbsp;</strong><strong>—</strong><strong>&nbsp;Just Start!</strong></h3>



<p>Enterprises should have realistic expectations about leveraging AI in production, quality control, and maintenance. AI is not a miracle drug that solves every emergency. You can forget about a “supreme power” that may turn against you, like in some “war of the robots” novel.</p>



<p>AI can, however, provide a solid range of use cases. Your focus should be on what can be achieved by AI-powered solutions — and how much effort and money you can invest in them.</p>



<p>In many situations, the benefit is not only obvious KPIs (e.g., production line availability or overall equipment efficiency) — it is also secondary impacts that improve sustainability and quality, resolve problems in the production process, and boost customer satisfaction.</p>



<p>As always, the creation of digital silos must be avoided. To unlock the full power of data, AI-powered models must be integrated with enterprise systems like manufacturing execution systems (MES), ERP, and advanced analytics tools. Data can be analyzed in more than one area and contextualized. Different analytics solutions can be combined to squeeze out unexpected insights.</p>



<p>Start now!</p>



<p>As you push forward, however, do not underestimate organizational, technology, and management support. CIO Akhisaroglu offers this final insight: “Looking back, we lost a lot of time in starting these pilot projects. We should have started earlier and been more proactive in collecting data from all available problem-related resources. We faced many challenges with servers, databases, and processes, but it is clear that we worked harmoniously to meet our needs quickly and effectively.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-in-manufacturing-time-to-scale-and-time-to-accuracy/">ARTIFICIAL INTELLIGENCE IN MANUFACTURING: TIME TO SCALE AND TIME TO ACCURACY</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-in-manufacturing-time-to-scale-and-time-to-accuracy/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The rise of advanced robotics in industrial manufacturing</title>
		<link>https://www.aiuniverse.xyz/the-rise-of-advanced-robotics-in-industrial-manufacturing/</link>
					<comments>https://www.aiuniverse.xyz/the-rise-of-advanced-robotics-in-industrial-manufacturing/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 21 Sep 2020 07:35:50 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[5G connectivity]]></category>
		<category><![CDATA[Automation]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[MANUFACTURING]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11696</guid>

					<description><![CDATA[<p>Source: manufacturingglobal.com Customization and flexibility are two of the hottest words in industrial manufacturing right now. Customers want something made just for them, whether it is a <a class="read-more-link" href="https://www.aiuniverse.xyz/the-rise-of-advanced-robotics-in-industrial-manufacturing/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-rise-of-advanced-robotics-in-industrial-manufacturing/">The rise of advanced robotics in industrial manufacturing</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: manufacturingglobal.com</p>



<p>Customization and flexibility are two of the hottest words in industrial manufacturing right now. Customers want something made just for them, whether it is a personalized aftershave with their name on the bottle, a vehicle with all the features they need and none they don’t, or a new phone with the latest radio antenna for 5G connectivity. All this customization leads to one conclusion – manufacturing is moving towards high-mix production and making millions of different products in very small lots.&nbsp;</p>



<p>At the same time, many products made today are far too complicated for established automation technologies alone, forcing manufacturers to augment traditional robotics with manual assembly by human laborers. People are valued for their ability to understand and account for changes in a process very quickly. But what if this flexibility were included in automated processes?</p>



<p>A flexible and automated (even autonomous) production system is the Holy Grail for many manufacturers wishing to overcome the challenge of growing product complexity and simultaneously meet demands for greater customization. The ability to rapidly switch production from one product to another will be a defining feature of businesses on the path to lot sizes of one and the highly customizable products of tomorrow.</p>



<p>Small lot sizes are not inherently a problem, but current production processes cannot easily accommodate this without large investments in an increasingly complex infrastructure. To avoid this problem of exponential investments, which may or may not solve the problem, many businesses are looking for a more flexible approach to production. How can manufacturers make multiple products efficiently with minimal changes to the production floor between products?</p>



<p>Advanced robotics is the answer, and many companies are already on the path to adoption.&nbsp;</p>



<p><strong>The Advanced Robotics Journey</strong></p>



<p>Many factory floors rely on conveyor belt networks to transport everything from raw material to final products. But these networks were not designed to handle thousands of different products going to the constantly changing locations needed in a multi-product manufacturing process. What if a conveyor system could change? Perhaps change paths to avoid congested areas in a factory? Or change destinations to deliver a work piece to the optimal machining station?&nbsp;</p>



<p>These are the kinds of problems advanced robotics solves with the use of automated guided vehicles (AGVs) and autonomous mobile robots (AMRs) in tandem with an advanced software, solutions and application development platform.</p>



<p>Typically, the goal of using the robots is to deliver material from point A to point B with relative ease. But it is not as simple as just introducing AGVs or AMRs to a facility. Much of the investment value comes from the optimization and coordination of the advanced robotic technologies. In our experience, helping companies adopt advanced robotics into their manufacturing processes is a four-stage journey.&nbsp;</p>



<p>Stage one, or the&nbsp;<strong>Entrant</strong>&nbsp;stage, is defined by the use of fixed automation robotics or similar technologies where most operations are programmed manually. All process planning is done by a human, possibly with the aid of software, and the tasks are then assigned to specific robots to function at specific locations and times. This approach works well when producing high volumes, when changes or modifications to a production line are kept to a minimum. Since every action of a robot is explicitly specified at this stage, the robot must be taken off-line when changes are required and manually reprogrammed. This negatively impacts production times.&nbsp;</p>



<p>The second stage is for&nbsp;<strong>Veterans</strong>&nbsp;and is the most common stage today for industrial manufacturers. It is characterized by the use of the digital twin for complete system validation and for building control algorithms for the entire production line. Utilizing a digital twin of manufacturing provides deep insights on how to proceed in later stages of the journey by enabling the simulation of the entire facility. Significant productivity improvements at this stage can be achieved from concurrently updating multiple robots running off the same programmable logic controllers, reducing downtime on the production floor.</p>



<p>Progressing into the third or&nbsp;<strong>Pioneer</strong>&nbsp;stage, manufacturers can start automating more of the production process. Built on top of the insights learned from the digital twin and augmented with feedback from IoT sensors, task-based programming can be implemented for robots throughout the facility. This greatly reduces the time needed to program robots to accommodate a design or process change. Simple commands can be used to automatically adjust the robot based on a closed-loop calibration between the physical environment and the digital twin.</p>



<p>The final stage, called the&nbsp;<strong>Visionary</strong>&nbsp;stage, is where advanced robotics initiatives become highly autonomous, delivering near complete autonomy of the robots. This is also where AGVs and AMRs become highly effective, replacing static conveyor belts and a linear process path with advanced, mobile robotics. Now production changes can almost be as simple as inputting the number of products required and how many variations are needed. From that information, the system will determine the optimal path of how to produce the desired lot.&nbsp;</p>



<p>Software now determines how many parts are needed from storage room B, for instance, or which machining station will be able to ramp up the fastest to produce the lot. And, if the primary choice is down for maintenance, it identifies the next-best option. The limits of this do not end at the factory walls. The benefits extend beyond to include suppliers and distributors, helping produce the most efficient workload for the factory.</p>
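<p>The fallback logic described above can be sketched as a simple availability-aware selection (station names and ramp-up times are hypothetical):</p>

```python
# A minimal sketch with hypothetical stations: choose the machining
# station that ramps up fastest for a lot, falling back automatically
# when the first choice is down for maintenance.
def pick_station(stations):
    """stations: list of (name, ramp_up_minutes, available) tuples."""
    candidates = [(ramp, name) for name, ramp, up in stations if up]
    if not candidates:
        raise RuntimeError("no machining station available")
    return min(candidates)[1]

stations = [
    ("mill-A", 12, False),  # primary choice, down for maintenance
    ("mill-B", 20, True),
    ("mill-C", 15, True),
]
print(pick_station(stations))  # mill-C ramps up fastest among available
```

<p>Real scheduling software optimizes over many more constraints (material availability, supplier lead times, workload balance), but the decision pattern is the same.</p>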



<p>The Visionary stage is the optimal point for implementing AGVs and AMRs, due to complete factory simulations. But advanced robotics can be brought in at earlier stages to execute tasks simpler than production scheduling. Some companies have adopted AGVs and AMRs as semi-autonomous picking carts for warehouses, where the robot follows and assists a human worker.&nbsp;</p>



<p>Depending on how the factory is run, there are nearly infinite ways to optimize the facility. That is why the investment in a comprehensive digital twin is so important for this journey. It allows deeper insights into how a factory is running, helping businesses invest confidently in their future. Advanced robotics is part of Siemens’ Xcelerator portfolio of software, solutions and application development platform, where today meets tomorrow for industrial manufacturing.&nbsp;</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-rise-of-advanced-robotics-in-industrial-manufacturing/">The rise of advanced robotics in industrial manufacturing</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-rise-of-advanced-robotics-in-industrial-manufacturing/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>GLOBAL HE-NE LASER MARKET 2020 VARIOUS MANUFACTURING INDUSTRIES</title>
		<link>https://www.aiuniverse.xyz/global-he-ne-laser-market-2020-various-manufacturing-industries/</link>
					<comments>https://www.aiuniverse.xyz/global-he-ne-laser-market-2020-various-manufacturing-industries/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 20 Aug 2020 11:15:22 +0000</pubDate>
				<category><![CDATA[mechatronics]]></category>
		<category><![CDATA[Geographically]]></category>
		<category><![CDATA[global]]></category>
		<category><![CDATA[industries]]></category>
		<category><![CDATA[MANUFACTURING]]></category>
		<category><![CDATA[Market]]></category>
		<category><![CDATA[Revenue]]></category>
		<category><![CDATA[SWOT]]></category>
		<category><![CDATA[Thorlabs]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11086</guid>

					<description><![CDATA[<p>Source:-clarkscarlet The global He-Ne Laser market report offers complete overview of various aspects. The report includes the major market conditions across the globe such as the product <a class="read-more-link" href="https://www.aiuniverse.xyz/global-he-ne-laser-market-2020-various-manufacturing-industries/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/global-he-ne-laser-market-2020-various-manufacturing-industries/">GLOBAL HE-NE LASER MARKET 2020 VARIOUS MANUFACTURING INDUSTRIES</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source:-clarkscarlet</p>



<p>The global He-Ne Laser market report offers a complete overview of various aspects of the market. The report covers the major market conditions across the globe, such as product profit, price, production, capacity, demand, and supply, as well as the market growth structure. In addition, it offers significant data through a SWOT analysis, Porter’s five forces analysis, investment return data, and an investment feasibility analysis. The study is a major compilation of significant information on the competitors in this market.</p>



<p><strong>Our analysts have surveyed the market with reference to inventories and data provided by the key players:</strong></p>



<p>Olympus<br>Thorlabs<br>Holmarc Opto-Mechatronics<br>RP Photonics<br>LASOS<br>IDEX Health &amp; Science<br>Lumentum Operations<br>PHYWE<br>CrystaLaser<br>Photonic Solutions<br>REO<br>Neoark</p>



<p>The global He-Ne Laser market report offers deep insights into various vital aspects of the market. It provides a summarized study of the factors encouraging market growth, such as manufacturers, market size, types, applications, and regions. To assess the market size, the study offers a precise analysis of the supplier landscape as well as a detailed study of the manufacturers operating in the He-Ne Laser market. In the past few years, owing to new innovations and strategic ideas, the He-Ne Laser market has recorded significant development and is anticipated to rise further over the forecast period. The report also covers the several regions where the global He-Ne Laser market has successfully gained a position.</p>



<p>The He-Ne Laser market report offers an in-depth analysis of the cost structure and market size, along with a PESTEL analysis that provides the market outlook. It focuses on the major economies across the globe. Geographically, it covers the key regions and countries along with their revenue analysis. Using the report, readers can identify the key dynamics that govern the market and hold an effective impact on it. The report also describes the several types of the He-Ne Laser market, and includes primary and secondary drivers, leading segments, market share, and the geographical landscape. A separate analysis of the major trends in the existing market, mandates and regulations, and micro- and macroeconomic indicators is also included. On this basis, the study estimates the attractiveness of every major segment over the prediction period.</p>



<p><strong>Global He-Ne Laser Market Split by Product Type and Applications:</strong></p>



<p><strong>On the basis of Types:</strong></p>



<p>He:Ne 5:1–8:1<br>He:Ne 8:1–15:1<br>He:Ne 15:1–20:1<br>Other</p>

<p><strong>On the basis of Application:</strong></p>



<p>Scientific Use<br>Commercial Use<br>Industrial Use<br>Military Use</p>

<p>Geographically, the detailed analysis of consumption, revenue, He-Ne Laser market share and growth rate, historic and forecast (2015–2026), covers the following regions:</p>



<p>– North America (USA, Canada and Mexico)<br>– Europe (Germany, France, UK, Russia and Italy)<br>– Asia-Pacific (China, Japan, Korea, India and Southeast Asia)<br>– South America (Brazil, Argentina, Columbia etc.)<br>– Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria and South Africa)</p>



<p>The research report on the global He-Ne Laser market comprises significant insights for clients and vendors looking to maintain their market position and expand their business in current and upcoming market scenarios. Furthermore, the report provides a detailed study of the facts and figures as readers search for growth opportunities related to each product category.</p>



<p>The global He-Ne Laser market report is built on detailed qualitative insights, verifiable projections, and historical data about the He-Ne Laser market size. It evaluates the market growth rate and industry value on the basis of growth-inducing factors, market dynamics, and other related data. Furthermore, the report covers all government rules and regulations that are likely to impact market dynamics across the globe, as well as the initiatives that governments, policymakers, and other regulatory associations are taking to promote the He-Ne Laser market. Hence, the study is beneficial for researchers, financial experts, and other organizations.</p>
<p>The post <a href="https://www.aiuniverse.xyz/global-he-ne-laser-market-2020-various-manufacturing-industries/">GLOBAL HE-NE LASER MARKET 2020 VARIOUS MANUFACTURING INDUSTRIES</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/global-he-ne-laser-market-2020-various-manufacturing-industries/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>MANUFACTURING ROBOTS AND COBOTS TO STEADFAST INDUSTRY 4.0</title>
		<link>https://www.aiuniverse.xyz/manufacturing-robots-and-cobots-to-steadfast-industry-4-0/</link>
					<comments>https://www.aiuniverse.xyz/manufacturing-robots-and-cobots-to-steadfast-industry-4-0/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 13 Jul 2020 05:16:14 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Articulated Robots]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[autonomous]]></category>
		<category><![CDATA[Delta Robots]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[MANUFACTURING]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10124</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net Robots have been deployed in manufacturing to fill several rule-based operations. Fully autonomous robots in manufacturing are utilized for high-volume, repetitive processes work which demands <a class="read-more-link" href="https://www.aiuniverse.xyz/manufacturing-robots-and-cobots-to-steadfast-industry-4-0/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/manufacturing-robots-and-cobots-to-steadfast-industry-4-0/">MANUFACTURING ROBOTS AND COBOTS TO STEADFAST INDUSTRY 4.0</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<p>Robots have been deployed in manufacturing to fill several rule-based operations. Fully autonomous robots in manufacturing are utilized for high-volume, repetitive process work that demands speed and accuracy in lifting, holding, and moving heavy pieces. By automating rule-based repetitive tasks, manufacturing robots enable the human workforce to shift its focus to more productive and critical areas of operations while reducing error margins to negligible rates.</p>



<p>Here are the manufacturing robots and cobots (collaborative robots that work alongside human beings) driving Industry 4.0:</p>



<h4 class="wp-block-heading"><strong>Articulated Robots</strong></h4>



<p>Articulated robots range from simple two-jointed structures to systems with 10 or more interacting joints. They are powered by a variety of means, including electric motors, and are utilised for pick-and-place, dispensing, packaging, assembly, and welding activities. Their multiple points of rotation give some devices as many as seven degrees of freedom. Articulated robots can move around obstacles that may block other types of robots and are most commonly used in assembly factories.</p>



<h4 class="wp-block-heading"><strong>Delta Robots</strong></h4>



<p>Delta robots, also known as parallel or spider robots, have force and collision-detection sensors and use the force sensors for intricate assembly applications. High-speed delta robots are deployed in the packaging, medical, and pharmaceutical industries. Owing to their stiffness, delta robots are also used for surgery, high-precision assembly of electronic components, and 3D printing.</p>



<h4 class="wp-block-heading"><strong>Cartesian Robots</strong></h4>



<p>The use of Cartesian robots, in particular, is gaining prominence thanks to their standardized components and operator-friendly controls, which lower cost and boost performance. Cartesian robots, also called gantry robots, are mechatronic devices that use motors and linear actuators to position a tool.</p>



<p>Cartesian robots can be used for pick-and-place, assembly, and even dispensing of materials such as adhesives. Cartesian-robot movements stay within the framework’s confines, but the framework can be mounted horizontally or vertically, or even overhead in certain gantry configurations.</p>



<h4 class="wp-block-heading"><strong>COBOTs (Collaborative Robots)</strong></h4>



<p>The International Federation of Robotics (IFR) defines collaborative industrial robots (cobots) as robots designed to perform tasks together with humans in industrial settings, distinguishing four categories of collaboration. In the first, human and robot work in different physical workspaces without any human-robot contact or synchronization.</p>



<p>Sequential cobot collaboration occurs when there is an intersection between the human’s and the robot’s workspaces. Cobot cooperation occurs when humans and robots work on the same part at the same time, while in responsive cobot collaboration, the robot responds in real time to the human’s movements.</p>



<h4 class="wp-block-heading"><strong>SCARA Robots</strong></h4>



<p>SCARA is an acronym for Selective Compliance Articulated Robot Arm, designed to handle a variety of material handling operations. SCARA was invented in 1978 by Professor Hiroshi Makino at Yamanashi University in Japan. SCARA robots were designed for assembly applications and they have been used in industrial assembly lines since 1981.</p>



<p>Due to their selective compliance, SCARAs are less rigid than Cartesian or gantry robots. However, thanks to their rigid Z-axis, they are more rigid than both six-axis robots and delta robots, and they are generally faster than six-axis robots. The payload of SCARAs is generally quite low, but it is higher than that of delta robots, which can typically lift between 0.3 and 8 kg. SCARAs are very well suited to high-speed assembly applications.</p>



<h4 class="wp-block-heading"><strong>Robotics in the Future</strong></h4>



<p>With rapid advancements in artificial intelligence, machine learning, and augmented reality, the next generation of robots powered by AI and automation will usher in disruptive innovation. Research estimates forecast that by 2050, drones will be commonplace in homes, helping with daily chores such as cleaning and gaming.</p>



<p>Are we ready to embrace a scenario where robots do most of the rule-based manual tasks not just in factories but in our homes, perhaps even demanding rights equal to our own?</p>
<p>The post <a href="https://www.aiuniverse.xyz/manufacturing-robots-and-cobots-to-steadfast-industry-4-0/">MANUFACTURING ROBOTS AND COBOTS TO STEADFAST INDUSTRY 4.0</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/manufacturing-robots-and-cobots-to-steadfast-industry-4-0/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Four Innovations Taking Autonomous Vehicle AI to the Next Level</title>
		<link>https://www.aiuniverse.xyz/four-innovations-taking-autonomous-vehicle-ai-to-the-next-level/</link>
					<comments>https://www.aiuniverse.xyz/four-innovations-taking-autonomous-vehicle-ai-to-the-next-level/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 08 Jul 2020 06:01:17 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[autonomous]]></category>
		<category><![CDATA[MANUFACTURING]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10043</guid>

					<description><![CDATA[<p>Source: enterpriseai.news Autonomous vehicles are developed with a wide range of self-driving capabilities. Some vehicles provide basic automation, like cruise control and blind-spot detection, while other vehicles <a class="read-more-link" href="https://www.aiuniverse.xyz/four-innovations-taking-autonomous-vehicle-ai-to-the-next-level/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/four-innovations-taking-autonomous-vehicle-ai-to-the-next-level/">Four Innovations Taking Autonomous Vehicle AI to the Next Level</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: enterpriseai.news</p>



<p>Autonomous vehicles are developed with a wide range of self-driving capabilities. Some vehicles provide basic automation, like cruise control and blind-spot detection, while other vehicles are reaching fully-autonomous capabilities. Many of these capabilities are being made possible by AI technology.&nbsp;</p>



<p>However, before large-scale deployments for smart-city transportation can be seriously discussed, more work is needed to improve the AI algorithms and mapping features powering autonomous vehicles. This article reviews innovations in autonomous-vehicle AI and mapping that could help secure a future in which autonomous vehicles are deployed city-wide.</p>



<p><strong>1. Deep Reinforcement Learning (DRL)</strong></p>



<p>Multiple types of machine learning are being applied to the development of autonomous vehicles, including DRL. This method combines the strategies of deep learning and reinforcement learning in an attempt to better automate the training of algorithms.&nbsp;</p>



<p>When implementing DRL, researchers use reward functions to guide software-defined agents toward a specific goal. Throughout training, these agents learn either how to attain that goal or how to maximize the reward over subsequent steps.&nbsp;</p>
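<p>A minimal sketch of this reward-driven loop, using a hypothetical one-dimensional &#8220;corridor&#8221; task with tabular Q-learning (deep RL replaces the table below with a neural network, but the reward-maximization logic is the same):</p>

```python
import random

# Toy illustration of reward-guided learning: the agent must reach the
# rightmost cell of a 1-D corridor (the goal the reward function encodes).
random.seed(0)
N_STATES, ACTIONS = 5, [-1, +1]          # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0   # reward guides the agent
    return nxt, reward

for _ in range(500):                     # training episodes
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        # reinforce actions that maximize reward over subsequent steps
        Q[(s, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in ACTIONS) - Q[(s, a)])
        s = nxt

# the learned policy: in every non-goal state, move right toward the reward
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
```

The same update rule scales up once the Q-table is replaced by a function approximator trained on driving data.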



<p>With the help of data collected from current autonomous vehicles, human drivers, and manufacturers, these agents can eventually be trained to operate independently. In the meantime, DRL has useful applications in lower-level vehicle automation. It can also provide value in vehicle manufacturing, where it can be applied to transform factory automation and vehicle maintenance.</p>



<p><strong>2. Path Planning</strong></p>



<p>Path planning is the decision-making process that autonomous vehicles use to determine safe, convenient, and economical routes. It requires taking into account street configurations, static and dynamic obstacles, and changing conditions. Currently, path planning is based on the combination of behavior-based models, feasible models, and predictive control models.&nbsp;</p>



<p>The process occurs roughly as follows:</p>



<ul class="wp-block-list"><li>The route planning mechanism determines a route from point A to B according to available roads or lanes.</li><li>A behavioral layer is then applied to determine vehicle movement according to environmental variables, such as traffic or weather conditions.</li><li>These determinations are applied to feasible and predictive control models to guide the operation of the vehicle.</li><li>As the trip progresses, feedback from sensors and analyses is fed to these components so adjustments can be made in real-time to adjust for errors or unforeseen events.</li></ul>
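<p>As a hypothetical sketch (road names, base speeds, and condition factors below are illustrative only), the layered flow above might look like:</p>

```python
# Layered path-planning sketch: route layer -> behavioral layer -> control loop.

def plan_route(graph, start, goal):
    """Route layer: breadth-first search over the available roads (A to B)."""
    frontier, seen = [[start]], {start}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

def behavioral_speed(base_kmh, traffic=1.0, weather=1.0):
    """Behavioral layer: scale the target speed by environmental variables."""
    return base_kmh * min(traffic, weather)

def control_step(target, current, gain=0.3):
    """Feedback-control stand-in: close part of the speed error each tick."""
    return current + gain * (target - current)

roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
route = plan_route(roads, "A", "D")          # shortest road sequence
target = behavioral_speed(50.0, traffic=0.8) # heavy traffic lowers the target
speed = 0.0
for _ in range(20):                          # real-time adjustment loop
    speed = control_step(target, speed)      # converges toward the target
```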



<p>In the above process, the relatively easy part is predicting how the vehicle itself will behave under given conditions. Far more challenging is predicting what might happen in the environment the vehicle is operating in. For example, how can models predict when a neighboring vehicle will swerve, or when a pedestrian will step into the street?&nbsp;</p>



<p>To improve these predictions, researchers are applying multi-model algorithms to simulate possible trajectories and speeds of objects. These models enable the autonomous system to prepare for multiple scenarios simultaneously. Then based on evaluated probabilities of each scenario occurring, the system can define how the vehicle responds.</p>
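<p>A toy sketch of that evaluation step: candidate responses are scored against every simulated scenario, weighted by the scenario&#8217;s estimated probability, and the lowest expected risk wins (all labels and numbers below are invented for illustration):</p>

```python
# Hypothetical multi-scenario evaluation: each candidate response is scored
# against every predicted scenario, weighted by that scenario's probability.

scenarios = [                       # (probability, description)
    (0.70, "neighboring vehicle keeps its lane"),
    (0.25, "neighboring vehicle swerves in"),
    (0.05, "pedestrian enters the street"),
]

risk = {                            # risk[response][i]: lower is safer
    "maintain_speed": [0.0, 0.9, 1.0],
    "slow_down":      [0.1, 0.2, 0.3],
    "change_lane":    [0.2, 0.6, 0.8],
}

def expected_risk(response):
    return sum(p * r for (p, _), r in zip(scenarios, risk[response]))

# the system prepares for all scenarios and picks the safest response
best = min(risk, key=expected_risk)
```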



<p><strong>3. SLAM</strong></p>



<p>Simultaneous localization and mapping (SLAM) is a technology used to orient vehicles to their surroundings in real time. While still in its early stages, this technology could eventually enable vehicles to operate autonomously in areas where maps are unavailable or where available maps are incorrect.&nbsp;</p>



<p>What makes this technology so challenging to implement is that currently, mapping is based on first knowing an object’s orientation. However, orientation is typically determined by comparing sensor data to pre-existing maps of surroundings. This dual reliance makes it difficult to achieve either goal when landmark information is unknown.</p>



<p>One of the ways this problem is overcome is by incorporating a rough map, based on GPS data, which is then refined as a vehicle moves through an environment. This requires vehicle sensors that constantly measure the environment and apply careful calculations to correct for vehicle movement and sensor accuracy.</p>



<p>An example of SLAM applications can be seen in Google&#8217;s autonomous vehicle used for generating Google Maps data. This vehicle uses a roof-mounted lidar (light detection and ranging) assembly to measure its surroundings. </p>



<p>Measurements are taken up to 10 times a second, depending on how fast the vehicle is moving. The collected data is then passed through an array of statistical models, including Bayesian filters and Monte Carlo simulations, to improve existing maps accurately.</p>
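<p>The filtering idea can be sketched with a hypothetical one-dimensional particle filter: a rough prior (standing in for GPS) is repeatedly refined by weighting and resampling particles against noisy sensor readings. This is a deliberately simplified stand-in for the Bayesian/Monte Carlo machinery mentioned above, using a known landmark rather than a jointly estimated map:</p>

```python
import math
import random

# 1-D Monte Carlo localization sketch: a vehicle at x=4 on a line,
# a rough prior over [0, 20], and a noisy signed-offset sensor reading
# to a landmark at x=10. All numbers are illustrative.
random.seed(0)
LANDMARK, TRUE_X, SENSOR_SD = 10.0, 4.0, 0.5

particles = [random.uniform(0.0, 20.0) for _ in range(2000)]  # rough GPS-style prior

def likelihood(particle, z):
    # Gaussian likelihood of the measured offset given the particle position
    err = (LANDMARK - particle) - z
    return math.exp(-0.5 * (err / SENSOR_SD) ** 2)

for _ in range(10):                                    # measure/update cycles
    z = (LANDMARK - TRUE_X) + random.gauss(0.0, SENSOR_SD)   # noisy sensor
    weights = [likelihood(p, z) for p in particles]
    # resample: particles consistent with the sensor survive (Monte Carlo step)
    particles = random.choices(particles, weights=weights, k=len(particles))
    particles = [p + random.gauss(0.0, 0.1) for p in particles]  # motion jitter

estimate = sum(particles) / len(particles)   # refined position, near TRUE_X
```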



<p><strong>4. HD Maps</strong></p>



<p>High-definition (HD) maps are maps that include minute environmental details, often down to centimeter scale. They capture the details that human drivers can see and interpret in real time while driving, but which autonomous vehicles need ahead of time: lane markings, curve angles, road boundaries, and pavement gradients, for example.&nbsp;</p>



<p>The level of detail provided by HD maps helps autonomous vehicles more accurately predict behavior and enables more accurate direction. This doesn’t eliminate the need to evaluate environmental changes in real-time. However, it does lighten the load of how thoroughly sensor data must be processed and analyzed.&nbsp;</p>



<p><strong>Conclusion</strong></p>



<p>AI algorithms are just one of the components needed to power fully autonomous vehicles. Growth is also driven by the integration of higher-quality data, such as data collected from advanced sensors or derived from more accurate maps. While deep learning models have greatly improved autonomous-vehicle AI, these vehicles still face many challenges that must be addressed before true maturity can be achieved.</p>
<p>The post <a href="https://www.aiuniverse.xyz/four-innovations-taking-autonomous-vehicle-ai-to-the-next-level/">Four Innovations Taking Autonomous Vehicle AI to the Next Level</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/four-innovations-taking-autonomous-vehicle-ai-to-the-next-level/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI Enhanced Robotics and The Future of Manufacturing</title>
		<link>https://www.aiuniverse.xyz/ai-enhanced-robotics-and-the-future-of-manufacturing/</link>
					<comments>https://www.aiuniverse.xyz/ai-enhanced-robotics-and-the-future-of-manufacturing/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 17 Jun 2020 06:12:29 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Future]]></category>
		<category><![CDATA[MANUFACTURING]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9576</guid>

					<description><![CDATA[<p>Source: metrology.news In today’s manufacturing, robots deployed across various industries are mostly doing repetitive tasks. The robots’ overall task performance hinges on the accuracy of their controllers <a class="read-more-link" href="https://www.aiuniverse.xyz/ai-enhanced-robotics-and-the-future-of-manufacturing/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-enhanced-robotics-and-the-future-of-manufacturing/">AI Enhanced Robotics and The Future of Manufacturing</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: metrology.news</p>



<p>In today&#8217;s manufacturing, robots deployed across various industries mostly perform repetitive tasks. A robot&#8217;s overall task performance hinges on the accuracy with which its controller tracks predefined motions. Robots&#8217; ability to handle unstructured, complex environments, such as flexibly grasping previously unknown objects or assembling new components, is very limited. Endowing machines with greater intelligence, so that they can acquire skills autonomously and generalize to unseen situations, would be a game-changer for quite a few industry sectors.</p>



<p>The main challenge to robot evolution is twofold: the need to design adaptable yet robust control algorithms that can address all possible system behaviors, and the necessity of &#8216;behavior generalization,&#8217; i.e., the ability to react to unforeseen situations. Two forms of artificial intelligence, Deep Learning and Reinforcement Learning (which can itself use Deep Learning), hold notable promise for solving these challenges because they enable robots in manufacturing systems to deal with uncertainties, to learn behaviors through interaction with their surrounding environments, and, ideally, to generalize to new situations. Let&#8217;s take a look at how Deep Learning and Reinforcement Learning play key roles in the aforementioned use cases: flexible picking and the assembly of new components.</p>



<p><strong>Flexible grasping made possible through Deep Learning</strong></p>



<p>Humans are equipped with a universal picking skill. Even if they encounter an object that they have never seen or grasped before, they will immediately know where to grasp the object in order to successfully lift it. Robots in today’s manufacturing must be explicitly programmed so that they can approach a predefined grasp pose and execute the grasp. This requires the objects to be grasped to be always in the same position and orientation (think of an assembly line). The challenge to programmers is finding a way to get robots to grasp an unknown object at any orientation. This is where Deep Learning comes in.</p>



<p>Deep Learning operates through artificial neural networks&#8212;large, non-linear function approximators that are loosely inspired by the human brain. State-of-the-art neural networks have millions of parameters. Using a dataset of input-output relationships, these parameters can be set so that the neural network predicts the correct output for a given input.</p>



<p>This is how Deep Learning can be applied to grasping. Instead of programming the robot on&nbsp;<em>how</em>&nbsp;to grasp, the programmers provide the robot, via the neural networks,&nbsp;<em>examples</em>&nbsp;of grasping. The robot training data consists of images or models of various objects as well as how to grasp them. Given a database of millions of such examples, the neural network learns how to compute grasps for any given image of an object. These examples can be conveniently created in simulation. The robot masters the skill of grasping without executing a single grasp in the real-world.</p>
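<p>The learn-from-simulated-examples idea can be sketched with a deliberately tiny stand-in model: a linear predictor fitted by gradient descent. The object encoding and grasp labels below are invented for illustration; a real system would train a deep network on images or object models:</p>

```python
import random

# Hypothetical learn-by-example sketch: "simulated" objects are described
# by (left edge x, width w) and the label is a good grasp point (here the
# object's centre, supplied by the simulator). A tiny linear model stands
# in for the millions-of-parameter networks described in the article.
random.seed(1)

def simulate_example():
    x, w = random.uniform(0.0, 10.0), random.uniform(1.0, 3.0)
    return (x, w), x + w / 2.0        # features, ground-truth grasp point

data = [simulate_example() for _ in range(1000)]

# model: grasp = a*x + b*w + c, fitted by stochastic gradient descent
a = b = c = 0.0
lr = 0.005
for _ in range(200):                  # passes over the simulated dataset
    for (x, w), target in data:
        err = a * x + b * w + c - target
        a -= lr * err * x
        b -= lr * err * w
        c -= lr * err

# the trained model grasps an object it never saw during training
pred = a * 7.0 + b * 2.0 + c          # true grasp point for (7, 2) is 8.0
```

The key point the sketch preserves is that no grasp was ever explicitly programmed: the mapping from object description to grasp point was learned entirely from simulated examples.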



<p>While there are many examples of Deep Learning-based approaches to grasping, SPS 2018 was the first venue where these algorithms were demonstrated on real industrial hardware by Siemens: a grasp-capable neural network deployed on an industrial platform called the SIMATIC TM NPU, the first industrial controller for AI applications. At the Hannover Fair 2019, an upgraded version of the algorithm was combined with an object-recognition neural network for a bin-picking demonstration. The result was the first Deep Learning-based bin picking fully implemented at the controller level, with a PLC and an NPU.</p>



<p><strong>Solving industrial assembly tasks with reinforcement learning</strong></p>



<p>Another intelligent-robot approach to industrial tasks is based on reinforcement learning (RL). RL is a framework of principles that allows robots to “learn” behaviors through interactions with the environment; i.e., the data comes from actual surroundings. Unlike traditional feedback robot-control methods, the core idea of RL is to provide robot controllers with high-level specifications of&nbsp;<em>what</em>&nbsp;to do instead of&nbsp;<em>how</em>&nbsp;to do it. As the robot interacts with the environment and collects observations and rewards, the RL algorithm reinforces those behaviors that yield high rewards. Recent progress in RL research introduced deep neural networks for modelling the robot’s behavioral policy and its dynamics.</p>



<p>While the idea of RL is very promising for creating autonomous systems that learn, adoption has been limited so far because robots need large amounts of data to learn successful control policies. Executing all of this training on real robot hardware is problematic: it takes a long time and wears out the equipment. Recent research in RL is aimed at reducing the amount of training required on real robots.</p>



<p>In fact, a method called Residual RL has been applied to various real-world assembly tasks in which a robot learned successful assembly procedures. That has been possible because Residual RL requires only a fraction of the learning samples in the real world compared to pure RL. This approach is a form of combined-control behavior, in which part of the problem facing the robot is solved with conventional feedback control (i.e., position control) and the rest is handled by the Residual RL. Siemens Corporate Technology researchers, in collaboration with UC Berkeley, figured out this data-driven approach, with the outputs of the conventional and RL controllers superimposed, forming the complete command for the robot’s actions.</p>



<p>This means that a robot-control problem that can be partially handled with conventional feedback control, e.g. position control, can be broken into two parts: the first is solved with conventional hand-engineered control techniques, and the second with Residual RL. The Residual RL portion also prevents the robot from unsafe &#8220;exploratory behavior&#8221;&#8212;meaning the robot does not damage itself or the environment during learning&#8212;which is an important prerequisite in manufacturing applications.</p>
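<p>The superposition itself is simple to sketch. In this hypothetical fragment, the &#8220;learned&#8221; residual is a fixed stand-in for a trained policy&#8217;s output, and the clamp stands in for the safety constraints on exploratory behavior:</p>

```python
# Residual-control superposition sketch: conventional feedback supplies the
# bulk of the command, and a small learned residual is added on top.

def position_controller(target, current, kp=0.5):
    """Conventional feedback part: proportional control toward the target."""
    return kp * (target - current)

def learned_residual(observation):
    """Stand-in for the RL policy output (e.g. a contact-force correction)."""
    return 0.05 if observation.get("contact") else 0.0

def command(target, current, observation):
    # complete command = conventional output + RL residual, superimposed
    u = position_controller(target, current) + learned_residual(observation)
    return max(-1.0, min(1.0, u))   # clamp: bounds exploratory behavior

u = command(target=1.0, current=0.8, observation={"contact": True})
```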



<p><strong>Robots in the real world</strong></p>



<p>Ultimately, for flexible grasping and object assembly, what researchers want to create is a robot that can solve tasks flexibly by making its own decisions and using its own skills, while the operator specifies only high-level commands. For example, instead of programming the trajectories for a successful grasp, we simply ask the robot to grasp a component and let it decide on the execution.</p>



<p>What does this all mean for the future of the manufacturing industry? AI-enhanced robotics is considered a prerequisite for flexible manufacturing and lot-size-one production. When every single robot motion no longer has to be preprogrammed, robots will become economically viable for rapidly changing product configurations.</p>



<p>This article was posted on the Siemens Blog. Authors: Juan Aparicio Ojea, Head of Research Group Advanced Manufacturing Automation and Eugen Solowjow, Staff Scientist.</p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-enhanced-robotics-and-the-future-of-manufacturing/">AI Enhanced Robotics and The Future of Manufacturing</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ai-enhanced-robotics-and-the-future-of-manufacturing/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>ABI Research predicts 18% drop in new IoT devices in 2020</title>
		<link>https://www.aiuniverse.xyz/abi-research-predicts-18-drop-in-new-iot-devices-in-2020/</link>
					<comments>https://www.aiuniverse.xyz/abi-research-predicts-18-drop-in-new-iot-devices-in-2020/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 01 Jun 2020 06:52:21 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[ABI Research]]></category>
		<category><![CDATA[automotive]]></category>
		<category><![CDATA[COVID-19]]></category>
		<category><![CDATA[Internet of Things]]></category>
		<category><![CDATA[IoT market]]></category>
		<category><![CDATA[MANUFACTURING]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9173</guid>

					<description><![CDATA[<p>Source: futureiot.tech While Internet of Things (IoT) will be integral to the long-term recovery plans of the post-COVID-19 economy worldwide, ABI Research said some facets of the IoT itself <a class="read-more-link" href="https://www.aiuniverse.xyz/abi-research-predicts-18-drop-in-new-iot-devices-in-2020/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/abi-research-predicts-18-drop-in-new-iot-devices-in-2020/">ABI Research predicts 18% drop in new IoT devices in 2020</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: futureiot.tech</p>



<p>While Internet of Things (IoT) will be integral to the long-term recovery plans of the post-COVID-19 economy worldwide, ABI Research said some facets of the IoT itself will be negatively impacted in the short term.</p>



<p>In its latest report “Assessing the Impact of COVID-19 on the IoT Market”, the technology research firm predicts an 18% drop in the net addition of IoT devices in 2020 as a result of manufacturing shut-downs, supply chain interruptions, and changes in connected product availability and demand.</p>



<p>This equates to the loss of 66 million potential Wide Area Network (WAN) connections relative to previous forecasts. Proportionally, the most heavily impacted markets will be fleet and other heavy vehicles/equipment. These are expensive assets that enterprises are buying less of in the interest of cost control. Fixed assets, digital signage, and kiosks also face major impacts, as they are driven by an entertainment and retail sector that has effectively been put on hold by the massive reduction in personal mobility and footfall, and the increased emphasis on online shopping.</p>



<p>“COVID-19’s impact on the IoT is three-fold. Some applications will experience a decline in shipments during 2020, ergo a reduction in the expected growth rate to their installed base. Yet, with no intrinsic change to their desirability and utility, they will return to expected growth in subsequent years,” said Jamie Moss, research director for M2M, IoT and IOE at ABI Research.</p>



<p>He added: &#8220;Some will experience a temporary stall in 2020 that will be compensated for by increased activity immediately after, bringing installed-base expectations back into line. Others will experience fundamental shifts in demand, both positive and negative, for years to come as consumer and enterprise priorities shift in the light of COVID-19.&#8221;</p>



<p>In the consumer space, the passenger-vehicle and connected-car markets are suffering considerably as people stay in place. Yet, with people spending more time at home, improving the function and comfort of that environment is expected to boost smart home revenues. For enterprises, while utility-metering initiatives face delays as home visits are temporarily prohibited, they are expected to bounce back fast. At the same time, asset tracking, inventory management, and condition-based monitoring are all set for greater long-term investment to build better businesses that allow people to do more with less and to reliably run things remotely.</p>



<p>Moss noted the diversity of the IoT and the pragmatic nature of its utility.</p>



<p>&#8220;At ABI Research, we analyse 32 IoT applications: that&#8217;s 32 different types of connected device embedded in the fabric of the world around us. Each provides information on where things are, what their status is, and what actions we must take. To be forewarned is to be forearmed, and the mass use of Microcontroller Unit (MCU)-based Low Power Wide Area (LPWA) sensors can help us make a safer world, where we can quickly respond to threats. The IoT is a market that grows naturally as and when it is right for it to do so, to deliver planned results. And the need for guaranteed outcomes has never been more acute than now.&#8221;</p>
<p>The post <a href="https://www.aiuniverse.xyz/abi-research-predicts-18-drop-in-new-iot-devices-in-2020/">ABI Research predicts 18% drop in new IoT devices in 2020</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/abi-research-predicts-18-drop-in-new-iot-devices-in-2020/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Digital Twins Bridge the Data Gap for Deep Learning</title>
		<link>https://www.aiuniverse.xyz/digital-twins-bridge-the-data-gap-for-deep-learning/</link>
					<comments>https://www.aiuniverse.xyz/digital-twins-bridge-the-data-gap-for-deep-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 01 Jun 2020 06:15:44 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[MANUFACTURING]]></category>
		<category><![CDATA[Semiconductor]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9161</guid>

					<description><![CDATA[<p>Source: eetasia.com In today’s world, data is king. The most highly valued companies in the world, whether Amazon, Apple, Facebook, Google, Walmart, or Netflix, have one thing <a class="read-more-link" href="https://www.aiuniverse.xyz/digital-twins-bridge-the-data-gap-for-deep-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/digital-twins-bridge-the-data-gap-for-deep-learning/">Digital Twins Bridge the Data Gap for Deep Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: eetasia.com</p>



<p>In today’s world, data is king. The most highly valued companies in the world, whether Amazon, Apple, Facebook, Google, Walmart, or Netflix, have one thing in common: data is their most valuable asset. All of these companies have put that data to work using deep learning (DL). No matter what business you’re in, your data is your most valuable asset. You need to protect that asset by doing your own DL. The most important ingredient for DL success is having enough of the right kinds of data. That’s where digital twins come in.</p>



<p>A digital twin is a digital replica of an actual physical process, system, or device. Most importantly, digital twins can be <em>the</em> key to success for DL projects &#8212; especially DL projects that involve processes that are dangerous, expensive, or time-consuming.</p>



<p><strong>The promise of deep learning</strong></p>



<p>By now, nearly every industry — including semiconductor manufacturing — has recognized the potential of DL to create strategic advantage. DL employs neural networks to perform advanced pattern-matching. DL has been applied to such varied fields as facial and speech recognition, medical image analysis, bioinformatics, and materials inspection. In semiconductor manufacturing, DL has already been applied in areas such as defect classification. Most leading companies are scrambling to gain an advantage on this promising new playing field.</p>



<p>As companies start to explore DL and how it can help them, many are finding two things: first, it’s easy to get to a DL prototype, and second, it’s harder to get from “good prototype” results to “production-quality” results. With all of the low- to no-cost DL platforms, tools, and kits available today, initial development for DL applications is very quick and relatively easy in comparison to conventional application development. However, productizing DL applications isn’t any easier — and can be harder — than productizing conventional applications. The reason for this is data. Having enough data — and enough of the right kinds of data — is very often the difference between a DL application that doesn’t deliver production-quality results and one that revolutionizes the way you approach a particular problem.</p>



<p><strong>The DL data gap</strong></p>



<p>DL is based on pattern-matching, which is “programmed” by presenting neural networks with data that represent a target to be matched. Masses of data train a network to recognize the target (and to know when it’s not the target).</p>



<p>DL is incredibly powerful for quickly producing prototypes and providing proof-of-concepts. But the real advantage of DL isn’t the speed of development — it’s the fact that it unlocks the power of data to do things that can’t be done any other way.</p>



<p>The success of any DL application depends on the depth and breadth of the data set used in training. If the training data set is too small, too narrow, or too “normal,” a DL approach will not do better than standard techniques — in fact, it might do worse. It’s important to train a network with data representing all important states or presentations, in sufficient volumes for the network to learn to capture the correct essence of the problem at hand.</p>



<p>The difficulty for some fields, such as autonomous driving or semiconductor manufacturing, is that some of the most serious anomalous conditions occur (thankfully) very rarely. However, if you want a DL application to recognize a child darting in front of a car — or a fatal photomask error — you have to train the networks with a multitude of these scenarios, which don’t exist in any great volume in the real world (Figure 1). Digital twins are the only way to create enough anomalous data to properly train the networks to recognize these conditions.</p>



<p><strong>Digital twins bridge the gap</strong></p>



<p>Digital twins — virtual representations of actual processes, systems, and devices — are a key tool for creating the right amount of the right kind of data to train DL networks successfully. Last July, I was part of a TechTALK session at SEMICON West 2019 hosted by Dave Kelf of Breker Verification Systems, Inc., titled, “Applied AI in Design-to-Manufacturing.” In this panel session, I outlined the concept of using digital twins in semiconductor manufacturing. You can read an article covering this panel, written by the late and sorely missed Randy Smith for Semiwiki.</p>



<p>There are several reasons to use digital twins to create DL training data:</p>



<ul class="wp-block-list"><li>You may be in a position where the data you work with belongs to your customers, so you can’t use it for DL training.</li><li>You may be in a position where the resources you need to create the data you need for DL are fully committed to customer projects.</li><li>You have developed DL applications but have found that you need specific data to tune and train your neural networks to reach the required level of accuracy, but the cost of using mask shop/fab resources to create the data is prohibitive.</li><li>You know that you will not be able to find enough anomalous data to train your DL networks adequately. This last case is nearly universal.</li></ul>



<p>Ideally, to maintain full control over the data, you need three digital twins: a digital twin of the process/equipment that precedes yours in the manufacturing flow to provide input data for the simulation of your own process; a digital twin of your own process/equipment; and a digital twin of the process/equipment that follows yours in the manufacturing flow so that you can feed your output downstream for validation.</p>
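<p>Conceptually, the three twins form a simple simulation chain. A hypothetical sketch with toy numeric &#8220;processes&#8221; standing in for real equipment models:</p>

```python
# Three-twin chain sketch: an upstream twin produces input data, your own
# process twin transforms it, and a downstream twin validates the output.
# The transforms and bounds below are invented for illustration.

def upstream_twin(seed):
    """Twin of the preceding process: generates input for your simulation."""
    return [seed + i * 0.1 for i in range(5)]

def own_process_twin(inputs):
    """Twin of your own process/equipment."""
    return [round(x * 2.0, 3) for x in inputs]

def downstream_twin(outputs):
    """Twin of the following process: validates your output downstream."""
    return all(0.0 <= x <= 10.0 for x in outputs)

inputs = upstream_twin(seed=1.0)
outputs = own_process_twin(inputs)
valid = downstream_twin(outputs)    # full control over the whole data chain
```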



<p>At the 2019 SPIE Photomask Technology conference, D2S presented a paper<sup>1</sup> demonstrating the creation of two digital twins &#8212; a scanning electron microscope (SEM) digital twin, and a curvilinear inverse lithography technology (ILT) digital twin &#8212; using DL techniques (Figure 2 shows the output of the SEM digital twin). While the output of digital twins in general is not accurate enough for manufacturing, these digital twins have been used both for training DL neural networks and for validation. Importantly, these digital twins were generated by DL, rather than through simulation. This is an example of using DL as a tool to generate data needed to do other DL, and it demonstrates the compounding benefits of investing in DL.</p>



<p>All of this may sound like a lot of work — why not use a consulting company that will do DL for you? Because, remember, data is king! Protect that data and do DL yourself. Thankfully, there is an established path to success for you to follow.</p>



<p>First, you need to identify a project where DL will have an impact. You do need to choose carefully &#8212; DL is pattern-matching, so you need to pick something that falls into that realm. Image-based applications, such as defect categorization, are obvious matches. Less obvious, but very powerful, are applications such as automatic discovery from machine logs. All of the equipment in the fab creates masses of operational data, which is rarely referenced until something goes wrong. Instead of using this valuable data merely as an after-the-fact diagnostic tool, you could monitor it across the fab on an ongoing basis and train DL applications to flag patterns that precede problems, letting you identify and correct issues before they have an impact and saving downtime.</p>
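<p>As a deliberately simplified, non-deep stand-in for that monitoring idea, the sketch below flags log readings that deviate sharply from a rolling baseline; a production system would train a DL model (e.g. a sequence model or autoencoder) on the same streams to catch subtler precursors. The readings are synthetic:</p>

```python
# Flag log metrics that break from their recent statistical baseline,
# a stand-in for a learned pattern detector over equipment logs.

def rolling_anomalies(values, window=20, threshold=3.0):
    """Return indices whose value deviates more than `threshold` standard
    deviations from the trailing window's mean."""
    flags = []
    for i in range(window, len(values)):
        hist = values[i - window:i]
        mean = sum(hist) / window
        var = sum((v - mean) ** 2 for v in hist) / window
        std = var ** 0.5 or 1e-9          # guard against a flat window
        if abs(values[i] - mean) / std > threshold:
            flags.append(i)
    return flags

# synthetic chamber-temperature log: normal cycling, then a spike at index 40
readings = [20.0 + 0.1 * (i % 5) for i in range(40)] + [25.0] + \
           [20.0 + 0.1 * (i % 5) for i in range(10)]
flags = rolling_anomalies(readings)       # the spike is flagged
```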



<p>Mycronic, for example, disclosed during an eBeam Initiative lunchtime talk at the 2020 SPIE Advanced Lithography Conference how the company put DL to work using data from its machine log files to predict anomalies like “mura” (uneven brightness effects that are annoying to the human eye, but that are notoriously difficult for image-processing algorithms to detect) on flat-panel display (FPD) masks.</p>



<p>In general, tedious and error-prone processes that human operators perform, but that are difficult to automate with traditional algorithms, are good candidates for deep learning. In these problems, whether they involve visual inspection or not, a human professional examining a specific situation has a high probability of performing the task correctly. But presented with many instances of similar situations, humans make mistakes and become increasingly unreliable. DL, given one particular situation, may not do as well as a human can, but its probability of success holds constant across unlimited instances and unlimited time. Humans make more mistakes as the volume of situations or the time spent on the task increases; DL&#8217;s probability of success does not degrade.</p>
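<p>The reliability argument can be made concrete with a toy calculation: a model with constant per-task accuracy versus a human whose accuracy degrades with volume. The fatigue rate here is invented purely for illustration.</p>

```python
def run_success_prob(per_task_accuracy, n_tasks, fatigue=0.0):
    """Probability of performing every one of `n_tasks` correctly when
    per-task accuracy drops by `fatigue` (absolute) after each task."""
    p_all, p = 1.0, per_task_accuracy
    for _ in range(n_tasks):
        p_all *= max(p, 0.0)
        p = max(p - fatigue, 0.0)
    return p_all

# A model with fixed 99% accuracy vs. a human who starts at 99% but
# loses 0.01 percentage points of accuracy per item inspected.
model = run_success_prob(0.99, 1000)
human = run_success_prob(0.99, 1000, fatigue=0.0001)
print(model > human)  # → True
```

<p>Even a tiny per-item fatigue term compounds over a long run, which is why constant per-item accuracy wins at volume.</p>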



<p><strong>Help to bridge the gap to DL success</strong></p>



<p>Once you’ve identified a DL project, there are various resources available that can put you on the path to success while still enabling you to maintain strict control of your own data. If you’re new to DL and would like comprehensive support for your pilot DL project(s), you can join the Center for Deep Learning in Electronics Manufacturing (CDLe, www.cdle.ai), an alliance of industry leaders that pools talent and resources to advance the state of the art in DL for our unique problem space and to accelerate the adoption of DL in each member company’s products, improving our respective offerings for our customers.</p>



<p>If you’ve already started down the road with your DL projects but have encountered issues due to the DL data gap, D2S can help you to build the digital twins you need to augment and tune your data sets for DL success.</p>
<p>The post <a href="https://www.aiuniverse.xyz/digital-twins-bridge-the-data-gap-for-deep-learning/">Digital Twins Bridge the Data Gap for Deep Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/digital-twins-bridge-the-data-gap-for-deep-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Overcoming AI and Machine Learning barriers in manufacturing</title>
		<link>https://www.aiuniverse.xyz/overcoming-ai-and-machine-learning-barriers-in-manufacturing/</link>
					<comments>https://www.aiuniverse.xyz/overcoming-ai-and-machine-learning-barriers-in-manufacturing/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 11 Apr 2020 12:58:04 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[MANUFACTURING]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8139</guid>

					<description><![CDATA[<p>Source: manufacturingglobal.com There has been a considerable amount of hype around Artificial Intelligence (AI) and Machine Learning (ML) technologies in the last five or so years. So <a class="read-more-link" href="https://www.aiuniverse.xyz/overcoming-ai-and-machine-learning-barriers-in-manufacturing/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/overcoming-ai-and-machine-learning-barriers-in-manufacturing/">Overcoming AI and Machine Learning barriers in manufacturing</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: manufacturingglobal.com</p>



<p>There has been a considerable amount of hype around Artificial Intelligence (AI) and Machine Learning (ML) technologies in the last five or so years. So much so that AI has become somewhat of a buzzword – full of ideas and promise, but something that is quite tricky to execute in practice.</p>



<p>At present, this means that the challenge we run into with AI and ML is a healthy dose of skepticism. For example, we’ve seen several large companies adopt these capabilities, often announcing they intend to revolutionize operations and output with such technologies, but then failing to deliver. In turn, the ongoing evolution and adoption of these technologies are consequently knocked back. With so many potential applications for AI and ML, it can be daunting to identify opportunities for technology adoption that can demonstrate real and quantifiable return on investment.</p>



<p>Many industries have effectively reached a sticking point in their adoption of AI and ML technologies. Typically, this has been driven by unproven start-up companies delivering some type of open-source technology, placing a flashy exterior around it, and then relying on a customer to act as a development partner for it.</p>



<p>However, this is the primary problem – customers are not looking for prototype, unproven software to run their industrial operations. Instead of offering a revolutionary digital experience, many companies continue to fuel that initial skepticism of AI and ML by delivering poorly planned pilot projects that leave them stalled in pilot purgatory: continuous feature creep and a regular rollout of new beta versions of software. This practice of the never-ending pilot project makes customers reluctant to engage further, even with innovative companies that are truly driving digital transformation in their sector with proven AI and ML technology.</p>



<p><strong>Innovation with direction</strong></p>



<p>A way to overcome these challenges is to demonstrate proof points to the customer. This means showing that AI and ML technologies are real and live up to what we imagine them to be. Naturally, some companies have adopted AI and ML better than others, but since much of this technology is so new, many are still struggling to identify when and where to apply it.</p>



<p>For example, many are keen to use AI to track customer interests and needs. In fact, even greater value can be discovered when applying AI, in the form of predictive asset analytics, to industrial process control and manufacturing equipment. AI and ML can provide detailed, real-time insights into machinery operations, exposing patterns that humans cannot necessarily spot – insights that can have a huge impact on a business’s bottom line.</p>



<p>AI and ML are becoming incredibly popular in manufacturing industries, with advanced operations analysis often being driven by AI. Many are taking these technologies and applying them to their operating experience to see where economic savings can be made. All organizations want to save money where they can, and AI makes this possible; these same organizations are then usually keen to invest in further digital technologies. Successfully implementing an AI or ML technology can significantly reduce OPEX and further fuel the digital transformation of the overall enterprise.</p>



<p><strong>Industrial impact</strong></p>



<p>Understandably, we are seeing the value of AI and ML best demonstrated in the manufacturing sector in both process and batch automation – for example, using AI to work out how to optimize a process to achieve higher production yields and better production quality. In the food and beverage sectors, AI is being used to monitor production-line oven temperatures, flagging anomalies &#8211; including moisture, stack height and color &#8211; in a continually optimized process to reach the coveted golden batch.</p>



<p>The other side of this is using predictive maintenance to monitor the behavior of equipment and improve operational safety and asset reliability. AI and ML are fused together to create predictive and prescriptive maintenance: AI spots anomalies in the behavior of assets, and a recommended solution is prescribed to remediate the potential equipment failure. Predictive and prescriptive maintenance help reduce pressure on O&amp;M costs, improve safety, and reduce unplanned shutdowns.</p>
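<p>A minimal sketch of this predict-then-prescribe loop might look like the following. The sensor names, limits, and prescriptions are invented for illustration; a production system would derive them from learned anomaly models and maintenance records.</p>

```python
# Hypothetical anomaly-to-action table; real systems would curate these
# prescriptions from maintenance history rather than hard-code them.
PRESCRIPTIONS = {
    "bearing_vibration": "schedule bearing replacement within 2 weeks",
    "motor_overtemp": "inspect cooling fan and clean air filter",
    "pressure_drift": "recalibrate pressure sensor",
}

SENSOR_TO_ANOMALY = {
    "vibration_mm_s": "bearing_vibration",
    "temp_c": "motor_overtemp",
    "pressure_bar": "pressure_drift",
}

def prescribe(sensor, value, limits):
    """Return a maintenance prescription if `value` falls outside the
    (low, high) band registered for `sensor`, else None."""
    low, high = limits[sensor]
    if low <= value <= high:
        return None
    return PRESCRIPTIONS[SENSOR_TO_ANOMALY[sensor]]

limits = {"vibration_mm_s": (0.0, 4.5), "temp_c": (10.0, 80.0),
          "pressure_bar": (1.0, 6.0)}
print(prescribe("vibration_mm_s", 7.2, limits))
# → schedule bearing replacement within 2 weeks
print(prescribe("temp_c", 45.0, limits))  # → None
```

<p>The prescriptive step is what separates this from plain alarming: the operator receives an action, not just a flag.</p>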



<p><strong>Technological relations</strong></p>



<p>AI, machine learning, and predictive maintenance technologies are enabling new connections to be made within the production line, offering new insights and suggestions for future operations.</p>



<p>Now is the time for organizations to realize that this adoption and innovation is offering new clarity on the relationship between different elements of the production cycle &#8211; paving the way for new methods to create better products at both faster speeds and lower costs.</p>
<p>The post <a href="https://www.aiuniverse.xyz/overcoming-ai-and-machine-learning-barriers-in-manufacturing/">Overcoming AI and Machine Learning barriers in manufacturing</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/overcoming-ai-and-machine-learning-barriers-in-manufacturing/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How to Fit Artificial Intelligence into Manufacturing</title>
		<link>https://www.aiuniverse.xyz/how-to-fit-artificial-intelligence-into-manufacturing/</link>
					<comments>https://www.aiuniverse.xyz/how-to-fit-artificial-intelligence-into-manufacturing/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 20 Sep 2019 06:52:46 +0000</pubDate>
				<category><![CDATA[AI-ONE]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[companies]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[MANUFACTURING]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4516</guid>

					<description><![CDATA[<p>Source: machinedesign.com Even early concerns related to artificial intelligence (AI) have not appeared to slow its adoption. Some companies are already seeing benefit and experts are saying <a class="read-more-link" href="https://www.aiuniverse.xyz/how-to-fit-artificial-intelligence-into-manufacturing/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-to-fit-artificial-intelligence-into-manufacturing/">How to Fit Artificial Intelligence into Manufacturing</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: machinedesign.com</p>



<p>Even early concerns related to artificial intelligence (AI) have not appeared to slow its adoption. Some companies are already seeing benefits, and experts say companies that do not adopt new technology will not be able to compete over time. However, AI adoption seems to be moving slowly despite early successful case studies.</p>



<h4 class="wp-block-heading">Why Is AI Moving So Slowly in Manufacturing?</h4>



<p>AI is growing, but exact numbers can be difficult to obtain, as the definitions of technologies such as machine learning, AI, and machine vision are often blurred. For example, a robotic arm with a camera that inspects parts might be advertised as a machine learning or AI device. While the device could work well, it might only be comparing the images it takes to others that were manually added to a library. Some would argue this is not a machine learning device, as it is making a preprogrammed decision, not one “learned” from the machine’s experience.</p>



<p>According to a Global Market Insights report published in February this year, the market size for AI in manufacturing is estimated to have surpassed $1 billion in 2018 and is anticipated to grow at a CAGR of more than 40% from 2019 to 2025. But other sources insist that AI is moving more slowly. These sources are often comparing AI case studies to the entire size of the manufacturing market, talking about individual companies’ investments, or considering AI specifically at mass scale. From this perspective, AI growth is slower, and that is for a few reasons beyond those already mentioned.</p>



<p>AI is still a new technology. Much of the success has been in the form of testbeds, not full-scale projects. This is because in large companies, one small adjustment could affect billions of dollars, so managers don’t want to commit to full-scale projects until they&#8217;ve found the best solution. Additionally, companies of any size need to justify or guarantee a return on investment (ROI). This leads to smaller projects, a focus on low-hanging fruit, or projects that can be isolated as a testbed.</p>



<p><em>While the current investment wave of AI is at an all-time high, high-level adoption remains low. A research paper from the McKinsey Global Institute in 2017, “Artificial Intelligence, The Next Digital Frontier?”, reported high investment into AI. Early adopters of AI have common characteristics: digital maturity, larger business models, the adoption of AI into core activities, the adoption of multiple technologies, a focus on growth over savings, and C-level support for AI. This diagram highlights areas where money has been invested into AI R&amp;D. (Courtesy: McKinsey Global Institute)</em></p>



<p>Smaller or isolated projects might work well as a test, but theoretically, AI should return greater benefits when operating at larger scales. This generally requires more connectivity and data to maintain accuracy, which points to the next reason why AI might be moving slowly: scale and connectivity.</p>



<p>Many companies have legacy equipment that does not provide data or a way to send data to another location. New technology is working on retrofitting legacy equipment, but then design engineers may have infrastructure problems. For example, some factories might lack easy access to power for smart sensors or an IT network to get the data where it can offer greater benefit.</p>



<p>While AI is growing and by all accounts will continue to, maturity, confidence, ROI, scaling, and connectivity might be slowing mass adoption.</p>



<h4 class="wp-block-heading">What Can AI Do for Manufacturing and Design?</h4>



<p>This section may be the most difficult, as it relates to the blurred lines and buzzwords previously mentioned. Designers and manufacturers have used CAD tools, machine vision, and predictive maintenance before. AI technology is advancing these technologies to new heights, but where any individual device sits on the AI spectrum can be debated.</p>



<h4 class="wp-block-heading">AI CAD Tools</h4>



<p>Design engineers have specifications they must meet when developing new parts and devices. To do this, it is important to understand a plethora of information, from materials and processing to the applications and needs of the end user. For theoretical data, CAD programs have tools like finite element analysis (FEA), and the design engineer must add the data manually or select it from a library.</p>



<p>One new tool in CAD technology uses AI to create a generative design. This takes the specifications and inputs needed for a design and generates all possible materials, geometries, and even costs. While new features are user-friendly, the technology is only as good as the user.</p>



<p>Not only do you need the knowledge of what should be added to the specification and inputs, but the user still needs to review the possibilities to select the best solution. This type of AI CAD technology helps amplify design engineers’ abilities and saves time because the design engineer doesn’t have to manually design multiple iterations.</p>
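<p>As a toy stand-in for the search a generative-design tool performs, the sketch below enumerates material/thickness combinations and keeps only those meeting the designer&#8217;s constraints. The material properties and the constraint model are invented for illustration; real tools explore geometry and load paths, not a lookup table.</p>

```python
from itertools import product

# Hypothetical material properties: (density g/cm^3, stiffness score).
MATERIALS = {"aluminum": (2.7, 70), "steel": (7.8, 200), "nylon": (1.1, 3)}

def generate_candidates(thicknesses_mm, area_cm2, max_mass_g, min_stiffness):
    """Enumerate material/thickness combinations, keeping only those that
    satisfy the mass and (toy) stiffness constraints."""
    kept = []
    for (name, (density, stiffness)), t in product(MATERIALS.items(), thicknesses_mm):
        mass = density * area_cm2 * (t / 10.0)  # thickness mm -> cm
        if mass <= max_mass_g and stiffness * t >= min_stiffness:
            kept.append((name, t, round(mass, 1)))
    return kept

candidates = generate_candidates([1.0, 2.0, 4.0], area_cm2=50.0,
                                 max_mass_g=100.0, min_stiffness=140.0)
print(candidates)
```

<p>Nylon never clears the stiffness bar and the thickest steel option exceeds the mass budget, so the surviving candidates are exactly the trade-off set the engineer then reviews by hand.</p>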



<p><em>This multi-material gripper was automatically designed using topology optimization. A user specifies desired grip direction and the applied forces. The shape of the part and the layout of the materials (rigid and elastic) are computed automatically to obtain a digital representation that can be directly 3D printed. Click&nbsp;<a href="http://news.mit.edu/2017/designing-microstructure-3-d-printed-objects-0804" target="_blank" rel="noreferrer noopener">here</a>&nbsp;for more details.&nbsp;</em></p>



<p>Currently, generative design will most likely produce a part that isn’t easy to manufacture using traditional processes. It can work well for 3D printing or additive processes. Companies are working on adding variables to the software to consider traditional, or subtractive processes, which should open AI CAD design technology to the masses.</p>



<h3 class="wp-block-heading">Digital Twins</h3>



<p>Moving forward, AI technology is building increasingly accurate models using these CAD and AI tools to include theoretical and real-world data. This combination of data is building accurate digital twins. Having a digital model lets engineers accurately predict wear, movement, and interactions with other devices.</p>



<p>AI technology in digital twins gives engineers the ability to see and test parts, entire machines, production lines, and more, all digitally. With today’s ability to rent cloud computing power, both large and small companies can afford to use AI CAD technology to find bottlenecks, limitations, mistakes, or better features to accelerate time to market. Having a mass of data and mapping interactions of materials, machines, and processes lets engineers see how everything is connected and interacts. Design engineers will know how changing design specifications would affect the product, production line, supply chain, and maintenance.</p>
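<p>The what-if capability of a digital twin can be illustrated with a deliberately tiny wear model. The quadratic speed penalty and every number below are invented for illustration, not a real tribology model.</p>

```python
def simulate_wear(cycles, wear_per_cycle, speed_factor=1.0, limit=1.0):
    """Step a toy wear model and return (accumulated_wear, cycles_to_limit).
    Wear accrues faster at higher speed (an invented quadratic penalty)."""
    rate = wear_per_cycle * speed_factor ** 2
    wear = rate * cycles
    cycles_to_limit = round(limit / rate)
    return wear, cycles_to_limit

# Ask the "twin" how much tool life a 20% speed increase would cost.
_, life_nominal = simulate_wear(0, 1e-5)
_, life_fast = simulate_wear(0, 1e-5, speed_factor=1.2)
print(life_nominal, life_fast)  # → 100000 69444
```

<p>Even at this toy scale, the pattern is the point: the question is asked of the model, not of the physical machine, so no production time is spent answering it.</p>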



<h4 class="wp-block-heading">Predictive Maintenance</h4>



<p>A large concern for manufacturers is downtime. While IoT and connectivity are helping predict and detect problems before they occur, AI technology could keep things running even more smoothly. For example, an engineer looking at a set of operational data for a machine might think a vibration change means the cutting tool needs to be replaced or sharpened soon.</p>



<p><em>Preventive maintenance agreements can increase system availability. Connectivity gives specialists the ability to regularly inspect parts, and as AI programs advance, parts can be monitored around the clock. Software can send notifications to engineers or specialists to alert them to changes in operation and suggest maintenance to optimize machines’ uptime. (Credit: Bosch Rexroth)</em></p>



<p>It would be difficult for an engineer to know all the information that could be affecting the vibration of a machine. However, an AI system could instantly take the data, the machine’s history, and other parameters to suggest a more informed decision. In this example, perhaps a material or speed change caused the vibration to increase due to resonance or the natural frequency of the material. Accuracy is improved by connecting large datasets, processing data quickly to find patterns (or the lack of them), and using AI to learn from past and present data to deliver more accurate models that help engineers make more informed decisions.</p>
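<p>One simple way to frame “take the data and the machine’s history to suggest a cause” is a nearest-neighbor lookup over past incidents, sketched below with invented sensor signatures. A real system would learn a model over far richer features, but the lookup captures the idea of letting history inform the diagnosis.</p>

```python
def diagnose(reading, history):
    """Return the cause of the historical incident whose sensor signature
    is closest (Euclidean distance) to the current `reading`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(history, key=lambda rec: dist(rec["signature"], reading))["cause"]

# Hypothetical incident log: (vibration mm/s, spindle speed krpm, feed mm/min).
history = [
    {"signature": (6.0, 8.0, 120.0), "cause": "worn cutting tool"},
    {"signature": (5.5, 12.0, 120.0), "cause": "resonance at high spindle speed"},
    {"signature": (2.0, 8.0, 200.0), "cause": "normal operation"},
]
print(diagnose((5.7, 11.5, 118.0), history))  # → resonance at high spindle speed
```

<p>Here the elevated vibration alone would suggest a worn tool, but the high spindle speed pulls the match toward the resonance incident – exactly the kind of context an engineer might miss.</p>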



<h4 class="wp-block-heading">AI Changes Factories and Education</h4>



<p>Eventually, connectivity and AI will grow to the point where a program could update or improve a design autonomously based on real-world data. Mass adoption of AI technology can lead to mass customization and greatly increased flexibility. This will not only keep companies competitive but might have a ripple effect from industry to training and education.</p>



<p>“We are currently at stage one [of AI adoption] with information on Google taking your data and making suggestions. Stage two will be more disruptive, replacing some traditional training and education,” said Markus J. Buehler, a materials scientist and engineer at the Massachusetts Institute of Technology. “As we move to an AI neural-network approach…future students will only need to know how to work with the AI programs and the computer will do the physics.”</p>



<p>The speed of technology is increasing, and it is harder to compete if a company falls behind. Industry doesn’t have time for four-year degrees. Education could shift toward streamlined, employer-focused classes that teach students how to use AI programs. Some experts say this ripple effect is not only inevitable but necessary for a company’s survival. However, legacy equipment, confidence, a focus on ROI, and other factors are slowing AI’s adoption. According to Forbes Insights research, more than half of respondents (56%) in the automotive and manufacturing sectors plan to increase AI spending by less than 10%.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-to-fit-artificial-intelligence-into-manufacturing/">How to Fit Artificial Intelligence into Manufacturing</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-to-fit-artificial-intelligence-into-manufacturing/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
