<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Global IT Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/global-it/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/global-it/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Thu, 21 Nov 2019 05:29:20 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>How to choose the right AI model for your business</title>
		<link>https://www.aiuniverse.xyz/how-to-choose-the-right-ai-model-for-your-business/</link>
					<comments>https://www.aiuniverse.xyz/how-to-choose-the-right-ai-model-for-your-business/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 21 Nov 2019 05:29:19 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[DevOps practitioners]]></category>
		<category><![CDATA[Global IT]]></category>
		<category><![CDATA[IT skills]]></category>
		<category><![CDATA[software developer kit]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5295</guid>

					<description><![CDATA[<p>Source:-thehindubusinessline.com Organisations are looking to AI models to bring about digital transformation in business. But understanding which kinds of models are most suited to the business needs <a class="read-more-link" href="https://www.aiuniverse.xyz/how-to-choose-the-right-ai-model-for-your-business/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-to-choose-the-right-ai-model-for-your-business/">How to choose the right AI model for your business</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source:-thehindubusinessline.com<br></p>



<h4 class="wp-block-heading">Organisations are looking to AI models to bring about digital transformation in business. But understanding which kinds of models are most suited to business needs is crucial</h4>



<p>As enterprises discover the benefits of artificial intelligence (AI), they realise that the journey to AI is long and bumpy. Many CIOs (chief information officers) want AI to quickly transform their business without identifying which processes will perform better with AI. In an ideal world, one can pick any process, infuse it with AI and discover the pros and cons along the way. The best way to learn is on real projects rather than through theoretical case studies.</p>



<p>But there needs to be some order to the madness. Can we generalise some patterns that could make it easy for business owners to apply AI? Let’s discuss some scalable, enterprise-relevant AI patterns.</p>



<p><strong>Discover new processes:</strong>&nbsp;This is about finding new opportunities afforded by AI. Consider the example of a defect in a machine that degrades over time. An experienced mechanical engineer can deduce the condition of the machine from the sound it generates. What if there were a process wherein the mechanical engineer documents what he ‘hears’ and how he maintains the machine? This is where an acoustic AI model can be created, which analyses sound samples of the machine to predict failures. It is common sense for an engineer that a noisy machine is the first sign of mechanical failure; shouldn’t this important data be put to use?</p>



<p>Imagine driving down a highway when an alert pops up on the dashboard saying, “Possible low lubricant”. It confirms the driver’s gut feeling that something is wrong with the car.</p>



<p>Most of the acoustic models today use humans to classify the data fed into the model; over time, the model learns to classify on its own. For example, we need to gather at least 10,000 sound clips of failed and normal ball bearings to classify the anomalous sounds and detect the issue. Not an easy task, but it can tremendously help in predicting failures in mines, underground subways, nuclear plants and highly critical sites unapproachable by humans.</p>
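<p>A minimal sketch of that idea, with synthetic signals standing in for real bearing recordings (the sample rate, the defect tone frequency and the coarse band-energy features are all invented for illustration, not taken from any deployed system):</p>

```python
# Illustrative sketch: classify synthetic "bearing sound" clips as
# normal vs failing using coarse spectral features. All parameters
# here are assumptions made for the example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
SR = 8000          # assumed sample rate (Hz)
CLIP = SR // 2     # half-second clips

def make_clip(failing: bool) -> np.ndarray:
    t = np.arange(CLIP) / SR
    clip = np.sin(2 * np.pi * 120 * t)             # base rotation hum
    if failing:
        clip += 0.6 * np.sin(2 * np.pi * 900 * t)  # high-pitched defect tone
    return clip + 0.3 * rng.standard_normal(CLIP)

def features(clip: np.ndarray) -> np.ndarray:
    spectrum = np.abs(np.fft.rfft(clip))
    # energy in 16 coarse frequency bands as the feature vector
    return np.array([band.sum() for band in np.array_split(spectrum, 16)])

X = np.array([features(make_clip(i % 2 == 1)) for i in range(400)])
y = np.array([i % 2 for i in range(400)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

<p>In practice the labelled clips would come from the human classification step described above, and the feature extraction would be far richer than raw FFT band energies.</p>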



<p><strong>Reinvigorate old processes:</strong>&nbsp;This pattern improves existing processes by introducing AI. For instance, almost every organisation collects data from employees’ badges, which provides information on access control and employee movement. Reinvigorating this process by adding occupancy sensors and then AI helps derive deeper insights, such as the number of people per floor. This data can be fed into an AI model to predict the occupancy rate, helping organisations reduce the cost associated with each desk and decide how much office space should be leased or vacated. The ability to predict client churn, machine failure, energy usage etc. are all examples of how old processes can be reinvigorated with new AI models.</p>
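<p>The occupancy-prediction idea could be sketched roughly as follows; the badge-derived occupancy series and the day-of-week pattern below are synthetic assumptions made for illustration, not real telemetry:</p>

```python
# Toy sketch: predict desk occupancy from day-of-week plus the
# previous day's occupancy. The data is fabricated for the example.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
days = np.arange(365)
dow = days % 7
# synthetic occupancy rate: busy early in the week, quiet at weekends
base = np.where(dow < 4, 0.8, np.where(dow == 4, 0.6, 0.1))
occupancy = np.clip(base + 0.05 * rng.standard_normal(365), 0, 1)

# features: one-hot day of week + yesterday's occupancy
X = np.column_stack([np.eye(7)[dow], np.roll(occupancy, 1)])
model = LinearRegression().fit(X[1:300], occupancy[1:300])
r2 = model.score(X[300:], occupancy[300:])
print(f"R^2 on held-out days: {r2:.2f}")
```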



<p><strong>Unlock data:</strong>&nbsp;Organisations can derive value from their data by applying AI. For example, machine learning algorithms can be used to detect fraud in financial transactions, or even an asset defect that would otherwise go unnoticed by humans. One machine learning model can be fed time-series data to discover patterns of anomalies, while another can be fed asset manuals to find contextual text about the faults. One of the most widely used examples of applying AI to businesses is handling unstructured data in the form of texts, videos and tweets. Several organisations across industries have benefitted from this pattern, including telcos with millions of call records, banks with loan records, manufacturing units with work orders etc.</p>
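<p>As a toy illustration of feeding time-series data to a model to surface anomalies, one could use an isolation forest on transaction amounts; the amounts and the injected outliers below are fabricated for the sketch:</p>

```python
# Flag anomalous transaction amounts with an isolation forest.
# Synthetic data: mostly ~100-unit transactions, with 10 injected
# ~500-unit outliers at every 100th index.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
amounts = rng.normal(100, 10, size=1000)        # normal transactions
amounts[::100] = rng.normal(500, 50, size=10)   # 10 injected outliers

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(amounts.reshape(-1, 1))  # -1 = anomaly
flagged = np.where(labels == -1)[0]
print("flagged indices:", flagged)
```

<p>A real fraud pipeline would of course use many more features than the amount alone, but the pattern is the same: the model learns what “normal” looks like and surfaces the rest for review.</p>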



<p><strong>Opening new channels:</strong>&nbsp;This is another area where we have seen several businesses apply AI successfully. This essentially means starting a new channel of interaction with customers or employees using AI-based virtual assistants with natural language processing technologies. Unlike the dated IVR system, this new channel is helping organisations reach their clients and service them in unique ways.</p>



<p>We can pick any AI process and it is sure to fall into one of the four patterns mentioned above. What is needed is the right understanding of which process to choose, and then the right AI methodology to solve the business problem. Once the process is chosen, along with the right algorithm and data quality, one also needs to check for bias in the model. An explanation of why a certain recommendation is the best one needs to be included too.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-to-choose-the-right-ai-model-for-your-business/">How to choose the right AI model for your business</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-to-choose-the-right-ai-model-for-your-business/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Global Business Intelligence Software Market 2019 – Looker, Microsoft, Tableau, Domo, Qlik</title>
		<link>https://www.aiuniverse.xyz/global-business-intelligence-software-market-2019-looker-microsoft-tableau-domo-qlik/</link>
					<comments>https://www.aiuniverse.xyz/global-business-intelligence-software-market-2019-looker-microsoft-tableau-domo-qlik/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 20 Nov 2019 12:38:49 +0000</pubDate>
				<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[business intelligence]]></category>
		<category><![CDATA[Global IT]]></category>
		<category><![CDATA[IT technology]]></category>
		<category><![CDATA[Microsoft]]></category>
		<category><![CDATA[Software-Market]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5292</guid>

					<description><![CDATA[<p>Source:-galusaustralis.com Global Business Intelligence Software Market is forecast to bring about a fairly desirable remuneration portfolio by the end of the forecast period. Certainly, the report not only includes a <a class="read-more-link" href="https://www.aiuniverse.xyz/global-business-intelligence-software-market-2019-looker-microsoft-tableau-domo-qlik/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/global-business-intelligence-software-market-2019-looker-microsoft-tableau-domo-qlik/">Global Business Intelligence Software Market 2019 – Looker, Microsoft, Tableau, Domo, Qlik</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source:-galusaustralis.com<br></p>



<p style="text-align:left"><strong>Global Business Intelligence Software Market</strong> is forecast to bring about a fairly desirable remuneration portfolio by the end of the forecast period. Certainly, the report not only includes a modest growth rate over the forecast time frame but also contains a reliable overview of this business. The study covers the overall growth opportunities and the valuation this market currently holds. Additionally, the report includes a classified segmentation of the Business Intelligence Software market.</p>



<p><strong>Global Business Intelligence Software Market: Key players</strong></p>



<p>Looker<br>Microsoft<br>Tableau<br>Domo<br>Qlik<br>Zoho<br>SAP<br>Oracle<br>Cognos<br>SAS<br>Information Builders<br>Yellowfin<br>TIBCO<br>MicroStrategy<br>Targit<br>InetSoft</p>



<p><strong>Market Segment by Type covers:</strong></p>



<p>Mobile<br>Cloud</p>



<p><strong>Market Segment by Applications can be divided into:</strong></p>



<p>SMEs<br>Large Organization<br>Other</p>



<p><strong>Regional analysis covers:</strong><br>• North America (USA, Canada, and Mexico)<br>• Europe (Russia, France, Germany, UK, and Italy)<br>• Asia-Pacific (China, Korea, India, Japan, and Southeast Asia)<br>• South America (Brazil, Colombia, Argentina, etc.)<br>• The Middle East and Africa (Nigeria, UAE, Saudi Arabia, Egypt, and South Africa)</p>



<p><strong>Key Highlights of the Business Intelligence Software Market report:</strong><br>• The key details related to the Business Intelligence Software industry, such as the product definition, cost, variety of applications, and demand and supply statistics, are covered in this report<br>• A competitive study of the major players will help all market players in analyzing the latest trends and business strategies<br>• A holistic study of market segments and sub-segments will help readers in planning their business strategies<br>• Figures for the global production market share of the Business Intelligence Software market by type and by application in 2019</p>



<p>The report has provided quantitative and qualitative analysis along with absolute opportunity assessment in the report. Also, the report offers Porter’s Five Forces analysis and PESTLE analysis for more detailed contrast studies. Each section of the report has something valuable that helps companies for improving their sales and marketing strategy, gross margin, and profit margins. Using the report as a tool for gaining insightful Business Intelligence Software market analysis, players can identify the much-required changes in their operation and improve their approach to doing business.</p>



<p>The report provides comprehensive information to identify market segments, which helps improve the quality of business decision-making based on demand, sales, and production at the application and regional levels. Further, the report has been analyzed graphically to make it more effective and understandable. The experts have constructed the detailed 2019 market study in a structured format for better analysis.</p>



<p><strong>Chapters involved in Business Intelligence Software market report:</strong><br>Chapter 1: Market Overview, Drivers, Restraints and Opportunities, Segmentation overview<br>Chapter 2: Market Competition by Manufacturers<br>Chapter 3: Production by Regions<br>Chapter 4: Consumption by Regions<br>Chapter 5: Production, By Types, Revenue and Market share by Types<br>Chapter 6: Consumption, By Applications, Market share (%) and Growth Rate by Applications<br>Chapter 7: Complete profiling and analysis of Manufacturers<br>Chapter 8: Manufacturing cost analysis, Raw materials analysis, Region-wise manufacturing expenses<br>Chapter 9: Industrial Chain, Sourcing Strategy and Downstream Buyers<br>Chapter 10: Marketing Strategy Analysis, Distributors/Traders<br>Chapter 11: Market Effect Factors Analysis<br>Chapter 12: Market Forecast<br>Chapter 13: Business Intelligence Software Research Findings and Conclusion, Appendix, methodology and data source</p>
<p>The post <a href="https://www.aiuniverse.xyz/global-business-intelligence-software-market-2019-looker-microsoft-tableau-domo-qlik/">Global Business Intelligence Software Market 2019 – Looker, Microsoft, Tableau, Domo, Qlik</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/global-business-intelligence-software-market-2019-looker-microsoft-tableau-domo-qlik/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine learning-assisted molecular design for high-performance organic photovoltaic materials</title>
		<link>https://www.aiuniverse.xyz/machine-learning-assisted-molecular-design-for-high-performance-organic-photovoltaic-materials/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-assisted-molecular-design-for-high-performance-organic-photovoltaic-materials/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 20 Nov 2019 11:33:40 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[DevOps development]]></category>
		<category><![CDATA[Electrical Engineering]]></category>
		<category><![CDATA[Global IT]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[technic transformations]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5267</guid>

					<description><![CDATA[<p>Source:-phys.org To synthesize high-performance materials for organic photovoltaics (OPVs) that convert solar radiation into direct current, materials scientists must meaningfully establish the relationship between chemical structures and their photovoltaic properties. In <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-assisted-molecular-design-for-high-performance-organic-photovoltaic-materials/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-assisted-molecular-design-for-high-performance-organic-photovoltaic-materials/">Machine learning-assisted molecular design for high-performance organic photovoltaic materials</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source:-phys.org<br></p>



<p>To synthesize high-performance materials for organic photovoltaics (OPVs) that convert solar radiation into direct current, materials scientists must meaningfully establish the relationship between chemical structures and their photovoltaic properties. In a new study in <em>Science Advances</em>, Wenbo Sun and a team including researchers from the School of Energy and Power Engineering, School of Automation, Computer Science, Electrical Engineering and Green and Intelligent Technology, established a new database of more than 1,700 donor materials using existing literature reports. They used supervised learning with machine learning models to build structure-property relationships and rapidly screen OPV materials, using a variety of inputs for different ML algorithms.</p>



<p>Using molecular fingerprints (which encode the structure of a molecule in binary bits) beyond a length of 1000 bits, Sun et al. obtained high ML prediction accuracy. They verified the reliability of the approach by screening 10 newly designed donor materials for consistency between model predictions and experimental outcomes. The ML results present a powerful tool to prescreen new OPV materials and accelerate the development of OPVs in materials engineering.</p>



<p>Organic photovoltaic (OPV) cells can facilitate direct and cost-effective transformation of solar energy into electricity, with rapid recent growth in power conversion efficiency (PCE). Mainstream OPV research has focused on building a relationship between new OPV molecular structures and their photovoltaic properties. The traditional process typically involves the design and synthesis of photovoltaic materials, followed by the assembly and optimization of photovoltaic cells. Such approaches result in time-consuming research cycles that require delicate control of chemical synthesis, device fabrication, experimental steps and purification. The existing OPV development process is slow and inefficient, with fewer than 2000 OPV donor molecules synthesized and tested so far. However, the data gathered from decades of research work are priceless, with potential value remaining to be fully explored to generate high-performance OPV materials.</p>



<p>To extract useful information from the data, Sun et al. required a sophisticated program to scan through a large dataset and extract relationships from among the features. Since machine learning (ML) provides computational tools to learn and recognize patterns and relationships using a training dataset, the team used a data-driven approach to enable ML and predict diverse material properties. The ML algorithm did not have to understand the chemistry or physics behind the materials properties to accomplish the tasks. Similar methods have recently predicted the activity/properties of materials successfully during materials discovery, drug development and materials design. Prior to ML applications, scientists had generated cheminformatics to establish a useful toolbox.</p>



<p>Materials scientists have only recently explored the applications of ML in the OPV field. In the present work, Sun et al. established a database containing 1719 experimentally tested donor OPV materials gathered from literature. They studied the importance of programming language expression of the molecules first to understand ML performance. They then tested several different types of expressions including images, ASCII strings, two types of descriptors and seven types of molecular fingerprints. They observed the model predictions to be in good agreement with the experimental results. The scientists expect the new approach to greatly accelerate the development of new and highly efficient organic semiconducting materials for OPV research applications.</p>



<p>The research team first transformed the raw data into a machine-readable representation. A variety of expressions exist for the same molecule, comprising vastly different chemical information presented at different levels of abstraction. Using a set of ML models, Sun et al. explored diverse expressions of a molecule by comparing their prediction accuracy for power conversion efficiency (PCE), obtaining a deep-learning model accuracy of 69.41 percent. The relatively unsatisfactory performance was due to the small size of the database. For instance, when the same group previously used a larger number of molecules, up to 50,000, the accuracy of the deep-learning model exceeded 90 percent. To fully train a deep-learning model, researchers must assemble a larger database containing millions of samples.</p>



<p>Sun et al. only had hundreds of molecules in each category at present, making it difficult for the model to extract enough information for higher accuracy. While it is possible to fine-tune a pre-trained model to reduce the amount of data required, thousands of samples are still necessary to capture a sufficient number of features. This leaves open the option of increasing the size of the database when using images to express molecules.</p>



<p>The scientists used five types of supervised ML algorithms in the study: (1) back propagation (BP) neural network (BPNN), (2) deep neural network (DNN), (3) deep learning, (4) support vector machine (SVM) and (5) random forest (RF). These were advanced algorithms, where BPNN, DNN and deep learning were based on the artificial neural network (ANN). The SMILES code (simplified molecular-input line entry system) provided another original expression of a molecule, which Sun et al. used as input for four models. Based on the results, the highest accuracy was approximately 67.84 percent, for the RF model. As before, unlike deep learning, the four classical methods could not extract hidden features. As a whole, SMILES performed worse than images as descriptors of molecules for predicting the PCE (power conversion efficiency) class in the data.</p>



<p>The researchers then used molecular descriptors, which describe the properties of a molecule using an array of numbers instead of a direct expression of the chemical structure. The research team used two types of descriptors, PaDEL and RDKit, in the study. Extensive analyses across all ML models showed that a large descriptor set meant more descriptors irrelevant to PCE, affecting ANN performance, while a small descriptor set carried insufficient chemical information to effectively train the ML models. When using molecular descriptors as input in ML approaches, the key lies in finding appropriate descriptors directly related to the target property.</p>



<p>The team next used molecular fingerprints, which are typically designed to represent molecules as mathematical objects and were originally created to identify isomers. For large-scale database screening, a molecule is represented as an array of bits containing &#8220;1&#8221;s and &#8220;0&#8221;s that describe the presence or absence of specific substructures or patterns within it. Sun et al. used seven types of fingerprints as inputs to train the ML models and considered the influence of the fingerprint length on the prediction performance of the different models. For instance, molecular access system (MACCS) fingerprints contained 166 bits and were the shortest input; the results were unsatisfactory due to their limited information.</p>
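<p>As a toy illustration of the bit-array idea only (real chemical fingerprints such as MACCS or those in RDKit hash chemically meaningful substructures, not raw characters), one can hash short substrings of a SMILES string into a fixed-length bit vector:</p>

```python
# Toy fingerprint: hash short substrings of a SMILES string into a
# 1024-bit vector. Purely illustrative; not a chemical fingerprint.
def toy_fingerprint(smiles: str, n_bits: int = 1024) -> list[int]:
    bits = [0] * n_bits
    for size in (2, 3, 4):                      # substring "radii"
        for i in range(len(smiles) - size + 1):
            fragment = smiles[i:i + size]
            bits[hash(fragment) % n_bits] = 1   # set the fragment's bit
    return bits

fp = toy_fingerprint("CC1=CC=C(C=C1)N")  # an example SMILES string
print(sum(fp), "of", len(fp), "bits set")
```

<p>Each set bit marks the presence of some substructure; two molecules sharing many substructures share many bits, which is what makes such vectors useful as ML inputs.</p>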



<p>Sun et al. showed the best combination of programming language and ML algorithm obtained using Hybridization fingerprints of 1024 bits and RF, to achieve a prediction accuracy of 81.76 percent; where Hybridization fingerprints represented SP2 hybridization states of molecules. When the fingerprint length increased from 166 to 1024 bits, the performance of all ML models improved since longer fingerprints included more chemical information.</p>
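<p>The best-performing setup described above, a random forest over 1024-bit fingerprint vectors, can be sketched as follows. The fingerprints and PCE class labels here are random stand-ins (with a few artificially informative bits so the model has something to learn), so the resulting accuracy is not comparable to the paper&#8217;s 81.76 percent:</p>

```python
# Sketch: random forest classifier over 1024-bit fingerprint vectors.
# Data is synthetic; the label depends on 20 "informative" bits.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_molecules, n_bits = 500, 1024
X = rng.integers(0, 2, size=(n_molecules, n_bits))
# label depends on a handful of bits, mimicking informative substructures
y = (X[:, :20].sum(axis=1) > 10).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```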



<p>To test the reliability of the ML models, Sun et al. synthesized 10 new OPV donor molecules. They then used three representative fingerprints to express the chemical structure of the new molecules and compared the results predicted by the RF model with the experimental PCE values. The model correctly classified eight of the 10 molecules. The results indicated the potential of the synthetic materials for OPV applications, with additional experimental optimization for two of the new materials. A minor change in structure can cause a large difference in PCE values; encouragingly, the ML models identified such minor modifications and produced favorable prediction results.</p>



<p>In this way, Wenbo Sun and colleagues used a literature database on OPV donor materials and a variety of programming language expressions (images, ASCII strings, descriptors and molecular fingerprints) to build ML models and predict the corresponding OPV PCE class. The team demonstrated a scheme to design OPV donor materials using ML approaches and experimental analysis. They prescreened a large number of donor materials using the ML model to identify leading candidates for synthesis and further experiments. The new work can speed up new donor material design to accelerate the development of high PCE OPVs. The use of ML in conjunction with experiments will progress materials discovery.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-assisted-molecular-design-for-high-performance-organic-photovoltaic-materials/">Machine learning-assisted molecular design for high-performance organic photovoltaic materials</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-assisted-molecular-design-for-high-performance-organic-photovoltaic-materials/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Economics of AI, Design Thinking, and Data Science for Smart Healthcare</title>
		<link>https://www.aiuniverse.xyz/economics-of-ai-design-thinking-and-data-science-for-smart-healthcare/</link>
					<comments>https://www.aiuniverse.xyz/economics-of-ai-design-thinking-and-data-science-for-smart-healthcare/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 18 Nov 2019 06:10:03 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[Global IT]]></category>
		<category><![CDATA[software development]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5242</guid>

					<description><![CDATA[<p>Source:-ehealth.eletsonline.com Health systems are multi-faceted and continually changing across a variety of contexts and health service levels. For example, one of the critical challenges of the <a class="read-more-link" href="https://www.aiuniverse.xyz/economics-of-ai-design-thinking-and-data-science-for-smart-healthcare/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/economics-of-ai-design-thinking-and-data-science-for-smart-healthcare/">Economics of AI, Design Thinking, and Data Science for Smart Healthcare</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source:-ehealth.eletsonline.com<br></p>



<p>Health systems are multi-faceted and continually changing across a variety of contexts and health service levels. For example, one of the critical challenges facing resource-deficient public health infrastructures worldwide is the spread of communicable diseases.</p>



<p>As seen during the outbreaks of fatal communicable diseases like Severe Acute Respiratory Syndrome (SARS) in 2003, H1N1 in 2009, the Zika virus in 2016, Ebola and Middle East respiratory syndrome (MERS) in 2014, and the Nipah virus in 2018, infectious diseases can spread rapidly within countries as well as across national borders.</p>



<p>Artificial intelligence (AI) has been making its way into the healthcare sector, presenting a variety of possibilities in disease diagnosis, treatment, and prevention. The adoption of artificial intelligence in the healthcare sector is growing substantially. Another major problem plaguing India’s premier public hospitals is overcrowding: it takes months for even those with terminal diseases like cancer to get started with their treatment.</p>



<p>AI would allow practitioners to easily and securely access patients’ medical data, and then to understand and analyse their illnesses in a timely manner. Successful implementation of AI can provide major relief for these issues.</p>



<p><strong>AI IN HEALTHCARE</strong></p>



<p>The most critical and fundamental step in initialising artificial intelligence, for the CEO of a company, is to start thinking like a data scientist. This mindset is quite different from the thought process of a doctor or medical professional. The primary requirement, therefore, is to employ an analytical data expert, a blend of mathematician, computer scientist and trend-spotter, with the curiosity to explore the problem that needs to be solved, so as to make the organisation AI-enabled. The basic requirement for successfully implementing AI for smarter healthcare is the availability of adequate digitised data.</p>



<p>The primary objective in the health sector is the successful diagnosis of the ailment, in which AI can play a significant role. For the analytics of the symptoms, the primary data is the medical record that patients present at the first level of data documentation.</p>



<p>For a developing and aspiring country like India, which seeks to extend its reach to the world while also looking inward at the challenges of its own medical system, it is critical that patients’ medical records are digitised.</p>



<p>Towards this, the first steps have been taken by MeitY, the Ministry of Electronics and Information Technology. However, MeitY is facing serious challenges in its efforts to adopt an Electronic Health Records (EHR) system, a digital database of every Indian’s medical record that can be accessed by all doctors and hospitals.</p>



<p>The major challenges to cope with include infrastructure creation, standards and interoperability, research and development, and the legal and policy framework that forms the foundation for mandating it in governance. A lack of this fundamental infrastructure would make it nearly impossible for a developing nation to establish AI in its true sense. There is also a need to start a debate on the degree of AI involvement appropriate for various health segments; not all segments require the same level of AI infusion as the most critical ones do.</p>



<p>The field of predictive analytics holds increasing promise for helping clinicians diagnose and treat patients. Machine-learning models can be trained to find patterns in patient data to aid in sepsis care, design safer chemotherapy procedures, and predict a patient’s risk of having cancer. The accuracy of decision-making is vital in health, as it builds the patient-doctor trust that forms the cornerstone for any segment of the health sector to flourish. The infusion of AI into the segment needs to be examined cautiously, so that it does not erode this trust.</p>



<p><strong>THE ECONOMICS OF AI IN HEALTHCARE</strong></p>



<p>AI is tightly linked with strong health sector business analytics, which in turn has strong linkages with big data generation and big data analytics in the sector. Based on our evaluation, we strongly suggest that this realisation needs to be market-driven for the health sector to flourish; as seen in most government-run initiatives, beyond a certain load the services are unable to cope. AI will eventually settle into providing healthcare to all financial segments of Indian citizens, as it will optimise the cost of medical tests, cut expensive surgeries and eventually balance out the demand and supply chain. This technology will act as a catalyst for process optimisation that would otherwise have taken many years to achieve. Implementation of AI will also lead to economic advantages in overall hospital care, clinical research, drug development and even insurance. AI applications are revolutionising how the health sector works to reduce spending and improve patient outcomes. The health sector is the only sector where all 34 disruptive technologies need to be synchronised to achieve the eventual goal. A few of these technologies meet the governmental legal and policy framework, whereas most are self-sustaining in the business environment.</p>



<p><strong>HEALTH SEGMENT AI DATA MONETISATION VALUE CHAIN</strong></p>



<p>AI has started taking centre stage because its benefits are visible in the public sector, it has a positive effect on the population, and its advances could visibly reduce data-privacy issues in electronic medical records. Still, this convergence requires political and governmental support.</p>



<p>AI data monetisation will become a vast business opportunity and a critical pillar of structured AI in the medical field. If history is the guiding principle, the monetisation of medical big data could see free medical scanning offered to patients simply to generate databases. This industry, for which the seeds have only just been sown, is expected to form a vibrant ecosystem in another two to three years. The day is near when government hospitals will have counters that provide your complete health records, which will become the fodder for AI diagnostic machinery.</p>



<p>In five to six years, AI may assist doctors by providing assured diagnoses to patients. In seven to eight years we may have Alexa-like devices that advise users, based on health and photo inputs, about impending diseases that may affect them. The ecosystem needed for AI-driven e-health is vast and complex because of the element of trust between doctors and patients. The three key areas of investment for AI in healthcare would include:</p>



<p>• Digitisation: Using AI to reduce cost and time and hence increase efficiency.</p>



<p>• Engagement: Improving how patients and consumers interact with healthcare providers, systems and services.</p>



<p>• Diagnostics: Developing efficient tools, products and services for timely and accurate diagnoses and health advice.</p>



<p><strong>DIGITAL TRANSFORMATION AND AI ADVANTAGES</strong></p>



<p>There is an increasing need for healthcare establishments to implement solutions that improve treatment outcomes, manage rising costs and navigate the demands confronting the sprawling healthcare system. Some of the potential applications and advantages of AI in the health sector include:</p>



<p><strong>DIGITAL CONSULTATION</strong></p>



<p>Many times, in situations like calamities, overcrowding or difficult geography, access to a physician becomes nearly impossible. In these conditions an application can give a primary medical consultation based on personal medical history and common medical knowledge. Users feed their symptoms into the application, which can accept them through speech recognition and compare them against a database of illnesses to give a primary consultation.</p>
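<p>The symptom-matching flow described above can be sketched in a few lines of Python. This is an illustrative toy only: the condition database, symptom names and scoring rule (Jaccard overlap) are placeholders rather than clinical data, and a real system would sit behind a validated medical knowledge base and a speech-recognition front end.</p>

```python
# Hypothetical illness database: condition -> set of known symptoms.
CONDITIONS = {
    "common cold": {"runny nose", "sneezing", "sore throat", "cough"},
    "influenza": {"fever", "cough", "body ache", "fatigue"},
    "allergic rhinitis": {"runny nose", "sneezing", "itchy eyes"},
}

def rank_conditions(reported_symptoms):
    """Rank conditions by Jaccard overlap with the reported symptoms."""
    reported = set(reported_symptoms)
    scores = {}
    for name, known in CONDITIONS.items():
        overlap = len(reported & known)
        union = len(reported | known)
        scores[name] = overlap / union if union else 0.0
    # Highest-scoring condition first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_conditions(["runny nose", "sneezing", "itchy eyes"])
```

<p>The top-ranked entries would then be presented to the user as a primary, non-binding consultation.</p>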



<p><strong>MANAGEMENT OF MEDICAL RECORDS</strong></p>



<p>The first step for effective implementation of AI in healthcare is compiling and analysing information (such as medical records and other history). Data management is likely to be the most widely used application of artificial intelligence and digital automation, as it reduces the time consumed by mundane work. Machines can accumulate, store and trace data to provide faster and more reliable access.</p>



<p><strong>FAST &amp; ACCURATE DIAGNOSTICS</strong></p>



<p>Because AI systems can learn from previous cases, store that knowledge and access stored knowledge from anywhere around the globe, they will play a very significant role in diagnostics.</p>



<p>It has been scientifically demonstrated that artificial neural networks can diagnose diseases such as eye problems and malignant melanoma quickly and accurately.</p>



<p>In the medical field, cancer is one of the major challenges, and early detection plays an important role. AI has the potential to diagnose diseases and illnesses through deep learning. For example, many medical facilities are shifting to Digital Breast Tomosynthesis (DBT) as the preferred technology for screening and diagnostic mammography, in order to detect and diagnose early-stage breast cancer in women.</p>



<p><strong>PERFORMING REPETITIVE TASKS</strong></p>



<p>Analysing lab reports, X-rays and CT scans, data entry, and other routine tasks can all be done faster, more accurately and without bias by trained machines. Cardiology and radiology are two disciplines where the amount of data to analyse can be vast, and the analysis both time consuming and time critical. Proficient use of AI would let doctors spend more of their time on patient interactions and treatment.</p>



<p><strong>DRUG RESEARCH AND DEVELOPMENT</strong></p>



<p>Developing a pharmaceutical through clinical trials can take more than a decade and cost billions of dollars. If the whole process of research and development is assisted by an AI system, it can become faster and cheaper. For example, during the Ebola virus outbreak, an AI programme was used to scan existing medicines that could be redesigned to fight the disease. The programme identified two medications that could reduce Ebola infectivity in a single day.</p>



<p><strong>GENETIC MEDICATION</strong></p>



<p>Genetics and genomics look for mutations and links to disease in the information in DNA. If a database of family genetic history for hereditary diseases is created and digitised, then with the appropriate AI algorithm, early diagnosis and treatment of ailments like cancer and physiological and vascular diseases becomes possible.</p>



<p><strong>HEALTHCARE SYSTEM ANALYSIS</strong></p>



<p>The growth of computational power has led to a substantial increase in the amount and granularity of stored digital medical and healthcare data. The ability of AI to quickly analyse huge volumes of this data and create meaningful, actionable insights will have weighty effects on how healthcare is delivered and received. AI can be used to scrutinise the data and highlight mistakes in treatments, workflow inefficiencies, billing errors, hospital administration issues and unnecessary patient hospitalisations, increasing the efficiency of the healthcare system. It can also help doctors work more reasonable hours by optimising shift scheduling with data, and measure physician satisfaction to further improve the system.</p>



<p><strong>REDUCE HUMAN ERRORS</strong></p>



<p>AI may be most effective at reducing human error. After analysing the same routine scenario many times, humans can become biased, leading to errors at the diagnosis stage. Emotional trauma or stress on the doctor’s side can likewise cause errors in diagnosis and treatment. AI could be the best assistant here, as it would monitor the whole procedure and greatly reduce the effect of such stressful situations.</p>



<p><strong>GOING FORWARD</strong></p>



<p>Humans are complex bio-chemical machines; having evolved from single-celled organisms over a billion years, our complexity is hard to overstate. Even a sophisticated AI with advanced computing power will take at least a generation to understand this bio-chemical machine down to the molecular level.</p>



<p>AI has started taking baby steps, for instance by learning the interdependence of the artworks of famous artists. Algorithms of this kind will eventually form the symphony needed to understand the human bio-chemical machine. Policy-making bodies need to understand that the effort put into such research will have negligible benefits in the present context, but will eventually have an exponential effect on eradicating diseases at the genetic and molecular level, with maximum economic benefit.</p>



<p>From hospital care to clinical research, drug development and insurance, AI applications would revolutionise how the health sector works to reduce spending and improve patient outcomes. These benefits will accrue incrementally, from automated operations, precision surgery and preventive intervention. Healthcare providers need to have confidence in the algorithms before they use them, and that often means they want to see clinical validation. Many will remain sceptical about adopting AI tools until a large body of proof verifies their outcomes.</p>



<p>There is reluctance on the patient side, too: around one-fourth of consumers surveyed by a leading IT multinational said that they would not use AI-powered health services, citing concerns about the technology that ranged from not understanding enough about how AI works to worrying that the technology might not understand them.</p>



<p>To overcome the fears and concerns associated with AI, it is imperative that providers implement these new, innovative technologies effectively, first by carefully considering and researching the right solution. Providers should then invest time in understanding how their systems capture and collect data, in order to analyse it and check for errors.</p>



<p>With artificial intelligence, patients can get a doctor’s assistance without visiting hospitals or clinics, which cuts costs. AI assistants provide online care and help patients add their data more frequently, for example via online medical records. The ability of AI to sift through large amounts of data can help hospital administrators optimise performance and improve the use of existing resources, saving cost and time.</p>



<p>Such algorithms can ingest years of electronic health record data and apply data science and AI to learn how best to manage expensive, constrained resources (infusion chairs, operating rooms, imaging equipment, inpatient beds and more) to improve patient access, decrease wait times and reduce healthcare delivery costs.</p>



<p>Physicians and healthcare organisations face big challenges today. Physician burnout is increasing, and its repercussions are expensive. Rather than replacing human clinical judgement, artificial intelligence will augment clinical intelligence to a scale we may not imagine today, economically benefiting patients as well as the health sector, which is the prime requirement of a developing country like India.</p>



<p>(The article has been written by Diana Yohannan, AI and big data analyst and technology enthusiast, and Pratyush Mathur, technology and business analytics enthusiast, Narsee Monjee Institute of Management Studies. Views expressed are personal opinion.)</p>
<p>The post <a href="https://www.aiuniverse.xyz/economics-of-ai-design-thinking-and-data-science-for-smart-healthcare/">Economics of AI, Design Thinking, and Data Science for Smart Healthcare</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/economics-of-ai-design-thinking-and-data-science-for-smart-healthcare/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How we can use Deep Learning with Small Data? – Thought Leaders</title>
		<link>https://www.aiuniverse.xyz/how-we-can-use-deep-learning-with-small-data-thought-leaders/</link>
					<comments>https://www.aiuniverse.xyz/how-we-can-use-deep-learning-with-small-data-thought-leaders/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 18 Nov 2019 06:03:56 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Global IT]]></category>
		<category><![CDATA[Machine intelligence]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5239</guid>

					<description><![CDATA[<p>When it comes to keeping up with emerging cybersecurity trends, the process of staying on top of any recent developments can get quite tedious since there’s a <a class="read-more-link" href="https://www.aiuniverse.xyz/how-we-can-use-deep-learning-with-small-data-thought-leaders/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-we-can-use-deep-learning-with-small-data-thought-leaders/">How we can use Deep Learning with Small Data? – Thought Leaders</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[



<p>When it comes to keeping up with emerging cybersecurity trends, staying on top of recent developments can get quite tedious, since there is a lot of news to follow. These days, however, the situation has changed dramatically: the cybersecurity realm seems to revolve around two words, deep learning.</p>



<p>Although we were initially taken aback by the massive coverage deep learning was receiving, it quickly became apparent that the buzz was well earned. In a fashion loosely analogous to the human brain, deep learning enables an AI model to achieve highly accurate results by learning to perform tasks directly from text, images and audio.</p>



<p>Until now, it was widely believed that deep learning requires huge datasets, on the scale housed by Silicon Valley giants such as Google and Facebook, to solve the most complicated problems within an organization. Contrary to popular belief, however, enterprises can harness the power of deep learning even with access to a limited data pool.</p>



<p>In an attempt to aid our readers with the necessary knowledge to equip their organization with deep learning, we’ve compiled an article that dives deep (no pun intended) into some of the ways in which enterprises can utilize the benefits of deep learning in spite of having access to limited, or ‘small’ data.</p>



<p>But before we get into the meat of the article, we’d like to make a small but highly essential suggestion: start simple. Before you start formulating neural networks complex enough to feature in a sci-fi movie, experiment with a few simple, conventional models (e.g. a random forest) to get the hang of the software.</p>
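<p>The "start simple" advice above can be made concrete with scikit-learn. The sketch below fits a random forest baseline on a tiny synthetic two-class dataset; the data and parameters are illustrative, not drawn from the article.</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Two well-separated 2-D Gaussian clusters, one per class
X = np.vstack([rng.normal(0.0, 0.5, (100, 2)),
               rng.normal(3.0, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)  # held-out accuracy of the baseline
```

<p>A baseline score like this gives you a concrete number that any later, more complex deep model has to beat.</p>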



<p>With that out of the way, let’s get straight into some of the ways in which enterprises can amalgamate the deep learning technology while having access to limited data.</p>



<p><strong>#1- Fine-tuning the baseline model:</strong></p>



<p>As we’ve already mentioned, the first step enterprises need to take after formulating a simple baseline deep learning model is to fine-tune it for the particular problem at hand.</p>



<p>However, fine-tuning a baseline model sounds more difficult on paper than it actually is. The fundamental idea is simple: you take a model trained on a large dataset that bears some resemblance to the domain you operate in, and then fine-tune its details with your own limited data.</p>



<p>As far as obtaining the large dataset is concerned, enterprises can rely on ImageNet, which also provides an easy fix for many image-classification problems. The ImageNet dataset gives organizations access to millions of images, divided across multiple classes, which can be useful to enterprises from a wide variety of domains.</p>



<p>If the process of fine-tuning a pre-trained model to suit the specific needs of your organization still seems like too much work, we’d recommend getting help from the internet, since a simple Google search will turn up hundreds of tutorials on fine-tuning.</p>
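<p>One common, minimal form of the fine-tuning described above is to freeze the pre-trained layers and train only a new final layer on your small dataset. In the numpy sketch below, a fixed random projection stands in for the frozen, ImageNet-style feature extractor (purely illustrative), and only the new logistic head is trained on sixty labelled points.</p>

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen "pre-trained" feature extractor: a fixed projection plus ReLU.
# (A stand-in for real pre-trained weights, for illustration only.)
W_frozen = rng.normal(size=(4, 8))
def features(x):
    return np.maximum(x @ W_frozen, 0.0)

# Small labelled dataset: two separable classes in 4-D
X = np.vstack([rng.normal(-1.0, 0.3, (30, 4)),
               rng.normal(1.0, 0.3, (30, 4))])
y = np.array([0] * 30 + [1] * 30)

# Train only the new head (logistic regression) by gradient descent;
# the frozen layers are never updated.
F = features(X)
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid predictions
    grad = p - y                             # gradient of logistic loss
    w -= 0.1 * F.T @ grad / len(y)
    b -= 0.1 * grad.mean()

accuracy = (((1.0 / (1.0 + np.exp(-(F @ w + b)))) > 0.5) == y).mean()
```

<p>Because only the small head is trained, the limited dataset is enough; the heavy lifting was done when the frozen features were learned.</p>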



<p><strong>#2- Collect more data:</strong></p>



<p>Although the second point on our list might seem redundant to some of our more cynical readers, the fact of the matter remains: when it comes to deep learning, the larger your dataset, the more likely you are to achieve accurate results.</p>



<p>Although the premise of this article is working with a limited dataset, we have too often had the displeasure of encountering higher-ups who treat investing in data collection as equivalent to committing a cardinal sin.</p>



<p>All too often, businesses overlook the benefits of deep learning simply because they are reluctant to invest time and effort in gathering data. If your enterprise is unsure how much data needs to be collected, we’d suggest plotting learning curves: integrate additional data into the model in increments and observe the change in model performance.</p>
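<p>The learning-curve check suggested above can be sketched as follows: train on progressively larger slices of the training data and watch held-out accuracy. A curve that is still rising suggests more data will help; a flat one suggests it will not. The nearest-centroid classifier and synthetic data below are illustrative stand-ins for your own model and dataset.</p>

```python
import numpy as np

rng = np.random.default_rng(2)
# Two overlapping 2-D Gaussian clusters, one per class
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
               rng.normal(2.5, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Shuffle, then hold out a fixed test set of 100 points
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]
X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]

def centroid_accuracy(n):
    """Train a nearest-centroid classifier on the first n points."""
    c0 = X_tr[:n][y_tr[:n] == 0].mean(axis=0)
    c1 = X_tr[:n][y_tr[:n] == 1].mean(axis=0)
    d0 = np.linalg.norm(X_te - c0, axis=1)
    d1 = np.linalg.norm(X_te - c1, axis=1)
    return ((d1 < d0).astype(int) == y_te).mean()

# Held-out accuracy at increasing training-set sizes
curve = [(n, centroid_accuracy(n)) for n in (20, 60, 150, 300)]
```

<p>Plotting <code>curve</code> shows at a glance whether accuracy is still climbing as data is added.</p>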



<p>Contrary to the popular belief held by many CSOs and CISOs, sometimes the best way to solve a problem is to collect more relevant data. The role of the CSO and CISO is extremely important here because there is a constant threat of cyber-attacks; total global spending on cybersecurity in 2019 reached an estimated $103.1 billion, and the number continues to rise. To put this into perspective, consider a simple example: imagine you are trying to classify rare diamonds but have access to a very limited dataset. As the most obvious solution dictates, instead of having a field day with the baseline model, just collect more data!</p>



<p><strong>#3- Data Augmentation:</strong></p>



<p>Although the first two points discussed above are both highly effective at solving most problems around implementing deep learning with a small dataset, they rely on a certain amount of luck to get the job done.</p>



<p>If you’re unable to have any success with fine-tuning a pre-existing model either, we’d recommend trying data augmentation. The way data augmentation works is simple: the input dataset is altered, or augmented, in such a way that it yields new training examples without actually changing the label value.</p>



<p>To put the idea of data augmentation into perspective, consider a picture of a dog. When the image is rotated, the viewer can still tell it is a dog, which is exactly what good data augmentation aims for. By contrast, rotating an image of a road changes the angle of elevation in a way that can lead the deep learning algorithm to an incorrect conclusion, defeating the purpose of implementing deep learning in the first place.</p>



<p>When it comes to image classification, data augmentation is a key player, offering a variety of techniques that help the deep learning model gain an in-depth understanding of the different classes of images.</p>
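<p>For image data, the label-preserving transforms discussed above can be as simple as flips and rotations. A minimal numpy sketch (the 4x4 "image" is illustrative):</p>

```python
import numpy as np

def augment(image):
    """Return label-preserving variants of a 2-D image array."""
    return [
        np.fliplr(image),       # horizontal flip
        np.flipud(image),       # vertical flip
        np.rot90(image, k=1),   # 90-degree rotation
        np.rot90(image, k=2),   # 180-degree rotation
    ]

image = np.arange(16, dtype=float).reshape(4, 4)
variants = augment(image)  # four new training examples, same label
```

<p>Each variant is fed to the model with the original label, effectively multiplying the size of the training set.</p>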



<p>Moreover, when it comes to augmenting data, the possibilities are virtually endless. Enterprises can implement data augmentation in a variety of ways, including in NLP and through experimentation with GANs, which enable the algorithm to generate new data.</p>



<p><strong>#4- Implementing an ensemble effect:</strong></p>



<p>Deep learning networks are built from multiple layers. However, contrary to popular belief, rather than viewing the layers purely as an ever-increasing hierarchy of features, the final layer can be seen as offering an ensemble mechanism over the features below it.</p>



<p>The belief that enterprises with access to a limited or smaller dataset should opt to build their networks deep was also shared in a NIPS paper, which mirrors the view expressed above. Enterprises with small data can exploit the ensemble effect to their advantage simply by building their deep learning networks deep, through fine-tuning or some other alternative.</p>
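<p>The ensemble effect itself is easy to demonstrate: averaging several independently-noisy predictors of the same quantity yields a lower error than a typical single predictor. The numpy sketch below uses an illustrative target and noise model, not a real trained ensemble.</p>

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 200)
target = np.sin(2 * np.pi * x)  # the quantity all "models" try to predict

# Ten stand-in "models": the target plus independent noise per model
predictions = [target + rng.normal(0, 0.5, size=x.shape) for _ in range(10)]

# Average error of a single model vs. error of the averaged ensemble
individual_mse = np.mean([np.mean((p - target) ** 2) for p in predictions])
ensemble_mse = np.mean((np.mean(predictions, axis=0) - target) ** 2)
```

<p>With independent errors, averaging k models divides the noise variance by roughly k, which is the intuition behind treating the final layer as an ensemble.</p>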



<p><strong>#5- Incorporating autoencoders:</strong></p>



<p>Although the fifth point we’ve taken into consideration has seen only relative success, we are still on board with using autoencoders to pre-train a network and initialize it properly.</p>



<p>Apart from cyber-attacks, one of the biggest reasons enterprises fail to clear the initial hurdles of integrating deep learning is bad initialization and its many pitfalls, which often leads to poor or incorrect execution of the technology. This is where unsupervised pre-training with autoencoders can shine.</p>



<p>The fundamental notion behind an autoencoder is a neural network trained to reproduce the dataset being fed into it. If you are unsure of how to use an autoencoder, there are several tutorials online that give clear-cut instructions.</p>
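<p>A minimal numpy sketch of that notion: a linear, single-hidden-layer autoencoder trained by gradient descent to reproduce its input, whose learned weights could then serve as an initialisation for a larger network. The data, layer sizes and learning rate are all illustrative.</p>

```python
import numpy as np

rng = np.random.default_rng(4)

# 100 samples of 8-D data that actually lies on a 3-D subspace
basis = rng.normal(size=(3, 8))
X = rng.normal(size=(100, 3)) @ basis

# Encoder/decoder weights: 8 -> 3 -> 8 (the 3-D bottleneck)
W_enc = rng.normal(0, 0.1, (8, 3))
W_dec = rng.normal(0, 0.1, (3, 8))

def reconstruction_error():
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

initial_error = reconstruction_error()
lr = 0.01
for _ in range(500):
    H = X @ W_enc                 # encode
    R = H @ W_dec                 # decode (reconstruct)
    G = 2.0 * (R - X) / len(X)    # gradient of loss w.r.t. reconstruction
    grad_dec = H.T @ G
    grad_enc = X.T @ (G @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
final_error = reconstruction_error()
```

<p>After training, <code>W_enc</code> captures the structure of the data and could be copied in as the first layer of a supervised network instead of a random initialisation.</p>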
<p>The post <a href="https://www.aiuniverse.xyz/how-we-can-use-deep-learning-with-small-data-thought-leaders/">How we can use Deep Learning with Small Data? – Thought Leaders</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-we-can-use-deep-learning-with-small-data-thought-leaders/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Our personal data needs protecting from Big Tech</title>
		<link>https://www.aiuniverse.xyz/our-personal-data-needs-protecting-from-big-tech/</link>
					<comments>https://www.aiuniverse.xyz/our-personal-data-needs-protecting-from-big-tech/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 18 Nov 2019 05:46:12 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[Global IT]]></category>
		<category><![CDATA[IT technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5233</guid>

					<description><![CDATA[<p>Source:-ft.com Big data, AI and the promises of other new technology-enabled possibilities are being talked about all the time. For many of us in the supply chain, <a class="read-more-link" href="https://www.aiuniverse.xyz/our-personal-data-needs-protecting-from-big-tech/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/our-personal-data-needs-protecting-from-big-tech/">Our personal data needs protecting from Big Tech</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: ft.com</p>



<p>Big data, AI and the promise of other new technology-enabled possibilities are talked about all the time. For many of us in the supply chain, logistics and transportation industry, it is hard to imagine the efficiencies this technology will bring, as we are still trying to understand how best to compile Excel reports and make sense of it all.</p>



<p>In the age of IoT, the connectivity of devices is rapidly progressing. In the case of logistics, the end-to-end tracking of cargo in real-time is now a reality. What was once a ship-and-pray-it-arrives strategy has been transformed into an understanding of real-time cargo movements. We can now understand the exact location of our cargo on its route to the end-consumer, which gives us an enormous wealth of data for analysis, thereby opening up the entire shipping process to new optimizations and adjustments.</p>



<p>As you know, logistics is complicated, and simply being able to track the movement of your goods does not show the full picture. The coordination of goods movements is only one facet of a very complex chain. Freight procurement is a critical node in the supply chain and sits on an invaluable set of data that is increasingly becoming the epicenter of an efficient and innovative supply chain strategy.</p>



<p>All these areas produce massive amounts of data. However, the mere availability of data does not equate to an instant understanding of the challenges professionals face day in and day out, and it definitely does not point to any obvious solutions.</p>



<p>Data on its own is worthless. A few years ago, every industry was busy throwing the Big Data term around. So you have all this data, in all shapes and forms. So what? What are you doing with it? Do you know what to do with it? Do you have the right tools and the right people on board to appreciate the value of the data you have collected? How are you connecting all this data to provide value to your business? The industry is complex, not least the ocean shipping market.</p>



<p>Each mode of transportation has unique pricing strategies, contract management and regulatory constraints. While as an industry we can appreciate all this in general terms, the complexity and nuances can be mind-blowing. Adding to the challenge, enterprises may be organized as separate entities, and most often use different software and technology stacks. Nowadays there is a system for everything: TMS, quoting tools, industry-specific ERPs, supply chain analytical tools, not to mention the myriad static reports on volume, transit times, capacity, and so on. So, in a connected world with a wealth of data waiting for analysis, can we progress without first breaking down the classic data-silo problem?</p>



<p>Logistics is not alone in this. Nearly every industry that operates complex, siloed processes experiences this challenge. The promising news in logistics is the commitment of leadership to embracing digitalization and modernization to evolve their supply chains. Data sources are complementary: we don’t need to reinvent the wheel. We must work together and connect not just devices, but information.</p>



<p>Our industry got so busy creating systems to crunch, analyze and make sense of all sorts of data that we forgot to ask whether the end user could actually manage and make sense of all the data and all the disparate tools.</p>



<p>At Xeneta, we strive to support the modernization efforts of industry leaders in logistics. Our company was built on the promise of making data accessible, breaking down silos across the entire industry and providing data transparency in an otherwise opaque world. We offer real-time market freight-rate data, with the largest coverage of trade-lanes in the industry. We are able to do this by providing one common platform that unites all players in the industry: shippers, freight forwarders and carriers. The goal of modernization of the supply chain is not in the distant future – we are experiencing the promises of sharing data in the logistics industry today.</p>



<p>It is our collective responsibility to educate ourselves and embrace the value of opening up data to unlock intelligence. At Xeneta, we are contributing to the future of the supply chain with real-time ocean and air freight rate data. The journey to complete optimization of the end-to-end supply chain, from real-time tracking to real-time cost analysis of all modes of transportation, may be a bit further down the road, but we are excited to be a part of this journey with customers and partners.</p>
<p>The post <a href="https://www.aiuniverse.xyz/our-personal-data-needs-protecting-from-big-tech/">Our personal data needs protecting from Big Tech</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/our-personal-data-needs-protecting-from-big-tech/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Why Health Data is the Next in Big Data Analytics</title>
		<link>https://www.aiuniverse.xyz/why-health-data-is-the-next-in-big-data-analytics/</link>
					<comments>https://www.aiuniverse.xyz/why-health-data-is-the-next-in-big-data-analytics/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 16 Nov 2019 05:29:37 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[Global IT]]></category>
		<category><![CDATA[Google Cloud]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5200</guid>

					<description><![CDATA[<p>Source:-cxotoday.com “Data is the new oil. It’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc to <a class="read-more-link" href="https://www.aiuniverse.xyz/why-health-data-is-the-next-in-big-data-analytics/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/why-health-data-is-the-next-in-big-data-analytics/">Why Health Data is the Next in Big Data Analytics</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: cxotoday.com</p>



<p>“Data is the new oil. It’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc to create a valuable entity that drives profitable activity; so must data be broken down and analysed for it to have value.”</p>



<p>This comment by British mathematician and data science entrepreneur Clive Humby is what drives technology behemoths in the modern digital world, where enterprises are constantly seeking to help customers make choices based on their past preferences, and brands are not averse to paying a king’s ransom to those who can crack this code.</p>



<p>Small wonder that Google recently gobbled up Fitbit for a whopping USD 2.1 billion, ostensibly to move into the wearables segment, where its closest competitor Apple has opened up a big lead. However, the real reason lies elsewhere. Not just Google: Apple and Facebook have also been on the prowl for health data, given that healthcare is considered the next big boom sector.</p>



<p>A report published in The Wall Street Journal describes details of Project Nightingale, which Google has been running below the radar with America’s second-largest healthcare system, Ascension. Under the project, Google has access to the personal health data of millions of patients so that its Cloud division can use it to develop AI-based services for medical providers.</p>



<p>However, Google is not alone in this quest. The article suggests that Amazon, Apple and Microsoft are also aggressively seeking healthcare data, though they haven’t yet managed to strike deals like the one Google has with a healthcare system. What’s interesting is that Google claims it is operating the project as a business associate of Ascension, a status that keeps the health data within legal limits.</p>



<p>With the Indian healthcare sector witnessing several start-ups of late, there is no doubt that data-mining and analytics will soon become an all-pervading business option, which means that our medical records aren’t safe either, more so since there are no real safety mechanisms comparable to the HIPAA (Health Insurance Portability and Accountability Act) guidelines in the United States.</p>



<p>Sure enough, the United States finds Google’s curiosity a bit too much to handle. A report published by CNET.com says the Office for Civil Rights in the Department of Health and Human Services is opening an inquiry into Project Nightingale, seeking information about this mass collection of individuals’ medical records to ensure that HIPAA protections are fully implemented.</p>



<p>On its part, Google acknowledged the probe in a blog post. Tariq Shaukat, President, Industry Products and Solutions, Google Cloud, recalls that during the company’s Q2 earnings call, it was mentioned that Google Cloud’s AI and ML solutions were helping healthcare organizations like Ascension improve the quality of delivery and outcomes.</p>



<p>“Our work with Ascension is exactly that—a business arrangement to help a provider with the latest technology, similar to the work we do with <a href="https://cloud.google.com/customers/american-cancer-society/">dozens of other healthcare providers</a>. These organizations, like Ascension, use Google to securely manage their patient data, under strict privacy and security standards. They are the stewards of the data, and we provide services on their behalf.”</p>



<p>In other words, Google appears to be saying that the data stays with Ascension, which uses the G Suite productivity tools to improve and enhance healthcare delivery via doctors and nurses. The blog also takes pains to explain that the data acquisition is HIPAA compliant and that Ascension’s data cannot be used for any purpose other than providing the services Google is offering under the agreement with its business partner. “Patient data cannot and will not be combined with any Google consumer data,” says Tariq Shaukat in the blog.</p>



<p>However, this isn’t something the industry is ready to accept immediately. TechCrunch.com quotes Mark Rothstein, a bio-ethicist and public health law scholar at the University of Louisville, as suggesting that while Google is well within the HIPAA guidelines in working with patient data, the fact that it is using names and birthdates instead of a unique identifier isn’t kosher.</p>



<p>“The fact that this data is individually identifiable suggests that there’s an ultimate use where a person’s identity is going to be important,” says Rothstein. Which means Google will have to build a wall of anonymity around the data before it even considers using it to develop machine learning models that could then be sold to other interested enterprises.</p>



<p>Healthcare providers are right to mine data to develop personalised care and to seek patterns that help detect medical conditions before the patient starts developing symptoms, but the trouble is that companies seeking such data from healthcare providers aren’t transparent about what else they could be using this data for.</p>



<p>In fact, Google itself had entered into a 10-year research partnership with Mayo Clinic, whereby the latter’s medical records were shifted to Google Cloud. Reporting on this move, Wired.com had wondered if the desire to bring AI into healthcare could eventually turn into a patient’s nightmare, as she gets swamped with offers and treatments instead of a single doctor’s care.</p>



<p>At this point in time, the move appears unprecedented, though a tad creepy. As Clive Humby said, data is useful only if it is broken down and analysed. The challenge, though, is to understand whether enterprises plan to use it to improve healthcare or line their pockets further.</p>
<p>The post <a href="https://www.aiuniverse.xyz/why-health-data-is-the-next-in-big-data-analytics/">Why Health Data is the Next in Big Data Analytics</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/why-health-data-is-the-next-in-big-data-analytics/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
