<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>data privacy Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/data-privacy/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/data-privacy/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Wed, 10 Jul 2024 07:02:20 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>What are the ethical considerations for the widespread use of generative AI?</title>
		<link>https://www.aiuniverse.xyz/what-are-the-ethical-considerations-for-the-widespread-use-of-generative-ai/</link>
					<comments>https://www.aiuniverse.xyz/what-are-the-ethical-considerations-for-the-widespread-use-of-generative-ai/#respond</comments>
		
		<dc:creator><![CDATA[Maruti Kr.]]></dc:creator>
		<pubDate>Wed, 10 Jul 2024 07:02:18 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Accountability]]></category>
		<category><![CDATA[AI ethics]]></category>
		<category><![CDATA[Bias and Fairness]]></category>
		<category><![CDATA[data privacy]]></category>
		<category><![CDATA[Human Autonomy]]></category>
		<category><![CDATA[Intellectual Property]]></category>
		<category><![CDATA[Job Displacement]]></category>
		<category><![CDATA[Misinformation]]></category>
		<category><![CDATA[Regulation]]></category>
		<category><![CDATA[Transparency]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=18973</guid>

					<description><![CDATA[<p>The widespread use of generative AI brings a range of ethical considerations that need to be carefully addressed to ensure responsible and fair deployment. Here are some <a class="read-more-link" href="https://www.aiuniverse.xyz/what-are-the-ethical-considerations-for-the-widespread-use-of-generative-ai/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-are-the-ethical-considerations-for-the-widespread-use-of-generative-ai/">What are the ethical considerations for the widespread use of generative AI?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-full"><img fetchpriority="high" decoding="async" width="1024" height="1024" src="https://www.aiuniverse.xyz/wp-content/uploads/2024/07/DALL·E-2024-07-10-12.29.18-An-illustration-showing-the-ethical-considerations-for-the-widespread-use-of-generative-AI.-The-image-should-include-visual-representations-of-key-iss.webp" alt="" class="wp-image-18974" srcset="https://www.aiuniverse.xyz/wp-content/uploads/2024/07/DALL·E-2024-07-10-12.29.18-An-illustration-showing-the-ethical-considerations-for-the-widespread-use-of-generative-AI.-The-image-should-include-visual-representations-of-key-iss.webp 1024w, https://www.aiuniverse.xyz/wp-content/uploads/2024/07/DALL·E-2024-07-10-12.29.18-An-illustration-showing-the-ethical-considerations-for-the-widespread-use-of-generative-AI.-The-image-should-include-visual-representations-of-key-iss-300x300.webp 300w, https://www.aiuniverse.xyz/wp-content/uploads/2024/07/DALL·E-2024-07-10-12.29.18-An-illustration-showing-the-ethical-considerations-for-the-widespread-use-of-generative-AI.-The-image-should-include-visual-representations-of-key-iss-150x150.webp 150w, https://www.aiuniverse.xyz/wp-content/uploads/2024/07/DALL·E-2024-07-10-12.29.18-An-illustration-showing-the-ethical-considerations-for-the-widespread-use-of-generative-AI.-The-image-should-include-visual-representations-of-key-iss-768x768.webp 768w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>The widespread use of generative AI brings a range of ethical considerations that need to be carefully addressed to ensure responsible and fair deployment. Here are some key ethical considerations:</p>



<p><strong>1. Bias and Fairness</strong>:</p>



<ul class="wp-block-list">
<li><strong>Data Bias</strong>: Generative AI systems can inherit biases present in their training data, leading to biased outputs that may reinforce stereotypes or discriminate against certain groups.</li>



<li><strong>Fairness</strong>: Ensuring that AI systems treat all individuals and groups fairly and do not perpetuate or amplify existing inequalities.</li>
</ul>



<p><strong>2. Privacy and Security</strong>:</p>



<ul class="wp-block-list">
<li><strong>Data Privacy</strong>: Generative AI models often require large amounts of data, raising concerns about the privacy of the individuals whose data is used.</li>



<li><strong>Security Risks</strong>: There is a risk of sensitive information being inadvertently generated or exposed, as well as potential misuse of AI for malicious purposes such as generating fake news or deepfakes.</li>
</ul>



<p><strong>3. Accountability and Transparency</strong>:</p>



<ul class="wp-block-list">
<li><strong>Accountability</strong>: Determining who is responsible for the actions and outputs of generative AI systems, particularly in cases of harm or unintended consequences.</li>



<li><strong>Transparency</strong>: Making AI systems understandable and transparent to users, including how they work and how decisions are made, to build trust and allow for scrutiny.</li>
</ul>



<p><strong>4. Intellectual Property and Ownership</strong>:</p>



<ul class="wp-block-list">
<li><strong>Content Ownership</strong>: Questions about who owns the content generated by AI, particularly when it is created using data from various sources.</li>



<li><strong>Intellectual Property</strong>: Ensuring that the use of data and content respects existing intellectual property laws and the rights of original creators.</li>
</ul>



<p><strong>5. Social and Economic Impact</strong>:</p>



<ul class="wp-block-list">
<li><strong>Job Displacement</strong>: The potential for generative AI to automate tasks and displace jobs, leading to economic disruption and the need for new forms of employment and training.</li>



<li><strong>Societal Impact</strong>: The broader impact on society, including the way information is created and consumed, and the potential for AI to influence public opinion and behavior.</li>
</ul>



<p><strong>6. Misinformation and Manipulation</strong>:</p>



<ul class="wp-block-list">
<li><strong>Fake Content</strong>: The ability of generative AI to create realistic but fake content, such as deepfakes, which can be used to spread misinformation and manipulate public perception.</li>



<li><strong>Trust in Information</strong>: The challenge of distinguishing between real and AI-generated content, potentially eroding trust in information sources.</li>
</ul>



<p><strong>7. Ethical Use and Regulation</strong>:</p>



<ul class="wp-block-list">
<li><strong>Ethical Guidelines</strong>: Developing and adhering to ethical guidelines for the development and use of generative AI to ensure it is used responsibly and for the benefit of society.</li>



<li><strong>Regulation</strong>: Implementing appropriate regulations to oversee the use of generative AI, ensuring it aligns with societal values and legal standards.</li>
</ul>



<p><strong>8. Autonomy and Human Agency</strong>:</p>



<ul class="wp-block-list">
<li><strong>Human Control</strong>: Ensuring that humans remain in control of AI systems and that AI does not undermine human autonomy or decision-making capabilities.</li>



<li><strong>Consent and Participation</strong>: Respecting the consent and participation of individuals in the data used to train AI models and in the deployment of AI systems that affect them.</li>
</ul>



<p>Addressing these ethical considerations requires collaboration between AI developers, policymakers, ethicists, and society at large to create frameworks and guidelines that ensure the responsible use of generative AI.</p>
<p>The post <a href="https://www.aiuniverse.xyz/what-are-the-ethical-considerations-for-the-widespread-use-of-generative-ai/">What are the ethical considerations for the widespread use of generative AI?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-are-the-ethical-considerations-for-the-widespread-use-of-generative-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Judge dismisses data privacy suit against University of Chicago and Google</title>
		<link>https://www.aiuniverse.xyz/judge-dismisses-data-privacy-suit-against-university-of-chicago-and-google/</link>
					<comments>https://www.aiuniverse.xyz/judge-dismisses-data-privacy-suit-against-university-of-chicago-and-google/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 11 Sep 2020 08:17:50 +0000</pubDate>
				<category><![CDATA[Data Mining]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Chicago]]></category>
		<category><![CDATA[data mining]]></category>
		<category><![CDATA[data privacy]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[University]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11509</guid>

					<description><![CDATA[<p>Source: healthcareitnews.com Back in 2019, Healthcare IT News reported on a unique privacy case involving Google and the University of Chicago Medical Center – which had been named as defendants in <a class="read-more-link" href="https://www.aiuniverse.xyz/judge-dismisses-data-privacy-suit-against-university-of-chicago-and-google/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/judge-dismisses-data-privacy-suit-against-university-of-chicago-and-google/">Judge dismisses data privacy suit against University of Chicago and Google</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: healthcareitnews.com</p>



<p>Back in 2019, Healthcare IT News reported on a unique privacy case involving Google and the University of Chicago Medical Center – which had been named as defendants in a class action suit alleging that they&#8217;d failed to properly de-identify data used for machine learning research and predictive analytics projects.</p>



<p>The suit&#8217;s plaintiff, Daniel Dinerstein, who was a patient at UChicago in 2015, alleged that, while Google and UCMC claimed the medical records used were de-identified, such a claim was &#8220;misleading.&#8221;</p>



<p>Given that the data provided to Google by the university &#8220;included detailed datestamps and copious free-text notes,&#8221; he alleged, the tech giant&#8217;s expertise in data mining and artificial intelligence made it &#8220;uniquely able to determine the identity of almost every medical record the university released.&#8221;</p>



<p>On September 4, Judge Rebecca R. Pallmeyer of the U.S. District Court for the Northern District of Illinois granted the University of Chicago and Google&#8217;s motions to dismiss the suit.</p>



<p>&#8220;Plaintiff suggests that the risk of re-identification was in fact substantial because of the information Google already possesses about individuals through the other services it provides,&#8221; Pallmeyer writes in her decision.</p>



<p>&#8220;Specifically, the amended complaint refers to Google as &#8216;one of the largest and most comprehensive data mining companies in the world, drawing data from thousands of sources and compiling information about individuals’ personal traits (gender, age, sexuality, race), personal habits, purchases, and associations.&#8217; Google has &#8216;create[d] detailed profiles of millions of Americans,&#8217; including public and nonpublic information, and &#8216;possess[es] detailed geolocation information that it can use to pinpoint and match exactly when certain people entered and visited the University’s hospital,&#8217; according to the amended complaint,&#8221; she explained.</p>



<p>&#8220;In fact, for a user of Google applications like Mr. Dinerstein, Google can track the specific University hospital buildings or departments he visited and the time of his visits. Plaintiff alleges that the combination of such geolocation information and the EHRs, which include the date and time of hospital services, &#8216;creates a perfect formulation of data points for Google to identify who the patients in those records really are.&#8217; The amended complaint does not allege, however, that Google has in fact used its extensive data to re-identify any EHRs.&#8221;</p>



<h3 class="wp-block-heading">De-identification, re-identification</h3>



<p>The use of de-identified data has been common for years, of course. But so have challenges around keeping it that way. As far back as 2010, the Office of the National Coordinator for Health IT was studying how to manage the privacy risks presented by health information that had been stripped of personal identifiers – the potential for &#8220;re-identification.&#8221;</p>



<p>The contours of this University of Chicago case are similar in some respects to the so-called &#8220;Project Nightingale&#8221; initiative between Google and Ascension, which got lots of mainstream media attention this past November, amid concerns over how the Mountain View, California, company was using patient data to help inform its design of new AI and machine learning software for Ascension.</p>



<p>In many respects, the collaboration &#8220;is not unlike arrangements that happen every day in America between hospitals and other covered entities and contractors performing services on their behalf,&#8221; Deven McGraw, former deputy director for health information privacy at the HHS Office for Civil Rights and now chief regulatory officer at health data startup Ciitizen, said at the time. &#8220;Many hospitals have hundreds of business associates, all with extensive access to PHI.&#8221;</p>



<p>But Google isn&#8217;t just any vendor, McGraw acknowledged. It &#8220;has access to so much other data about individuals,&#8221; she said, and therefore understood concerns that &#8220;it may not be possible for data to be truly de-identified in their hands, given all of the data to which they have access.&#8221;</p>



<p>As long as Google &#8220;fulfills its privacy and security obligations under HIPAA with regard to the protected health information provided by Ascension, there is no HIPAA issue on the face of things,&#8221; added healthcare attorney Matthew Fisher, partner at Westborough, Massachusetts-based Mirick, O&#8217;Connell, DeMallie &amp; Lougee. &#8220;However, given the enormous amount of data held by Google, a maybe not so academic question exists of whether data can be de-identified when in Google’s possession.&#8221;</p>
<p>The post <a href="https://www.aiuniverse.xyz/judge-dismisses-data-privacy-suit-against-university-of-chicago-and-google/">Judge dismisses data privacy suit against University of Chicago and Google</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/judge-dismisses-data-privacy-suit-against-university-of-chicago-and-google/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Patient Safety, Data Privacy Key for Use of AI-Powered Chatbots</title>
		<link>https://www.aiuniverse.xyz/patient-safety-data-privacy-key-for-use-of-ai-powered-chatbots/</link>
					<comments>https://www.aiuniverse.xyz/patient-safety-data-privacy-key-for-use-of-ai-powered-chatbots/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 29 Jul 2020 07:40:06 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[coronavirus]]></category>
		<category><![CDATA[data privacy]]></category>
		<category><![CDATA[FDA]]></category>
		<category><![CDATA[Natural language processing]]></category>
		<category><![CDATA[patient]]></category>
		<category><![CDATA[Safety]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10570</guid>

					<description><![CDATA[<p>Source: healthitanalytics.com Patient safety, data privacy, and health equity are key considerations for the use of chatbots powered by artificial intelligence in healthcare, according to a viewpoint piece published <a class="read-more-link" href="https://www.aiuniverse.xyz/patient-safety-data-privacy-key-for-use-of-ai-powered-chatbots/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/patient-safety-data-privacy-key-for-use-of-ai-powered-chatbots/">Patient Safety, Data Privacy Key for Use of AI-Powered Chatbots</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: healthitanalytics.com</p>



<p>Patient safety, data privacy, and health equity are key considerations for the use of chatbots powered by artificial intelligence in healthcare, according to a viewpoint piece published in JAMA.</p>



<p>With the emergence of COVID-19 and social distancing guidelines, more healthcare systems are exploring and deploying automated chatbots, the authors noted. However, there are several key considerations organizations should keep in mind before implementing these tools.</p>



<p>“We need to recognize that this is relatively new technology and even for the older systems that were in place, the data are limited,” said the viewpoint&#8217;s lead author, John D. McGreevey III, MD, an associate professor of Medicine in the Perelman School of Medicine at the University of Pennsylvania.</p>



<p>“Any efforts also need to realize that much of the data we have comes from research, not widespread clinical implementation. Knowing that, evaluation of these systems must be robust when they enter the clinical space, and those operating them should be nimble enough to adapt quickly to feedback.”</p>



<p>The authors outlined 12 different focus areas that leaders should consider when planning to implement a chatbot or conversational agent (CA) in clinical care. For chatbots that use natural language processing, the messages these agents send to patients are extremely significant, as are patients’ reactions to them.</p>



<p>“It is important to recognize the potential, as noted in the NAM report, that CAs will raise questions of trust and may change patient-clinician relationships. A most basic question is to what extent CAs should extend the capabilities of clinicians (augmented intelligence) or replace them (artificial intelligence),” the authors said.</p>



<p>“Likewise, determining the scope of the authority of CAs requires examination of appropriate clinical scenarios and the latitude for patient engagement.”</p>



<p>The authors considered the example of someone telling a chatbot something as serious as “I want to hurt myself.” In this case, the patient safety element is brought to the forefront, as someone would need to be monitoring the chatbot often.</p>



<p>This hypothetical situation also raises the question of whether patients would take a response from a chatbot seriously, as well as who is responsible if the chatbot fails in its task.</p>



<p>“Even though technologies to determine mood, tone, and intent are becoming more sophisticated, they are not yet universally deployed in CAs nor validated for most populations,” the authors said.</p>



<p>“Moreover, there is no mention of CAs in the US Food and Drug Administration’s (FDA) proposed regulatory framework for AI or machine learning for software as a medical device nor is there a user’s guide for deploying these platforms in clinical settings.”</p>



<p>The authors also noted that regulatory organizations like the FDA should develop frameworks for appropriate classification and oversight of CAs in healthcare. For example, policymakers could classify CAs as low risk versus higher risk.</p>



<p>“Low-risk CAs might be less automated, structured for a specialized task, and have relatively minor consequences if they fail. A CA that guides patients to appointments might be one such example,” the authors wrote.</p>



<p>“In contrast, higher-risk CAs would involve more automation (natural language processing, machine learning), unstructured, open-ended dialogue with patients, and have potentially serious patient consequences in the event of system failure. Examples of higher-risk CAs might be those that advise patients after hospital discharge or offer recommendations to patients about titrating medications.”</p>



<p>Additionally, the authors noted that in partnerships between vendors and healthcare organizations to use CAs, all should be mindful of converging incentives and work to balance these goals with attention to each of the domains.</p>



<p>“Given the potential of CAs to benefit patients and clinicians, continued innovation should be supported. However, hacking of CA systems (as with other medical systems) represents a cybersecurity threat, perhaps allowing individuals with malicious intent to manipulate patient-CA interactions and even offer harmful recommendations, such as quadrupling an anticoagulant dose,” the authors stated.</p>



<p>The authors stated that ultimately, the successful and effective deployment of chatbots in healthcare will depend on the industry’s ability to assess these tools.</p>



<p>“Conversational agents are just beginning in clinical practice settings, with COVID-19 spurring greater interest in this field. The use of CAs may improve health outcomes and lower costs. Researchers and developers, in partnership with patients and clinicians, should rigorously evaluate these programs,” the authors concluded.</p>



<p>“Further consideration and investigation involving CAs and related technologies will be necessary, not only to determine their potential benefits but also to establish transparency, appropriate oversight, and safety.”</p>



<p>Healthcare leaders will need to ensure they continually evaluate the capacity of these tools to improve care delivery.</p>



<p>“It&#8217;s our belief that the work is not done when the conversational agent is deployed,” McGreevey said. “These are going to be increasingly impactful technologies that deserve to be monitored not just before they are launched, but continuously throughout the life cycle of their work with patients.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/patient-safety-data-privacy-key-for-use-of-ai-powered-chatbots/">Patient Safety, Data Privacy Key for Use of AI-Powered Chatbots</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/patient-safety-data-privacy-key-for-use-of-ai-powered-chatbots/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Sentara Makes Moves to Deploy Data Platform on Microsoft Azure</title>
		<link>https://www.aiuniverse.xyz/sentara-makes-moves-to-deploy-data-platform-on-microsoft-azure/</link>
					<comments>https://www.aiuniverse.xyz/sentara-makes-moves-to-deploy-data-platform-on-microsoft-azure/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 20 Mar 2020 05:45:01 +0000</pubDate>
				<category><![CDATA[Microsoft Azure Machine Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Blockchain]]></category>
		<category><![CDATA[data privacy]]></category>
		<category><![CDATA[Data Storage]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7578</guid>

					<description><![CDATA[<p>Source: hitinfrastructure.com March 19, 2020 &#8211; Sentara Healthcare, one of the nation&#8217;s oldest not-for-profit health systems, recently announced that it partnered with CitiusTech to implement a next-gen enterprise data platform (EDP). <a class="read-more-link" href="https://www.aiuniverse.xyz/sentara-makes-moves-to-deploy-data-platform-on-microsoft-azure/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/sentara-makes-moves-to-deploy-data-platform-on-microsoft-azure/">Sentara Makes Moves to Deploy Data Platform on Microsoft Azure</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: hitinfrastructure.com</p>



<p>March 19, 2020 &#8211; Sentara Healthcare, one of the nation&#8217;s oldest not-for-profit health systems, recently announced that it partnered with CitiusTech to implement a next-gen enterprise data platform (EDP).</p>



<p>Using Microsoft partner CitiusTech’s H-Scale solution, Sentara completed the deployment of the EDP to provide a single, consolidated view of provider, payer, and enterprise information from Sentara and Optima Health Plan, a wholly owned subsidiary.</p>



<p>“Sentara is committed to delivering high-quality healthcare and innovative services that meet the unique needs of the communities we serve,” Michael Reagin, Sentara Healthcare senior vice president and chief information and innovation officer, said in the announcement.</p>



<p>“We collaborated with CitiusTech to develop an EDP that provides us more flexibility and scale to continue meeting the changing demands of our patients, care teams, and partners across the care continuum.”</p>



<p>The end-to-end data management solution will ingest, curate, transform, and reconcile data from five different sources and create a 360-degree view of the patient record. Sentara ensured that the EDP leverages HIPAA-compliant PaaS offerings of Azure, which can scale to large data volumes, saving nearly $1.5 million annually by moving away from an on-premises model, the announcement stated.</p>



<p>“Next-gen interoperability and real-time data access have become imperative for healthcare organizations to enhance quality of care and align with value-based models,” said Rizwan Koita, CEO of CitiusTech. “Sentara Healthcare with its cloud-first strategy has built an industry-leading data platform using CitiusTech’s H-Scale on Microsoft Azure to support its data-driven performance.”</p>



<p>Sentara is now able to aggregate information and generate insights by implementing artificial intelligence (AI) and machine learning models, which may save $3 million a year through efficiency improvements across the organization.</p>



<p>“Microsoft Azure enabled CitiusTech to deliver a cloud-based enterprise-wide healthcare data management solution. This enabled Sentara to get a holistic view of patient information across their enterprise,” said Gareth Hall, director of business strategy for Worldwide Healthcare at Microsoft. “CitiusTech H-Scale, combined with Azure, helps customers achieve scale in healthcare data management.”</p>



<p>Artificial intelligence and blockchain have become increasingly prominent in the healthcare space to improve data storage and interoperability.</p>



<p>Last year, Sentara partnered with Cigna Health to build, share, and deploy solutions using blockchain technology. The collaboration also involved Aetna, Anthem, Health Care Service Corporation (HCSC), PNC Bank, and IBM.</p>



<p>“We came together to create the health utility network realizing the need to improve transparency and interoperability in the industry in order to improve healthcare for all Americans,” said Rajeev Ronanki, chief digital officer of Anthem Inc. “Engaging additional members across partner levels and industry perspectives will increase the network’s reach and ability to deliver high-value solutions.”</p>



<p>Experts believe blockchain can transform healthcare by improving data access across networks and implementing new patient-centered delivery models.</p>



<p>“By working together and joining health utility networks as a founding member, we have a significant opportunity to create new efficiencies that will lead to improved whole-person health and wellness outcomes for our customers and clients,” Mark Boxer, executive vice president and chief information officer at Cigna, concluded.</p>
<p>The post <a href="https://www.aiuniverse.xyz/sentara-makes-moves-to-deploy-data-platform-on-microsoft-azure/">Sentara Makes Moves to Deploy Data Platform on Microsoft Azure</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/sentara-makes-moves-to-deploy-data-platform-on-microsoft-azure/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Challenges of Artificial Intelligence Adoption in Healthcare</title>
		<link>https://www.aiuniverse.xyz/challenges-of-artificial-intelligence-adoption-in-healthcare/</link>
					<comments>https://www.aiuniverse.xyz/challenges-of-artificial-intelligence-adoption-in-healthcare/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 15 Feb 2020 06:33:51 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Access to Care]]></category>
		<category><![CDATA[data privacy]]></category>
		<category><![CDATA[Network Security]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6795</guid>

					<description><![CDATA[<p>Source: hitinfrastructure.com February 14, 2020 &#8211; Artificial Intelligence (AI) adoption is gradually becoming more prominent in health systems, but 75 percent of healthcare insiders are concerned that AI could <a class="read-more-link" href="https://www.aiuniverse.xyz/challenges-of-artificial-intelligence-adoption-in-healthcare/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/challenges-of-artificial-intelligence-adoption-in-healthcare/">Challenges of Artificial Intelligence Adoption in Healthcare</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: hitinfrastructure.com</p>



<p>February 14, 2020 &#8211; Artificial Intelligence (AI) adoption is gradually becoming more prominent in health systems, but 75 percent of healthcare insiders are concerned that AI could threaten the security and privacy of patient data, according to a recent survey from KPMG.  </p>



<p>At the same time, 91 percent of healthcare respondents believe that AI implementation is increasing patient access to care, according to the survey of 751 US business decision makers. The survey explored the barriers and challenges that have the potential to hamper the integration of AI technologies in healthcare organizations.</p>



<p>Healthcare security is a top concern for insiders, with 75 percent responding that they believe AI could threaten patient data privacy. But 86 percent of respondents said their organizations are taking steps to protect patient privacy as they implement AI.</p>



<p>Organizations believe that a broad understanding of AI and talent in the space are essential to success, but many insiders reported major challenges in these areas.</p>



<p>Despite this, only 47 percent of healthcare insiders said their organizations offer AI training courses to employees, and only 67 percent said their employees support AI adoption, the lowest of any industry.&nbsp;</p>



<p>“Comprehending the full range of AI technology, and how best to apply it in a healthcare setting, is a learned skill that grows out of pilots and tests. Building an AI-ready workforce requires a wholesale change in the approach to training and how to acquire talent. Having people who understand how AI can solve big, complex problems is critical,” Melissa Edwards, managing director of digital enablement at KPMG, said in the survey.&nbsp;</p>



<p>Cost is a major barrier for organizations as well. Successful AI implementation requires a large investment, which means that organizations that are already feeling budget strain may be slower to fund AI.&nbsp;</p>



<p>Thirty-seven percent of healthcare industry executives reported that the pace at which they are implementing AI is too slow.&nbsp;</p>



<p>But Edwards highlighted that the pace has actually greatly increased in the past few years.&nbsp;</p>



<p>“The pace with which hospital systems have adopted AI and automation programs has dramatically increased since 2017,” she said. “Virtually all major healthcare providers are moving ahead with pilots or programs in these areas. The medical literature is showing support of AI’s power as a tool to help clinicians.”</p>



<p>Fifty-four percent of executives said that, to date, AI has increased the overall cost of healthcare. “The question is, ‘Where do I put my AI efforts to get the greatest gain for the business?’ Trying to assess what ROI will look like is a very relevant point as they embark on their AI journey,” Edwards said.&nbsp;</p>



<p>Last year, the White House called for more transparency and “explainability” in healthcare AI through the National Artificial Intelligence Research and Development Strategic Plan: 2019 Update. </p>



<p>The plan identified eight strategic priorities for federally funded AI research, including prioritizing investments in the next generation of AI to drive discovery and insight and keep the US a leader in the field, and developing effective methods for human-AI collaboration.&nbsp;</p>



<p>The plan also included:&nbsp;&nbsp;</p>



<ul class="wp-block-list"><li>Addressing the ethical, legal, and societal implications of AI</li><li>Ensuring the safety and security of AI systems</li><li>Developing shared public datasets and environments for AI training and testing</li><li>Evaluating AI technologies using standards and benchmarks</li><li>Understanding the national AI R&amp;D workforce needs</li><li>Expanding public-private partnerships to accelerate advances in AI</li></ul>



<p>“AI technologies are critical for addressing a range of long-term challenges, such as constructing advanced healthcare systems, a robust intelligent transportation system, and resilient energy and telecommunication networks,” the plan concluded.</p>
<p>The post <a href="https://www.aiuniverse.xyz/challenges-of-artificial-intelligence-adoption-in-healthcare/">Challenges of Artificial Intelligence Adoption in Healthcare</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/challenges-of-artificial-intelligence-adoption-in-healthcare/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Duality Technologies raises $16 million for privacy-preserving data science solutions</title>
		<link>https://www.aiuniverse.xyz/duality-technologies-raises-16-million-for-privacy-preserving-data-science-solutions/</link>
					<comments>https://www.aiuniverse.xyz/duality-technologies-raises-16-million-for-privacy-preserving-data-science-solutions/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 31 Oct 2019 09:03:10 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[data privacy]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Technologies]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4951</guid>

					<description><![CDATA[<p>Source: venturebeat.com Newark, New Jersey-based Duality Technologies, a provider of privacy-enhancing data science solutions, today announced that it’s raised $16 million in a series A round led <a class="read-more-link" href="https://www.aiuniverse.xyz/duality-technologies-raises-16-million-for-privacy-preserving-data-science-solutions/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/duality-technologies-raises-16-million-for-privacy-preserving-data-science-solutions/">Duality Technologies raises $16 million for privacy-preserving data science solutions</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: venturebeat.com</p>



<p>Newark, New Jersey-based Duality Technologies, a provider of privacy-enhancing data science solutions, today announced that it’s raised $16 million in a Series A round led by Intel Capital, with participation from Hearst Ventures and existing investor Team8. Duality previously raised $4 million in a November 2018 round, which together with this latest tranche brings its total raised to about $20 million.</p>



<p>Cofounder and CEO Alon Kaufman said that Duality will leverage the fresh funding to continue developing its secure computing platform and to expand into new segments. To this end, it recently collaborated with Intel to explore the challenges of running AI workloads on encrypted data, which informed efforts like the open source HE-Transformer backend for Intel’s nGraph neural network compiler.</p>



<p>“AI and machine learning are transforming countless industries, but they have also created new privacy challenges that regulation alone can’t solve,” said Kaufman. “We are excited by the investment of Intel Capital, Hearst Ventures, and Team8 in Duality, and look forward to collaborating with these industry leaders in delivering innovative privacy-enhanced solutions to the market. Our mission is to reconcile data utility and privacy while unlocking a whole new world of secure collaborative business opportunities for our customers.”</p>



<p>Duality keeps a low profile but deals principally in homomorphic encryption, a form of cryptography that enables computation directly on encrypted data (ciphertexts) without decrypting it first. The computation produces an encrypted result that, when decrypted, exactly matches the result of the same operations performed on the unencrypted data.</p>
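<p>As an illustration of that property (not Duality's actual implementation), the additive homomorphism can be demonstrated with a toy Paillier cryptosystem. The primes below are far too small for real security; production systems use roughly 2048-bit keys and audited libraries.</p>

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic encryption.
# Demo-sized primes only -- real deployments use ~2048-bit moduli.
p, q = 1_000_003, 1_000_033
n = p * q
n2 = n * n
g = n + 1                        # standard choice of generator
lam = math.lcm(p - 1, q - 1)     # Carmichael's lambda(n)
mu = pow(lam, -1, n)             # modular inverse of lambda, valid when g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(1, n)   # random blinding factor, coprime to n w.h.p.
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

a, b = encrypt(12345), encrypt(67890)
product = (a * b) % n2                      # multiply the ciphertexts...
assert decrypt(product) == 12345 + 67890    # ...and the plaintexts are added
```

<p>The key point is that whoever computes the ciphertext product never sees 12345 or 67890, which is what lets an untrusted cloud run analytics on data it cannot read.</p>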



<p>Duality’s SecurePlus offering enables multiple parties to collaborate without exposing their data or analytics models. Data remains protected end-to-end even when analyzed in untrusted cloud environments, courtesy of “quantum-resistant” technologies that conform to the standards laid out by the homomorphic encryption industry consortium.</p>



<p>Duality pitches the platform as a privacy-preserving solution for “numerous” enterprises, particularly those in regulated industries. Banks using SecurePlus can conduct privacy-enhanced financial crime investigations across institutions, the company says, while scientists can tap it to collaborate on research involving patient records. Even retailers stand to benefit with privacy-preserving data supply chain schemes enabled by homomorphic encryption.</p>



<p>“Intel Capital has been following the space closely, and we are excited to see secure computing and homomorphic encryption becoming practical and broadly applicable,” said Intel Capital vice president and senior managing director Anthony Lin. “We believe privacy-preservation in AI and ML represents a huge market need, and we’re investing in Duality because of its unique founding team and world-leading expertise in both advanced cryptography and data science.”</p>



<p>Hearst Ventures senior managing director Kenneth Bronfin added, “As a leading global, diversified media, information and services company with more than 360 businesses across industries, we are acutely aware of the increasing importance of data and data collaboration in companies across many market segments. Sensitive data is constantly being generated by both individuals and businesses; there needs to be technology available that protects such data while allowing us to extract insights.”</p>



<p>Duality was cofounded in 2016 by Kaufman, chairwoman Rina Shainski, Turing Award-winning professor Shafi Goldwasser, MIT professor Vinod Vaikuntanathan, and open source pioneer Dr. Kurt Rohloff. Vaikuntanathan is the co-inventor of the foundational BGV homomorphic encryption scheme, and Rohloff is the founder of the PALISADE homomorphic encryption open source library on which Duality’s platform is based.</p>
<p>The post <a href="https://www.aiuniverse.xyz/duality-technologies-raises-16-million-for-privacy-preserving-data-science-solutions/">Duality Technologies raises $16 million for privacy-preserving data science solutions</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/duality-technologies-raises-16-million-for-privacy-preserving-data-science-solutions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Facebook to End Targeted Ads Built with Third-Party Data Mining</title>
		<link>https://www.aiuniverse.xyz/facebook-to-end-targeted-ads-built-with-third-party-data-mining/</link>
					<comments>https://www.aiuniverse.xyz/facebook-to-end-targeted-ads-built-with-third-party-data-mining/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 30 Mar 2018 05:16:36 +0000</pubDate>
				<category><![CDATA[Data Mining]]></category>
		<category><![CDATA[data mining]]></category>
		<category><![CDATA[data privacy]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[Targeted Ads]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2165</guid>

					<description><![CDATA[<p>Source &#8211; pcmag.com The fallout from the Cambridge Analytica controversy has triggered Facebook to cancel an advertising tool that pulled data from people&#8217;s backgrounds, like whether you own <a class="read-more-link" href="https://www.aiuniverse.xyz/facebook-to-end-targeted-ads-built-with-third-party-data-mining/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/facebook-to-end-targeted-ads-built-with-third-party-data-mining/">Facebook to End Targeted Ads Built with Third-Party Data Mining</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; pcmag.com</p>
<p>The fallout from the Cambridge Analytica controversy has prompted Facebook to cancel an advertising tool that pulled data about people&#8217;s backgrounds, like whether you own a home or what products you like to buy.</p>
<p>&#8220;We want to let advertisers know that we will be shutting down Partner Categories,&#8221; Facebook said on Wednesday. &#8220;This product enables third party data providers to offer their targeting directly on Facebook.&#8221;</p>
<p>These third-party providers include Acxiom and Experian, which specialize in mining data on US consumers that can be rented out for marketing purposes. Information about your ethnicity, marital status, whether you own a car, the kinds of purchases you make, and how much you spend on them can all be logged.</p>
<p>The data mining certainly sounds creepy, but it&#8217;s also legal and standard practice in the marketing world. Acxiom, for instance, pulls the information from public records, consumer surveys, and other commercial entities that managed to collect your information with your consent.</p>
<p dir="ltr"><img decoding="async" class="740" src="https://assets.pcmag.com/media/images/580114-facebook-partner-categories.png?thumb=y&amp;width=980&amp;height=356" alt="Facebook Partner Categories" border="0" /></p>
<p>Facebook decided to let its own advertisers harness the power of these data brokers with its Partner Categories over on its ad platform. But no more. The company is phasing out the tool, amid the growing backlash over the social media giant&#8217;s privacy practices.</p>
<p>&#8220;We believe this step, winding down over the next six months, will help improve people&#8217;s privacy on Facebook,&#8221; the company said.</p>
<p>The social media giant&#8217;s privacy practices have been under the microscope ever since news emerged that a UK political consultancy called Cambridge Analytica managed to pull the personal data of 50 million Facebook users. It did so with the help of a third-party app that surveyed Facebook users, collecting not only their own data but also vacuuming up information on their Facebook friends.</p>
<p>In response, Facebook is revamping its privacy practices, and Wednesday&#8217;s move to end the advertising tool represents another step. Marketers might not like the decision, even as the platform still offers a variety of tools to create targeted ads. But the social networking service is facing a growing #Deletefacebook movement, along with the threat of possible government regulation on data privacy, both of which could derail Facebook&#8217;s business.</p>
<p>The post <a href="https://www.aiuniverse.xyz/facebook-to-end-targeted-ads-built-with-third-party-data-mining/">Facebook to End Targeted Ads Built with Third-Party Data Mining</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/facebook-to-end-targeted-ads-built-with-third-party-data-mining/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Microsoft to make artificial intelligence ‘available to all’</title>
		<link>https://www.aiuniverse.xyz/microsoft-to-make-artificial-intelligence-available-to-all/</link>
					<comments>https://www.aiuniverse.xyz/microsoft-to-make-artificial-intelligence-available-to-all/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 29 Mar 2018 05:52:33 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[data privacy]]></category>
		<category><![CDATA[Microsoft]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2161</guid>

					<description><![CDATA[<p>Source &#8211; thehindu.com Artificial intelligence (AI) is no longer going to remain the secret sauce of giant tech companies. Microsoft said that it was betting big on ‘democratising’ <a class="read-more-link" href="https://www.aiuniverse.xyz/microsoft-to-make-artificial-intelligence-available-to-all/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/microsoft-to-make-artificial-intelligence-available-to-all/">Microsoft to make artificial intelligence ‘available to all’</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>thehindu.com</strong></p>
<p>Artificial intelligence (AI) is no longer going to remain the secret sauce of giant tech companies. Microsoft said that it was betting big on ‘democratising’ artificial intelligence and making it ‘available to all’ to help improve lives and transform businesses.</p>
<p>Detecting deadly diseases, reducing accident risks, predicting consumer behaviour and helping farmers increase crop yields were some of the AI-based innovations that the world’s largest software maker showcased along with its partners at an event here. “Is AI a product? Is it something that would be really harnessed by some of the biggest companies in the world? Who is AI for?” said Anant Maheshwari, president, Microsoft India, in his keynote address at the company’s ‘AI for All’ conference here. “And one of the core perspectives we have had as a company is to democratise AI and really bring it for everyone,” he said.</p>
<p><strong>650 India partners</strong></p>
<p>The Redmond, Washington-based firm said it was helping 650 India-based partners use the Microsoft cognitive services, Internet of Things (IoT), AI and machine learning platforms to build solutions for India. Over the last year, Microsoft and its partners have deployed AI solutions in areas such as healthcare, education, agriculture, retail, e-commerce, manufacturing and financial services.</p>
<p>For instance, homegrown online retail giant Flipkart is leveraging AI and analytics capabilities in Microsoft’s cloud computing platform Azure, enabling it to deliver increasingly relevant and personalized experiences to its customers.</p>
<p>“We have to be proactive and these millions of users coming to our platform is giving us information to predict. We can actually predict demand,” said Amar Nagaram, vice-president, engineering, Flipkart, which competes with Amazon, the world’s largest online retailer. He also said the company does not dump everything in the search results of the consumers but provides them with the products that they would like and actually buy.</p>
<p>Homegrown ride-hailing firm Ola’s connected car platform Ola Play would also leverage Microsoft AI and IoT technology to improve the driver and passenger experiences. The telematics platform will transform the car into a high-performing, intelligent vehicle, capable of assessing fuel efficiency, engine performance, and driver performance. It will also enable smarter navigation and predict breakdowns, enhancing safety and security. “We know every single street at least in tier one cities and every time a pothole opens up we know that as well,” said Kaushik Mukherjee, senior director, engineering, Ola. “That data is hugely useful when we talk about infrastructure, road conditions and accident-prone [and] dimly lit areas,” he said.</p>
<p><strong>Avoid blindness</strong></p>
<p>Microsoft also unveiled a partnership with Forus Health, a Bengaluru-based start-up focussed on retinal imaging devices. It would leverage AI capabilities for early detection of eye diseases such as diabetic retinopathy, glaucoma and macular degeneration, and help reduce avoidable blindness.</p>
<p>“AI is going to be very important because when you can give the response to somebody [patient] immediately, the chances of he taking the whole thing more seriously is very high,” said K. Chandrasekhar, co-founder and CEO of Forus. He said that India had only 20,000 ophthalmologists for a population of 1.3 billion.</p>
<p><strong>Data privacy</strong></p>
<p>Microsoft said that although the ingredients of the AI era, such as computing power, cloud and big data, are now coming together, the company takes the consequences of introducing AI very seriously. “We don’t want to just unleash it, we want to be very mindful of it,” said Peggy Johnson, executive vice president, business development, Microsoft. Referring to the set of principles introduced by Microsoft CEO Satya Nadella, she said that while developing AI, one must ensure there is no bias in the data sets and if there is any, it should be removed. “Otherwise you are going to get an answer that is not really based on truth,” she said. Ms. Johnson said the AI-based technology has to be reliable, safe and most importantly provide transparency and data privacy to the users.</p>
<p>“As AI is introduced in our daily lives, we want people to know what is going on with their data,” said Ms. Johnson. “If anything should go wrong, you can’t say ‘oh the machine did it’ that just is not an acceptable answer. You really have to take accountability for it.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/microsoft-to-make-artificial-intelligence-available-to-all/">Microsoft to make artificial intelligence ‘available to all’</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/microsoft-to-make-artificial-intelligence-available-to-all/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>6 big data privacy practices every company should adopt in 2018</title>
		<link>https://www.aiuniverse.xyz/6-big-data-privacy-practices-every-company-should-adopt-in-2018/</link>
					<comments>https://www.aiuniverse.xyz/6-big-data-privacy-practices-every-company-should-adopt-in-2018/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 03 Oct 2017 07:05:14 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[cloud services]]></category>
		<category><![CDATA[data privacy]]></category>
		<category><![CDATA[IT]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1319</guid>

					<description><![CDATA[<p>Source &#8211; techrepublic.com Issues surrounding data privacy are as legally unresolved today as they were two years ago, but the recent Equifax breach now puts a clear focus on them that <a class="read-more-link" href="https://www.aiuniverse.xyz/6-big-data-privacy-practices-every-company-should-adopt-in-2018/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/6-big-data-privacy-practices-every-company-should-adopt-in-2018/">6 big data privacy practices every company should adopt in 2018</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; techrepublic.com</p>
<p>Issues surrounding data privacy are as legally unresolved today as they were two years ago, but the recent Equifax breach now puts a clear focus on them that strikes fear into the hearts of CIOs.</p>
<p>The Equifax data that was breached was not big data. However, big data is a major privacy concern for IT because so much of it flows into enterprise data repositories from so many sources, and in so many shapes and sizes.</p>
<p>After Equifax, CIOs can rest assured that their CEOs and boards will be following their work in data privacy closely—and big data is one of the areas they&#8217;ll be most concerned about.</p>
<p>What operational steps can IT take to ensure, at a grassroots level, that sound data privacy practices are applied to their big data?</p>
<h2>1. Continuously vet your big data cloud-based vendors for data privacy</h2>
<p>Many cloud vendors can provide the levels of privacy and security that you want for your big data—but you have to demand and be willing to pay for it. Never assume that by default your cloud vendor will automatically apply best practices. Your staff should carefully evaluate the privacy protections that each of your big data cloud vendors offers and determine whether these data protection levels meet your own internal governance standards. If a cloud vendor&#8217;s data privacy practices don&#8217;t meet your own governance standards, pass on the vendor. Also ask your external IT auditors to review all cloud-based vendor data protection and security practices as part of the IT audits that the auditors perform for your company. Vendor data protection and security levels should minimally be checked on an annual basis.</p>
<h2>2. Use private clouds</h2>
<p>Most public cloud vendors offer private cloud services, too. Placing your data in a private cloud is more expensive than being a multi-tenant customer in a public cloud, but the private cloud deployment better separates your organization&#8217;s data from that of others. Cloud-wise, it is the next best thing to keeping your data on premises.</p>
<h2>3. Anonymize data</h2>
<p>You can protect the data privacy of your customers and still perform critical trend analysis. One way this anonymization can be accomplished is by encrypting the data elements that personally identify someone. Another is identifying data from individuals with similar values (let&#8217;s say the value you are measuring is income) and then averaging them into a composite income value that gets pulled into a larger data analysis. Other methods are data redaction or masking.</p>
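<p>A minimal sketch of the first two approaches, with made-up records and field names. Note that salted hashing is strictly pseudonymization rather than full anonymization, since re-identification remains possible if the salt leaks or through linkage attacks, so treat it as one layer among several.</p>

```python
import hashlib
import statistics

# Hypothetical sample records -- names and values invented for illustration.
records = [
    {"name": "Alice", "income": 52_000},
    {"name": "Bob",   "income": 61_000},
    {"name": "Carol", "income": 58_000},
]

SALT = b"keep-this-secret-and-rotate-it"  # stored apart from the analytics data

def pseudonymize(name: str) -> str:
    """Replace a direct identifier with a salted one-way hash.

    This is pseudonymization, not anonymization: anyone holding the salt
    can recompute the mapping, so the salt must be tightly controlled.
    """
    return hashlib.sha256(SALT + name.encode()).hexdigest()[:16]

# Approach 1: transform the personally identifying element.
anonymized = [{"id": pseudonymize(r["name"]), "income": r["income"]}
              for r in records]

# Approach 2: fold similar individuals into one composite value for trend
# analysis, so no single person's income appears in the output.
composite_income = statistics.mean(r["income"] for r in records)  # 57000
```

<p>The analytics team then works only with <code>anonymized</code> and <code>composite_income</code>, never with the raw names.</p>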
<h2>4. Locate all the big data enclaves in your company and vet these for data privacy</h2>
<p>As organizations distribute big data throughout departments and business units, there is always a risk that the data held within departments is changed so that data privacy levels are no longer met. The department responsible for big data governance and administration should regularly identify and track the big data marts that are distributed throughout the company. These localized big data marts should also be periodically audited by external IT auditors for data privacy compliance. If business units and other non-IT departments are using cloud-based services, the data privacy practices of their vendors should be verified for compliance to corporate standards. Cases of non-compliance should be immediately documented and mitigated.</p>
<h2>5. Set your sights on GDPR</h2>
<p>If you&#8217;re a North American company and you aren&#8217;t doing business internationally, you might not immediately have to concern yourself with the European Union&#8217;s General Data Protection Regulation (GDPR).</p>
<p>The GDPR, which aims for more stringent protections of individuals&#8217; data, goes into effect in May 2018. According to a Gartner prediction, over 50% of companies affected by GDPR will not have met its requirements by the end of 2018. The fines for non-compliance are hefty &#8211; up to 4% of annual revenue.</p>
<p>Keeping GDPR in sight matters because even if your company doesn&#8217;t do business in Europe today, it might in the future; and GDPR is where data privacy practices are headed in the future. If you comply with it now, you&#8217;re ahead of the game.</p>
<h2>6. Perform social engineering audits</h2>
<p>It&#8217;s the dark side of IT, but the reality is that employee sabotage of critical data happens, as does inadvertent and sometimes deliberate inappropriate data sharing between employees and with individuals outside of the organization. All are reasons to include a social engineering audit along with your annual IT audit when your external auditor arrives. A social engineering audit looks for phishing attacks, phone and physical-entry attacks, and other types of technical and social deception that can often be traced back to your own employees. You can uncover potential areas of vulnerability, and also use the audit as a means of identifying the types of employee training that could be helpful.</p>
<p>The post <a href="https://www.aiuniverse.xyz/6-big-data-privacy-practices-every-company-should-adopt-in-2018/">6 big data privacy practices every company should adopt in 2018</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/6-big-data-privacy-practices-every-company-should-adopt-in-2018/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
	</channel>
</rss>
