<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>maps Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/maps/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/maps/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Tue, 05 May 2020 06:49:11 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>New Research Maps Wide Gulf Between GEOINT AI Potential, Readiness</title>
		<link>https://www.aiuniverse.xyz/new-research-maps-wide-gulf-between-geoint-ai-potential-readiness/</link>
					<comments>https://www.aiuniverse.xyz/new-research-maps-wide-gulf-between-geoint-ai-potential-readiness/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 05 May 2020 06:49:09 +0000</pubDate>
				<category><![CDATA[Data Mining]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[data mining]]></category>
		<category><![CDATA[GEOINT]]></category>
		<category><![CDATA[maps]]></category>
		<category><![CDATA[Research]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8576</guid>

					<description><![CDATA[<p>Source: meritalk.com The geospatial intelligence (GEOINT) industry and workforce is undergoing a seismic shift driven by artificial intelligence (AI), with 91 percent of stakeholders believing AI has <a class="read-more-link" href="https://www.aiuniverse.xyz/new-research-maps-wide-gulf-between-geoint-ai-potential-readiness/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/new-research-maps-wide-gulf-between-geoint-ai-potential-readiness/">New Research Maps Wide Gulf Between GEOINT AI Potential, Readiness</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: meritalk.com</p>



<p>The geospatial intelligence (GEOINT) industry and workforce is undergoing a seismic shift driven by artificial intelligence (AI), with 91 percent of stakeholders believing AI has the potential to greatly improve GEOINT productivity, capacity, and capability. That potential impact, however, appears to be racing ahead of on-the-ground planning as just 33 percent of GEOINT stakeholders report having a clear AI workforce strategy.</p>



<p>Today, MeriTalk released the “Mapping AI to the GEOINT Workforce” report in collaboration with the United States Geospatial Intelligence Foundation (USGIF). The report – underwritten by Intel, MFGS, Inc., Microsoft, and Recorded Future – reveals both the opportunities and challenges AI presents across Federal, state, local, and higher education landscapes.</p>



<p>“With AI being just announced by NGA as one of the five Tech Areas, this is a very timely study that will help inform the future of our work. Embracing the third wave of AI, USGIF and its partner academic institutions have been incorporating data mining, analysis, fusion and processing into their curricula in addition to the already taught GIS, Remote Sensing and Data Visualization competencies,” said Dr. Camelia Kantor, Vice President of Academic Affairs, USGIF.</p>



<p><strong>Recognizing the Roadblock: Skills</strong></p>



<p>The research report’s findings come from MeriTalk’s survey of 150 Federal, state and local (SLG), and higher education GEOINT stakeholders. While respondents agree that human-machine teaming will be the industry’s new normal within just five years, the GEOINT workforce has half (or fewer) of the skills needed to actualize AI benefits.</p>



<p>Steve O’Keeffe, founder of MeriTalk, commented, “If you’re not hip to AI, you’re DOA in GEOINT. We can choose to ride the new wave, or get drowned by it. Good time to learn to swim.”</p>



<p>The report identifies significant hurdles standing in the way of organizations fully capitalizing on AI in the workforce:</p>



<ul class="wp-block-list"><li>The majority (57 percent) say addressing the skills gap will be their biggest challenge over the next decade;</li><li>Fifty-one percent say security concerns will be a complication; and</li><li>Fewer than half (41 percent) say lack of funding is a concern as stakeholders prepare to accelerate AI technology.</li></ul>



<p><strong>Waking up the Workforce</strong></p>



<p>Organizations are beginning to recognize the importance of implementing a formal AI workforce strategy. Just one-third of respondents report having a strategy in place today. However, 54 percent share that they are working on one. Size appears to matter, as Feds are significantly more likely to have a formal strategy (46 percent) compared to SLG (23 percent).</p>



<p>Additionally, organizations are investing in the workforce – skills development, training, hiring, and more. Higher education organizations (76 percent) are currently investing in AI skills, at about the same rate as Fed (73 percent), and outpacing SLG (53 percent).</p>



<p>These investments are essential to maximize AI potential. Stakeholders predict that GEOINT-related AI will have the greatest impact on national security, emergency response and natural disaster aid, and urban planning and development.</p>



<p><strong>Plotting the Future of AI</strong></p>



<p>Looking ahead over the next 12-18 months, 42 percent of respondents say they will prioritize data visualization and data mining, analysis, fusion, and processing. In addition, 46 percent say predictive modeling will be a priority.</p>



<p>GEOINT stakeholders envision an AI-ready workforce that’s collaborative, data-driven, and diverse in the next 10 years.</p>



<p>To realize this vision, the report suggests GEOINT leaders increase collaboration with universities on AI curriculum, and work to integrate AI skills into new and existing workforce training.</p>



<p>For more information view the full report.</p>



<p>MeriTalk’s Director of Research will host expert panelists from government, industry, and academia during a virtual panel on the research results on Wednesday, May 27. Register for the virtual event.</p>
<p>The post <a href="https://www.aiuniverse.xyz/new-research-maps-wide-gulf-between-geoint-ai-potential-readiness/">New Research Maps Wide Gulf Between GEOINT AI Potential, Readiness</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/new-research-maps-wide-gulf-between-geoint-ai-potential-readiness/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google’s AI maps fruit fly brains</title>
		<link>https://www.aiuniverse.xyz/googles-ai-maps-fruit-fly-brains/</link>
					<comments>https://www.aiuniverse.xyz/googles-ai-maps-fruit-fly-brains/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 06 Aug 2019 11:10:09 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[brains]]></category>
		<category><![CDATA[HHMI]]></category>
		<category><![CDATA[maps]]></category>
		<category><![CDATA[Research]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4297</guid>

					<description><![CDATA[<p>Source: venturebeat.com Google’s latest research dives deep into the brains of fruit flies — quite literally. In collaboration with the Howard Hughes Medical Institute (HHMI) Janelia Research <a class="read-more-link" href="https://www.aiuniverse.xyz/googles-ai-maps-fruit-fly-brains/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-ai-maps-fruit-fly-brains/">Google’s AI maps fruit fly brains</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: venturebeat.com</p>



<p>Google’s latest research dives deep into the brains of fruit flies — quite literally. In collaboration with the Howard Hughes Medical Institute (HHMI) Janelia Research Campus and Cambridge University, the tech giant today published the results of a study (“Automated Reconstruction of a Serial-Section EM Drosophila Brain with Flood-Filling Networks and Local Realignment“) that explores the automated reconstruction of an entire fly’s brain, neuron by neuron.</p>



<p>It’s a spiritual follow-up to a paper published in the journal&nbsp;<em>Cell</em>&nbsp;by researchers at Janelia Research Campus. The team involved in&nbsp;<em>that</em>&nbsp;study infused a fly brain’s cells and synapses with heavy metals to mark the outlines of each neuron and its connections. To generate images, they hit&nbsp;7,062 brain slices with a beam of electrons, which passed through everything except the metal-loaded parts.</p>



<p>The coauthors of this most recent paper expect that their work in connectomics — the production and study of connectomes, or comprehensive maps of connections within an organism’s nervous system — will accelerate investigations at HHMI and Cambridge University into learning, memory, and perception in the fly brain. In the spirit of open source, they’ve made the full results available for download and browsable online using Neuroglancer, an in-house interactive 3D interface.</p>



<p>Flies in the genus&nbsp;<em>Drosophila</em>&nbsp;weren’t an arbitrary target. As several of the paper’s coauthors note in an accompanying blog post, fly brains are relatively small (one hundred thousand neurons) compared to, say, frog brains (over 10 million neurons), mouse brains (100 million neurons), octopus brains (half a billion neurons), or human brains (100 billion neurons). That makes them easier to study “as a complete circuit,” they say.</p>



<p>Plotting out a fly brain required first sectioning it into thousands of ultra-thin 40-nanometer slices, which were imaged using a transmission electron microscope and aligned into a 3D image volume of the entire brain. Next, thousands of Cloud tensor processing units (TPUs) — AI accelerator chips custom-designed by Google — ran a special class of algorithm called flood-filling networks (FFNs) designed for instance segmentation of complex and large shapes, particularly volume data sets of tissue. Over time, the FFNs automatically traced each individual neuron in the fly brain.</p>
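<p>FFNs themselves are convolutional networks, but the core flood-filling idea — grow one segment outward from a seed voxel, admitting neighbors the model scores as foreground — can be sketched without any learning. The fixed probability grid, threshold, and 4-connectivity below are illustrative assumptions, not details of Google’s implementation:</p>

```python
from collections import deque

def flood_fill_segment(prob, seed, threshold=0.9):
    """Grow one segment from a seed pixel, admitting 4-connected
    neighbours whose foreground probability exceeds the threshold.
    In a real FFN the probability field is re-predicted by a CNN as
    the field of view moves; here it is a fixed 2D grid for
    illustration only."""
    h, w = len(prob), len(prob[0])
    segment = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in segment:
                if prob[ny][nx] >= threshold:
                    segment.add((ny, nx))
                    queue.append((ny, nx))
    return segment
```

<p>In the actual FFN, re-predicting the probability field each time the field of view moves is what lets the network trace a single neuron through ambiguous regions instead of bleeding into neighbours.</p>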



<p>Reconstruction didn’t go off without a hitch; the FFNs performed poorly when image content in consecutive sections wasn’t stable or when multiple consecutive slices were missing (due to challenges associated with the sectioning and imaging process). To mitigate dips in precision and accuracy, the team estimated the slice-to-slice consistency in the 3D brain image and locally stabilized the content while the FFNs highlighted each neuron. Additionally, they used an AI model dubbed Segmentation-Enhanced CycleGAN (SECGAN) — a type of generative adversarial network specialized for segmentation — to computationally fill in missing slices in the image volume. With the two new procedures in place, they found that FFNs were able to trace through locations with multiple missing slices “much more robustly.”</p>



<p>With the brain fully imaged, the team tackled the problem of visualization with the aforementioned Neuroglancer, which is available in open source and currently in use by collaborators at the Allen Institute for Brain Science, Harvard University, HHMI, Max Planck Institute, MIT, Princeton University, and elsewhere. It’s based on WebGL and supported in newer versions of Chrome and Firefox, and it exposes a four-pane view consisting of three orthogonal cross-sectional views as well as a view (with independent orientation) that displays 3D models for selected objects.</p>



<p>In addition to enabling the viewing of petabyte-scale 3D volumes, Neuroglancer supports features like arbitrary-axis cross-sectional reslicing, line-segment-based models, multi-resolution meshes, and the ability to develop custom analysis workflows via integration with Python. Moreover, it’s able to ingest data via HTTP in a range of formats including BOSS, DVID, Render, precomputed chunk and mesh fragments, single NIfTI files, Python in-memory volumes, and N5.</p>



<p>The paper’s coauthors note that their brain image isn’t perfect, because it still contains some errors and skips over the identification of synapses. But they expect that advances in segmentation methodology will yield further improvements in reconstruction, and they say that they’re working with Janelia Research Campus’ FlyEM team to create a “highly verified” and “exhaustive” fly brain connectome using images acquired with focused ion beam scanning electron microscopy technology.</p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-ai-maps-fruit-fly-brains/">Google’s AI maps fruit fly brains</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/googles-ai-maps-fruit-fly-brains/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Semi-Supervised Machine Learning Makes High-Resolution Maps Possible For Humanitarian Aid</title>
		<link>https://www.aiuniverse.xyz/semi-supervised-machine-learning-makes-high-resolution-maps-possible-for-humanitarian-aid/</link>
					<comments>https://www.aiuniverse.xyz/semi-supervised-machine-learning-makes-high-resolution-maps-possible-for-humanitarian-aid/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 10 Jun 2019 10:41:52 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Aid]]></category>
		<category><![CDATA[High-Resolution]]></category>
		<category><![CDATA[Humanitarian]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[maps]]></category>
		<category><![CDATA[Possible]]></category>
		<category><![CDATA[Semi-Supervised]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3692</guid>

					<description><![CDATA[<p>Source:- analyticsindiamag.com According to The Overseas Development Institute, a London-based research establishment, whose findings were released in April 2009 in the paper “Providing aid in insecure environments: 2009 Update”, the most <a class="read-more-link" href="https://www.aiuniverse.xyz/semi-supervised-machine-learning-makes-high-resolution-maps-possible-for-humanitarian-aid/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/semi-supervised-machine-learning-makes-high-resolution-maps-possible-for-humanitarian-aid/">Semi-Supervised Machine Learning Makes High-Resolution Maps Possible For Humanitarian Aid</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source:- analyticsindiamag.com</p>
<p>According to “Providing aid in insecure environments: 2009 Update”, a paper released in April 2009 by The Overseas Development Institute, a London-based research establishment, the most lethal year in the history of humanitarianism was 2008, in which 122 aid workers were murdered and 260 assaulted.</p>
<p>As much as the underprivileged and unfortunate parts of society need aid in human form, it is equally important to establish a certain sense of security for uninterrupted services.</p>
<p>This aid usually has to reach places of great geographical inconvenience. This can be mainly attributed to war zones, the refugee crisis or lack of bare minimum natural resources. Detailed maps are important to help aid-workers and organisations to plan their logistics and mobilize relief around the world.</p>
<p>Though the options that existing maps offer are decent, they don’t reveal much information at the ground level. For example, the population density of a location is important for aid workers planning infrastructure in advance, so as not to fall short of food or medicine after reaching the affected areas. Census data made available by local authorities in remote locations is rarely up to date.</p>
<p>Researchers at Facebook AI propose a weakly and semi-supervised machine learning model to build high-resolution maps for NGOs and other humanitarian organisations.</p>
<h3>Targeting Road And Building Detection</h3>
<p>In this paper, the team at Facebook focuses on mapping roads and buildings to help aid workers.</p>
<p>For building detection, they used a combination of weakly supervised and semi-supervised training techniques in conjunction with the freely available data in OpenStreetMap (OSM) to locate buildings in high-resolution satellite imagery.</p>
<p>The idea behind using this combination of weakly supervised learning techniques in conjunction with simple heuristics is to train a semantic segmentation model for road extraction on noisy and never pixel-perfect training data from OSM.</p>
<p>“Most available datasets for road segmentation are heavily biased towards particular regions,” wrote the team in their paper titled Building High Resolution Maps for Humanitarian Aid and Development with Weakly- and Semi-Supervised Learning.</p>
<p>To ensure unbiased and accurate road mapping, a threshold on the number of roads mapped in a particular area is used to find areas that are more completely mapped; this data is then used to train a weakly supervised road segmentation model.</p>
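<p>A minimal sketch of that filtering step, assuming tiles keyed by id with per-tile OSM road counts (the tile ids, the count metric, and the exact criterion are hypothetical simplifications of the paper’s approach):</p>

```python
def well_mapped_tiles(road_counts, threshold):
    """Keep only tiles whose mapped-road count meets the threshold,
    so the noisy OSM labels used for weak supervision come from
    areas that appear to be more completely mapped.
    `road_counts` maps a tile id to the number of OSM road
    segments inside that tile (illustrative)."""
    return {tile for tile, count in road_counts.items()
            if count >= threshold}
```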
<p><b>Data collection challenges:</b></p>
<ul>
<li>The correctness of the data and over-representation of developed world maps in the existing datasets</li>
<li>Ensuring the correspondence between OSM tags to the data; both temporally and spatially.</li>
<li>OSM-tagged features are precise but have extremely low recall.</li>
</ul>
<p><b>Creating the dataset:</b></p>
<p>The team started with a seed dataset of around 1 million labeled images; using weakly and semi-supervised techniques, they generated a dataset of more than 100 million labeled training images.</p>
<p>The above figure illustrates road extraction from satellite imagery in rural Mexico. Left: satellite imagery. Middle: THA/IND/IDN trained model. Right: Global OSM trained model.</p>
<p>The model trained on DeepGlobe data misses the road in the top left almost entirely and leaves several roads in the middle of dense trees whereas the globally trained model performs well.</p>
<p>For every 100 automated labeled images created in the dataset, one image is manually labeled.</p>
<p>To keep dataset generation simple, each edge of the road vector is converted to a 5-pixel-wide line. The model nevertheless learns to predict roads that match the more complex twists and turns of real roads.</p>
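<p>The label-generation step described above can be sketched as burning each road edge into a binary mask at a fixed pixel width. The geometry below is a straightforward distance-to-segment test, not Facebook’s actual rasterizer:</p>

```python
def rasterize_road(edge, shape, width=5):
    """Burn one road-vector edge into a binary mask as a line of
    the given pixel width. `edge` is ((x0, y0), (x1, y1)) in pixel
    coordinates; `shape` is (height, width) of the mask.
    A pixel is set when its centre lies within width/2 of the
    segment (illustrative sketch)."""
    (x0, y0), (x1, y1) = edge
    h, w = shape
    half = width / 2.0
    dx, dy = x1 - x0, y1 - y0
    seg_len2 = dx * dx + dy * dy or 1.0  # avoid /0 on degenerate edges
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # project the pixel centre onto the segment, clamped to it
            t = max(0.0, min(1.0, ((x - x0) * dx + (y - y0) * dy) / seg_len2))
            px, py = x0 + t * dx, y0 + t * dy
            if (x - px) ** 2 + (y - py) ** 2 <= half * half:
                mask[y][x] = 1
    return mask
```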
<p>For mapping buildings, a semi-supervised bootstrapping approach is implemented to keep the error rate of non-building labels below 1 percent. For every labeled house pulled from a given region, an equal number of non-houses is randomly sampled, creating a dataset with a 50-50 building/non-building split.</p>
<p>Accounting for non-buildings is important, as regions with no buildings in the existing dataset might simply never have been mapped in the first place.</p>
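<p>The 50-50 sampling described above might look like the following sketch (the patch identifiers and labels are hypothetical):</p>

```python
import random

def balanced_building_dataset(houses, candidate_negatives, seed=0):
    """For every labelled house patch, sample one non-house patch
    at random, yielding the 50-50 building/non-building split the
    article describes. Returns (patch, label) pairs with label 1
    for buildings and 0 for non-buildings (illustrative sketch)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    negatives = rng.sample(candidate_negatives, len(houses))
    return [(p, 1) for p in houses] + [(p, 0) for p in negatives]
```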
<p>This work aims to tune the existing datasets and models that work well at the regional level but falter at the global scale. By paying more attention to road segmentation and building detection, the team at Facebook demonstrates that this model outperforms others trained on existing datasets.</p>
<p><strong>Conclusion</strong></p>
<p>These maps are already having a real-world impact. For example, the population density map produced for Malawi enabled the Red Cross to quickly and remotely map around 1 million houses and 120,000 km of roads for a measles and rubella immunization campaign.</p>
<p>This method of generating road vectors also came in handy last year during the Kerala floods when the existing mapping methods failed to aid the humanitarian workers effectively.</p>
<p>The datasets resulting from this work will be released as an update to the HRSL. The release will be done region by region, with interdisciplinary experts involved to ensure that the potential for misuse and abuse of this data is minimized and that the accuracy of the resulting datasets meets the standards for release.</p>
<p>The post <a href="https://www.aiuniverse.xyz/semi-supervised-machine-learning-makes-high-resolution-maps-possible-for-humanitarian-aid/">Semi-Supervised Machine Learning Makes High-Resolution Maps Possible For Humanitarian Aid</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/semi-supervised-machine-learning-makes-high-resolution-maps-possible-for-humanitarian-aid/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How big data can help residents find transport, jobs and homes that work for them</title>
		<link>https://www.aiuniverse.xyz/how-big-data-can-help-residents-find-transport-jobs-and-homes-that-work-for-them/</link>
					<comments>https://www.aiuniverse.xyz/how-big-data-can-help-residents-find-transport-jobs-and-homes-that-work-for-them/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 04 Jun 2019 04:45:44 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[housing market]]></category>
		<category><![CDATA[locations]]></category>
		<category><![CDATA[maps]]></category>
		<category><![CDATA[network]]></category>
		<category><![CDATA[property market]]></category>
		<category><![CDATA[transport]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3550</guid>

					<description><![CDATA[<p>Source:- theconversation.com Thanks to the media, more people now know that you have to protect your personal data from being misused for commercial gains. Many of you are <a class="read-more-link" href="https://www.aiuniverse.xyz/how-big-data-can-help-residents-find-transport-jobs-and-homes-that-work-for-them/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-big-data-can-help-residents-find-transport-jobs-and-homes-that-work-for-them/">How big data can help residents find transport, jobs and homes that work for them</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source:- theconversation.com</p>
<p>Thanks to the media, more people now know that you have to protect your personal data from being misused for commercial gain. Many of you are probably more conscious of what you share on Facebook or Instagram than you were two or three years ago. But, when used appropriately, data can be a great resource that informs urban management and planning.</p>
<p>For example, the RailSmart Platform, a Smart Cities 2019 award winner announced last Thursday, integrates numerous sets of data from the Australian Bureau of Statistics with other data sets, such as the public transport ticketing system, to work out how the city of Perth functions and how people move around.</p>
<p>Typically, people want to know what areas they can afford that best suit their work and travel requirements. You can use this platform to find out about house prices by location, travel times, locations of strategic jobs and how to get to them.</p>
<p>When you look up the locations of businesses you can see which train stations or major bus stops provide easy access to jobs. If you know the types of jobs, then you will also know whether those jobs are strategic jobs – jobs that create and attract other jobs. If you also look at real estate data, you can then find out the property values and rental prices of nearby properties.</p>
<p>To create this platform, we analysed the big data for Perth and visually represented what we found. To make this information accessible, we created a user-friendly digital mapping interface to display the modelled data.</p>
<p>So what sort of data are we talking about?</p>
<p><strong>Property values</strong></p>
<p>House prices are one of the key economic indicators that people often pay attention to. Average house prices in Australian capital cities are easy to find, but what about more location-specific prices? You may be renting at the moment and thinking of moving elsewhere, or you may be a prospective property buyer.</p>
<p>Using real estate data, we have mapped the values of properties in different locations. For example, we can show you the number of different types of properties sold (e.g. house, unit, land and other types) and the average sale price of those properties. We can also show the rental values of different locations.</p>
<h2>Access to where people live</h2>
<p>Ever wondered how good (or bad) your local road network is? How about your local public transport? The app can help you with this too.</p>
<p>Using the road network, the public transport network and the timetable data, we have mapped how accessible train stations and major bus stops are to houses, units and apartments. Based on prior research, the tool maps and models real-time analysis of accessibility to people, houses or jobs.</p>
<p>For example, we can show the locations you can get to from a specific train station using your own car or public transport on a map. Our data show that 66% of all dwellings in Perth can be accessed within 60 minutes using a private vehicle from Perth station.</p>
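<p>Accessibility figures like this are typically computed by running a shortest-path search over a travel-time network and counting how many dwellings fall within the cutoff. A toy sketch of that measure, with hypothetical node names standing in for stations and dwelling locations (this is not the RailSmart implementation):</p>

```python
import heapq

def share_reachable(graph, dwellings, origin, cutoff):
    """Fraction of dwellings reachable from `origin` within `cutoff`
    minutes, via Dijkstra over a travel-time graph.
    `graph` maps node -> list of (neighbour, travel_minutes)."""
    best = {origin: 0.0}
    heap = [(0.0, origin)]
    while heap:
        t, node = heapq.heappop(heap)
        if t > best.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, minutes in graph.get(node, []):
            nt = t + minutes
            if nt <= cutoff and nt < best.get(nbr, float("inf")):
                best[nbr] = nt
                heapq.heappush(heap, (nt, nbr))
    reached = sum(1 for d in dwellings if d in best)
    return reached / len(dwellings)
```

<p>Running this over every station with the real road and timetable network is what produces station-level accessibility maps of the kind the platform displays.</p>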
<p><strong>Locations of ‘strategic’ jobs</strong></p>
<p>Strategic jobs include jobs in IT and in academia. For planning purposes, you want to have more strategic jobs that will attract and create more employment.</p>
<p>We can show where strategic jobs are located on a map. In other words, we can show you the locations where you can expect to see more jobs concentrated and created.</p>
<p>Looking at two maps, access to jobs and strategic job locations, we can see that only a limited number of strategic jobs can be accessed from Joondalup station.</p>
<h2>The power of data</h2>
<p>Some of you may be wondering how and where we got all these data. Is the dystopian world created by George Orwell in his fictional work Nineteen Eighty-Four coming true, with “Big Brother” watching your every move?</p>
<p>Fear not. The RailSmart analysis does not use any personalised data and all the data sets we used can be freely accessed by anyone.</p>
<p>The platform relies on aggregated data. This means it uses groupings of data or user types – for example, students, or geographic areas such as suburbs. It is impossible to tell what an individual is doing or even the sale price of individual properties; the platform represents trends as patterns of users and areas.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-big-data-can-help-residents-find-transport-jobs-and-homes-that-work-for-them/">How big data can help residents find transport, jobs and homes that work for them</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-big-data-can-help-residents-find-transport-jobs-and-homes-that-work-for-them/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
