Source – deccanherald.com
It was 1956 when attendees at the Dartmouth Conference created the field of Artificial Intelligence (AI), opening the storm gates for creativity and imagination in Hollywood and across the scientific world. Everyone wanted to know whether AI would be the key to our civilisation’s future or the key that opens Pandora’s box, enabling robots to wreak havoc upon mankind. Since then, AI has exploded into our lives and language as graphics processing units (GPUs) have made parallel processing faster, cheaper and more powerful. Processing power, combined with enormous amounts of data and nearly infinite storage of information, has launched us into previously unimagined applications of AI.
AI has not evolved into the broad, general intelligence originally imagined in 1956, when scientist and citizen alike imagined AI would enable machines to capture “every aspect of learning or any other feature of intelligence…” and ultimately outthink humans in every aspect of our lives. The concept was unworkable because of the enormous amount of computing power needed to parse, store, identify or tag information, and then retrieve it. Instead, AI has evolved much more narrowly through two key subfields, machine learning and deep learning, which have given us the breakthroughs we enjoy today.
As scientists began addressing these limitations in technology, they slowly developed the pathway for machine learning through the 1980s and into the early 2000s, enabling machines to take data and “learn” for themselves: using algorithms to parse the data, learn from it, and then make a prediction or take an action in the world. Early machine learning approaches included decision trees, inductive logic programming, clustering, reinforcement learning and Bayesian networks. None of these approaches achieved the envisioned AI goals. One machine learning application, ‘computer vision’, did, however, develop very useful practical potential. By hand-coding classifiers, such as edge detection filters, computer programmes could identify objects such as road signs from a sign’s shape, colour or lettering. From these recognition classifiers, computer scientists could then develop algorithms categorising particular signs, enabling the machines to “learn” to differentiate between them and “think” about taking a particular action. This approach worked well under certain ideal conditions, but until recently it was too inflexible and error-prone to be of much practical help in foggy, rainy or snowy conditions.
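To make the idea of a hand-coded classifier concrete, here is a minimal sketch of the kind of edge-detection filter described above, using the classic Sobel operator. The function name, the toy image and the parameters are illustrative only, not taken from any particular system.

```python
# Minimal sketch of a hand-coded edge-detection filter (Sobel operator),
# the kind of feature extractor early computer vision systems relied on.
# All names and values here are illustrative.

def sobel_magnitude(image):
    """Approximate gradient magnitude of a 2-D grayscale image,
    given as a list of lists of pixel intensities (0-255)."""
    gx_kernel = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient
    gy_kernel = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient
    h, w = len(image), len(image[0])
    edges = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):          # skip the border pixels
        for x in range(1, w - 1):
            gx = sum(gx_kernel[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_kernel[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            edges[y][x] = (gx * gx + gy * gy) ** 0.5
    return edges

# A sharp black/white vertical boundary produces a strong response
# along the boundary and none in the flat regions.
image = [[0, 0, 0, 255, 255, 255]] * 6
edges = sobel_magnitude(image)
```

A downstream classifier would then inspect patterns in `edges` (for example, the outline of a road sign), which is why such pipelines were brittle: the hand-tuned filters break down when fog or rain blurs the edges.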
It was not until around 2010, when ‘deep learning’ came to the fore, that narrow AI, in which machines are skilled at one particular task, really began to take off. Deep learning builds on machine learning techniques, solving problems with neural networks that loosely simulate human decision-making. Only with the advent of massive “big data” sets to train the machines, tuning the huge number of parameters used by a learning algorithm, have we been able to make advances in this field. Previously, programmers provided the set of rules by which an algorithm operated; now the machine infers those rules from data. By quickly sorting through huge amounts of data to recognise particular characteristics, deep learning has enabled advances in text-based search, fraud detection, spam detection and handwriting recognition, as well as image search, speech recognition and street view detection.
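The shift from hand-written rules to learned ones can be sketched with the simplest possible trained unit: a single artificial neuron that learns the logical AND function from labelled examples instead of being programmed with an AND rule. The function names, learning rate and epoch count below are all illustrative assumptions, and a real deep network stacks many such units in layers.

```python
# Toy illustration of learning from data instead of hand-coded rules:
# a single artificial neuron trained with the perceptron learning rule.
# All names and hyperparameters here are illustrative.

import random

def train_neuron(samples, epochs=50, lr=0.1, seed=0):
    """Fit weights and a bias to (inputs, label) pairs."""
    rng = random.Random(seed)                      # seeded for repeatability
    w = [rng.uniform(-0.5, 0.5) for _ in range(2)]
    b = rng.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for (x1, x2), label in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - out                      # nudge weights toward the label
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The machine is never told the AND rule; it infers it from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
```

After training, `predict(w, b, 1, 1)` returns 1 and the other three inputs return 0: the behaviour was learned from the data, not written by the programmer.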
Because of these advances, machines can be trained in image recognition, in some scenarios providing better recognition than humans. These applications have been used to identify indicators of cancer in blood and in tumours on MRI scans. They were also demonstrated by Google DeepMind’s AlphaGo, which beat world champion Lee Sedol in four of five games of the ancient Chinese board game Go. Advances in synthesisable AI by researchers at Louisiana State University, in collaboration with Florida International University, are enabling other applications in driverless cars, pharmaceutical preparation, preventative healthcare and a variety of other programmes.
Andrew Ng, an AI pioneer at Google and an adjunct professor at Stanford University, recently said that “AI is the new electricity, with the capacity to transform every major industry”, and with it our lives.
Digital forensics and the fight against crime will be no exception to the rule, as criminals incorporate more advanced cyber methods to commit crime and law enforcement agencies push to gain a counter-advantage. A study published in the Digital Evidence & Electronic Signature Law Review asserts: “Digital forensics is an area that is becoming increasingly important in computing and often requires the intelligent analysis of large amounts of complex data…AI is an ideal approach to deal with many of the problems that currently exist in digital forensics.”
Deep learning is finding its way into the development of sophisticated systems for DNA sequence matching, innovative new methods for cybercrime detection using mobile devices, and for assisting with identity recognition, digital and physical signature recognition, and the detection of terrorists’ cyber operations. Researchers at Florida International University’s School of Computing and Information Sciences are collaborating with a team at the University of Florida to develop new hardware and tools to assist forensics experts in the field by providing advanced technology for data collection, identity verification and evidence processing of biometric data.
This is an exciting new area that will increasingly touch our lives over the next few years. The revolution has begun!