It’s Beautiful When Real AI Delivers Real Innovation


Interest in artificial intelligence (AI) technology is growing by the day. However, misuse of the term is rising along with it, so much so that it has been cited as one of the main impediments to real development and adoption within the field.

As awareness of AI’s many benefits spreads throughout the business community, marketers seem increasingly eager to bring anything even remotely related to the technology to the forefront of their brand. The term ‘AI washing’ now describes the tendency to market products as AI-enabled when no AI technology is actually implemented in the product or service.

Still, the eagerness to jump on the AI bandwagon is understandable, as much of the hype the technology generates is well deserved. Thanks to AI and machine learning, sales teams can address customer needs more directly and efficiently than ever, human resources departments can vet candidates from around the world and attract the best global talent, and countless other applications are helping businesses improve in ways that were previously unimaginable.

But there needs to be clarity in defining AI if its advantages are to be broadly realized in meaningful ways. To start, although machine learning is a subset of artificial intelligence, there are important distinctions to be made between the two.

AI Explained

Artificial intelligence is a term that describes the ability to simulate human intelligence via machines. Ideally, an AI system is able to reason its way through problems and to correct its own errors where applicable. The concept of AI dates back centuries, but its modern roots can be found around the 1950s, particularly in Alan Turing’s 1950 paper Computing Machinery and Intelligence, which introduced what is now known as the Turing Test. Interest in AI fluctuated dramatically in the decades that followed, with a number of ‘AI winters’ stalling its development. Nevertheless, with the rise of big data and advanced computing algorithms, some researchers have recently declared the start of AI’s “eternal spring.”

Many prominent companies and organizations are already well ahead of the game in terms of research, innovation and acquisitions. IBM has arguably led the industry since the 1950s, particularly with its Watson program, which has served as one of AI’s most widely publicized advancements. Facebook and Microsoft have recently collaborated to build the Open Neural Network Exchange (ONNX) AI coding standard. Google has acquired over a dozen AI startups in the past half decade, in addition to developing its TensorFlow framework, which was released under an open source license and has become something of an industry standard.

Machine Learning

Machine learning is a specific application of AI that enables computers to process information with little to no human instruction. The process involves feeding large sets of data into computers, which in turn detect patterns or otherwise arrive at conclusions that solve problems. Like AI, the discipline of machine learning dates back roughly to the mid-twentieth century and has seen a dramatic surge in popularity with the rise of big data, supercomputing power and modern algorithms.
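To make the idea concrete, here is a minimal, purely illustrative sketch in Python using scikit-learn (the data and numbers are made up for the example): the model is given example inputs and outputs and infers the underlying pattern on its own, rather than having a rule hand-coded.

```python
# Minimal illustration: a model infers a pattern from data alone.
# Assumes scikit-learn is installed (pip install scikit-learn).
import numpy as np
from sklearn.linear_model import LinearRegression

# Example data: inputs X and observed outputs y (y is roughly 3x + 2).
X = np.array([[1], [2], [3], [4], [5]])
y = np.array([5.1, 7.9, 11.2, 13.8, 17.1])

model = LinearRegression()
model.fit(X, y)  # the "learning" step: no rule is written by hand

print(model.coef_, model.intercept_)  # ~3 and ~2, recovered from the data
print(model.predict([[10]]))          # the pattern applied to unseen input
```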

Along with the likes of Google and IBM, companies such as Apple and Twitter have implemented machine learning into their products in notable ways. Siri, Apple’s voice assistant, has been greatly augmented by machine learning, and Apple applies the same technology to facial recognition, translation features and other smart services. Twitter has famously (or infamously) employed machine learning in attempts to curate users’ timeline feeds.

Types of Machine Learning

Machine learning can be broken down into two major categories, supervised and unsupervised, plus a third, semi-supervised, which more or less combines the other two for the sake of efficiency or cost effectiveness. Supervised machine learning relies on human-labeled training data to teach a model the correct outputs. Unsupervised learning, by contrast, has no predetermined answers, and instead detects patterns within large and complex data sets. The semi-supervised approach is used when the general problem is known but much of the data is unlabeled, so structure must still be discovered before proper conclusions can be reached.

Both supervised and semi-supervised machine learning commonly use a process known as ‘classification’, in which labels are attributed to objects with the aid of human-provided examples. Unsupervised learning instead generates data ‘clusters’ to surface hidden patterns from which outcomes can be inferred. Typical applications of classification include image and speech recognition as well as credit scoring, while clustering may be used for tasks like gene sequence analysis and market research.
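The following sketch contrasts the two approaches on scikit-learn’s bundled iris dataset; it is an illustration of the general idea rather than a recipe from any particular vendor. The classifier learns from provided labels, while the clustering algorithm is given no labels at all.

```python
# Illustrative contrast between classification (supervised) and
# clustering (unsupervised) using scikit-learn's bundled iris data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Classification: the model learns from human-provided labels (y).
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier().fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Clustering: no labels are given; the algorithm groups similar samples.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster assignments for first 10 samples:", clusters[:10])
```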

AI and the Cloud

Whatever method is used to inform AI and machine learning models, the outcome will only be as good as the data fed into the system. Typically, the more complex the problem, the more data is needed. The preparation of data into workable sets remains a key challenge for businesses that want to adopt AI and machine learning into their operations.
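As a rough sketch of what that preparation typically involves, the example below uses pandas and scikit-learn on a hypothetical CSV of customer records; the file name and column names are illustrative only, not taken from any real system.

```python
# A minimal sketch of typical data preparation, assuming a hypothetical
# CSV of customer records; column names here are illustrative only.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")                  # hypothetical input file

df = df.drop_duplicates()                          # remove duplicate records
df["age"] = df["age"].fillna(df["age"].median())   # impute missing values
df = pd.get_dummies(df, columns=["region"])        # encode a categorical column

# Separate the features from the target and hold out data for evaluation.
X = df.drop(columns=["churned"])
y = df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
```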

Cloud technology is playing a key role in simplifying this problem. The XaaS (‘anything as a service’) model has been extended into AIaaS (‘artificial intelligence as a service’) and MLaaS (‘machine learning as a service’), giving businesses cloud-based routes to cost- and time-efficient AI and ML adoption. Amazon Machine Learning, Microsoft Azure and Google Cloud AI are among the leaders in cloud-enabled AI solutions for businesses and end users.
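As a rough illustration of the MLaaS pattern, the sketch below calls Amazon Comprehend, one of AWS’s managed machine learning services, through the boto3 SDK to run sentiment analysis. It assumes AWS credentials and a region are already configured; the point is that the model lives in the cloud and the business only sends data and reads back predictions.

```python
# Rough illustration of the MLaaS pattern: the trained model is hosted in
# the cloud, and the caller simply sends data and receives predictions.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

response = comprehend.detect_sentiment(
    Text="The new release is fast and the support team was helpful.",
    LanguageCode="en",
)

print(response["Sentiment"])        # e.g. "POSITIVE"
print(response["SentimentScore"])   # confidence scores per sentiment class
```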

The Future of AI

AI still has its obvious limitations. An AI system cannot yet mimic human qualities like sympathy, empathy or morality in any reliable way. Although improvements are being made, creativity remains a challenge for the technology on many fronts. And the ‘black box’ nature of neural networks presents researchers, regulators and investors in the field with all sorts of unprecedented problems, given how little is understood about how these systems arrive at their outputs.

Nevertheless, AI persists as arguably the foremost technological development of the 21st century, with some even predicting a 50% chance of artificial general intelligence (AGI) being reached within the next decade. And although experts struggle to reach consensus on the pace of AI innovation, nearly all agree that it will disrupt virtually every industry in existence.