“The thing that’s going to make artificial intelligence so powerful is its ability to learn, and the way AI learns is to look at human culture.” Dan Brown

I think we can’t deny that Artificial Intelligence (AI) has revolutionized the world. Everyone agrees that AI is a powerful technology capable of changing the world, and from countering terrorism to space exploration and even creating sophisticated art, its potential is becoming apparent.

Since 2000, we have seen a sixfold increase in yearly venture capital (VC) investment in US-based artificial intelligence startups.

In 2017, the international AI market was worth about $4.8 billion, and it is projected to soar almost twentyfold to $89.8 billion by 2025. Hence, it makes a lot of sense that more people are now investing in AI, as the following quote by Joe Kaeser suggests.

“Artificial intelligence is here and being rapidly commercialized, with new applications being created. This will change how we all do business.”

AI is powerful because of its amazing learning ability.

The term “artificial intelligence” gained popularity in the US in 1956 at a conference at Dartmouth College. That conference brought together leading researchers on an extensive range of topics, from learning machines to language simulation, which was a great step in my opinion.

With Microsoft Azure, Amazon, and Google launching advanced and impressive cloud machine learning platforms, artificial intelligence and machine learning have gained a lot of prominence in the past few years. Surprisingly, most of us have witnessed machine learning without really knowing it. A few of the most popular uses of machine learning are spam detection by various email providers, as well as ‘face’ or ‘image’ tagging by Facebook.

A majority of industries that work with huge volumes of data have gradually recognized the importance of AI. By gleaning valuable insights from the data, usually in real time, companies are able to gain competitive advantage and work more effectively and efficiently.

The wide-scale adoption of autonomous vehicles represents a very efficient and promising future for the transportation industry. Several reports suggest that self-driving vehicles could lower traffic-related deaths by up to 90 percent.

Although we are probably some years away from mass production, you should know that the adoption of these vehicles, at this point, is almost inevitable. That being said, the time frame for the adoption of autonomous vehicles mainly depends on regulatory and legal actions, which usually lie outside the control of the tech world.

Centuries ago, people did “forecasting” with crystal balls and tea leaves. Nowadays, we prefer more “intelligence” and smarter solutions in our business.

Forecasting has come a long way since those primitive days. We have transitioned from gut instinct to statistical forecasting using MS Excel and demand modeling. Still, there is a lot of room for improvement.

However, despite the progress, AI still faces considerable hurdles, challenges, and risks that will have to be overcome before it can reach its full potential. I think that one such risk is privacy.

Privacy, in the information age, hinges considerably on our ability to control how we store, modify, and exchange data with various parties. We can define privacy as having the power to seclude ourselves, or information about ourselves, and thereby restrict the influence other people could have on our behavior. Traditionally, privacy is a prerequisite for exercising basic human rights, like freedom of association, freedom of expression, and freedom of choice. In this digital age, we cannot overemphasize the importance of these rights.

When it comes to threats to our privacy, one discipline of artificial intelligence gives people concerned with data privacy particular pause: machine learning. In my view, the ever-increasing power of AI is blurring the boundary between what is private and what is not, and this is leading to data breaches and security issues; however, with innovation we can tackle this problem.

There are obvious issues here that I think we need to address. Keep in mind that a system is only as good as the data it learns from. Take, for example, a system trained to predict which patients with a disease like pneumonia had a comparatively higher risk of death, so that they could be admitted to hospital first. The system inadvertently classified asthma patients as being at lower risk.

This was because, in practice, individuals who have pneumonia as well as a history of asthma usually go straight to the ICU and hence receive the kind of treatment that significantly lowers their risk of dying. The machine learning algorithms took this to imply that asthma plus pneumonia meant a lower risk of death. In my opinion, the results would have been different with a little human intervention.
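
To see how this trap arises, here is a minimal sketch in Python using synthetic data (not the original study’s data): a confounder the model never sees, ICU admission, makes asthma look protective to a simple classifier.

```python
# Minimal sketch: a model learns a spurious "asthma lowers risk" signal
# because a hidden confounder (ICU admission) is missing from its data.
# All numbers here are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

asthma = rng.binomial(1, 0.2, n)  # 20% of patients have asthma
# Confounder: asthma patients are routinely sent straight to the ICU.
icu = np.where(asthma == 1, rng.binomial(1, 0.9, n), rng.binomial(1, 0.2, n))
# Ground truth: asthma raises risk, but ICU care sharply lowers it.
p_death = np.clip(0.20 + 0.10 * asthma - 0.22 * icu, 0.01, 0.99)
death = rng.binomial(1, p_death)

# Train only on what the triage system records; ICU status is unrecorded.
model = LogisticRegression().fit(asthma.reshape(-1, 1), death)
print("asthma coefficient:", model.coef_[0][0])  # negative: "lower risk"
```

Simply leaving the confounder out of the training data is enough to invert the learned relationship, which is exactly the kind of failure the pneumonia study exposed.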

Artificial intelligence performs computations far faster than human analysts can. Moreover, this speed can be increased almost arbitrarily by adding more high-tech hardware.

China is currently developing a quite scary Orwellian surveillance state with AI, in particular facial recognition. In my view, this might also impact the West if we’re not careful. Moreover, AI is inherently adept at utilizing huge data sets for quick analysis, and is perhaps the only way to process big data in a reasonable amount of time.

Last, but not least, AI is capable of performing designated tasks without any supervision, which considerably improves analysis efficiency. All these characteristics enable it to compromise privacy in a number of different ways.

Artificial intelligence can help identify, monitor and track people across multiple devices, whether they’re at home, at work, or at any public location, which is a little scary if you ask me.

This implies that even if your personal and confidential data is anonymized when it becomes part of a larger data set, AI could de-anonymize it on the basis of inferences drawn from other devices. Unfortunately, this blurs the line between personal and non-personal data, a distinction that current legislation depends on.
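
As a toy illustration (entirely made-up data and hypothetical datasets), here is how an ‘anonymized’ record can be re-identified simply by joining it against a public dataset on shared quasi-identifiers; AI-driven inference only makes this kind of linking easier and possible at much larger scale.

```python
# Minimal sketch of re-identification: link an "anonymized" dataset to a
# public one via shared quasi-identifiers (zip code, birth year, sex).
import pandas as pd

# "Anonymized" health records: names removed, quasi-identifiers kept.
health = pd.DataFrame({
    "zip": ["94107", "94107", "10001"],
    "birth_year": [1980, 1992, 1975],
    "sex": ["F", "M", "F"],
    "diagnosis": ["asthma", "diabetes", "pneumonia"],
})

# Public voter roll: names attached to the same quasi-identifiers.
voters = pd.DataFrame({
    "name": ["Alice", "Bob", "Carol"],
    "zip": ["94107", "94107", "10001"],
    "birth_year": [1980, 1992, 1975],
    "sex": ["F", "M", "F"],
})

# A simple join re-attaches identities to the "anonymous" records.
reidentified = health.merge(voters, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Joins like this one have been used to re-identify supposedly anonymous records; machine learning extends the same idea to fuzzier matches across devices and data sets.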

Facial and voice recognition is another area of concern, as far as I am concerned. These are two popular identification methods that AI is becoming highly adept at executing. The incredible speed at which facial recognition has improved comes down mainly to the rapid development of a kind of machine learning called deep learning.

Note that deep learning uses large tangles of computations, almost analogous to the complex wiring in a brain, to recognize patterns in data. It can now perform pattern recognition with jaw-dropping accuracy. This may let you unlock your smartphone with a smile, but keep in mind that it also hands big corporations and governments a powerful new surveillance tool, which, in my opinion, can be misused.
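
At its core, most modern face recognition reduces to comparing embedding vectors produced by a deep network. The sketch below shows only that comparison step; the embed() function is a stand-in, not a real model, and the threshold value is arbitrary.

```python
# Minimal sketch of face verification by embedding comparison.
# embed() is a placeholder for a trained deep network; here it just
# derives a deterministic unit vector from the image contents.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Stand-in for a deep network mapping a face image to a vector."""
    rng = np.random.default_rng(int(image.sum()) % 2**32)
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

def same_person(img_a, img_b, threshold: float = 0.6) -> bool:
    # Cosine similarity of unit-norm embeddings; above threshold => match.
    return float(embed(img_a) @ embed(img_b)) > threshold

enrolled = np.ones((112, 112))  # toy "enrolled" face image
probe = np.ones((112, 112))     # toy probe image
print(same_person(enrolled, probe))  # identical toy inputs => True
```

A real system swaps embed() for a trained convolutional network, and the threshold trades off false accepts against false rejects.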

Picture this. You have been abducted by criminals who want to hack into your iPhone X to get some juicy information you have stored there. If you have password-protected your iPhone, they may need to work harder than usual to get into it; however, with facial recognition all they have to do is hold your iPhone X in front of your face.

These advanced methods have the potential to considerably compromise our anonymity in the public sphere. Law enforcement agencies, for example, can use voice recognition and facial recognition to find individuals without reasonable suspicion or probable cause, thus circumventing strict legal procedures that they would otherwise need to uphold.

One report also warns about the inappropriate use of emotion tracking in many voice detection and face scanning systems. It is worth mentioning that tracking human emotion this way is largely unproven, yet it is in use in various potentially discriminatory ways, such as tracking the attention of students, which I think is ethically and morally inappropriate.

Data manipulation, like making small changes to the pixels in an image, can alter a system’s decisions. Artificial intelligence experts have known about this danger for some time, but Gartner Inc. and Deloitte LLP are warning that this threat is likely to increase, and I agree with their views.
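
To make this concrete, here is a minimal sketch of the idea behind adversarial perturbations, using a toy hand-rolled linear classifier rather than a real vision model: every pixel is nudged by a small, targeted amount in the direction that flips the prediction.

```python
# Minimal sketch of adversarial data manipulation on a toy linear
# "image classifier" (random weights, not a real model): a small,
# targeted nudge to every pixel flips the prediction.
import numpy as np

rng = np.random.default_rng(42)
w = rng.standard_normal(64)  # toy classifier weights
x = rng.standard_normal(64)  # toy 8x8 "image", flattened

def predict(image):
    return 1 if image @ w > 0 else 0

score = x @ w
# For a linear model the gradient of the score w.r.t. the input is w,
# so step each pixel against sign(w), just far enough to cross zero.
epsilon = 1.1 * abs(score) / np.abs(w).sum()
x_adv = x - np.sign(score) * epsilon * np.sign(w)

print("original prediction:   ", predict(x))
print("adversarial prediction:", predict(x_adv))
print("max pixel change:      ", np.abs(x_adv - x).max())  # == epsilon
```

A real attack works the same way against a deep network, using its gradients instead of w; the per-pixel changes can remain imperceptible to humans while completely changing the model’s output.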

A number of consumer products, from computer applications to smart home appliances, have features that make them vulnerable to data exploitation and manipulation by AI. And to make matters worse, a majority of people are unaware of how much data their devices and software generate, process, exchange, or share. I think you will agree with that.

The potential for data exploitation will only increase as we become more dependent on digital technology in our daily lives. Businesses and governments need to take concrete steps to reduce the risks posed by hackers and cybercriminals who can manipulate machine learning data.

The application of AI and big data, where advanced algorithms mine huge swathes of information in order to make predictions about future behavior, is now increasingly evident in both policing and criminal justice. And it’s quite easy to understand why.

The possibility that scarce police and law enforcement resources could be targeted effectively at the people most likely to commit crimes, or that decisions, such as granting bail, could be made more reliably so that only the riskiest individuals are incarcerated before their trial, is highly attractive. However, I think there are certain risks involved as well.

One relevant example is a 2016 ProPublica investigation, which revealed that the COMPAS risk-assessment software was biased against black offenders.

It is worth pointing out that AI is not limited to information gathering tasks. It can also use personal data as input for other purposes, such as sorting, classifying, scoring, evaluating, and ranking individuals. It goes without saying that this is usually done without any express consent on the part of the individuals being categorized, which is unethical in my view. In addition, these people usually have no ability to challenge or change the results.

China’s social credit system is a prime example of how this data could be used to restrict access to things such as housing, credit, employment, or social services. Many political, social, and criminal justice inequalities are likely to arise as a result, and they should force us to question the potential and risks of predictive policing.

Conclusion

As AI is rolled out to assess everything from your suitability for a job you’re applying for to your credit rating and a criminal’s chance of reoffending, it is evident, at least in my opinion, that the risk of it getting things wrong, without us even knowing, grows. However, we can manage these risks.

Artificial intelligence will not see a decline any time soon. Its growth will continue in 2019 and beyond, and the focus will not just be on new applications and technologies in the industry, but also on the way it intersects with society.

As so much of the raw data that humans feed AI systems is imperfect, we shouldn’t expect perfect results and answers all the time. Recognizing this fact is the first and most important step in managing the risks associated with AI.

Raj Raghavan, Founder, Credio Inc