AN INTRODUCTION TO AI AND COMPLIANCE WITH VALUER’S JOSE BELO

From our homes to the workplace, AI is permeating more and more of our daily lives. According to Gartner, AI software will become a $62 billion industry this year, growing at around 33.2% annually until 2027.
With AI dependent on data, and regulators still working out how the GDPR applies to it, it's little surprise that this disruptive technology has become one of the most hotly debated topics in the compliance sector.
On one side, you have those who argue that AI poses a serious threat to the values and purpose of the GDPR; on the other, you have AI advocates who see its potential for achieving a level of compliance currently beyond human ability.
To weigh in on the issue, we asked AI specialist and Complyon customer Jose Belo to share his experience of working with AI and compliance, along with his view on some of the possibilities, limitations and misconceptions of machine-learning technologies.
First up, meet Jose…
Specialising in privacy and data protection, Jose built a deep knowledge of internal compliance after moving from a legal background to lead hands-on privacy programmes for financial organisations in Luxembourg and London.
Now, as Head of Privacy at AI platform Valuer.ai, Jose works with artificial intelligence to understand what it does and doesn’t do while building on his understanding of how to harness the power of technology to best protect data.
Here, he gives us an insight into his current role and explores some of the most debated elements of AI and compliance.
Valuer.ai is described as “a digital brain, working with the heart of your business”. What sorts of AI services do you offer clients?
Let’s make it as simple as possible. Valuer is a company that helps you find other companies anywhere in the world that use technology to answer the challenges your business currently faces.
What is out there when it comes to sustainability or cybersecurity that makes sense for us? Has a company found a different approach to transportation? Or energy? How can we improve costs in our supply chain, given this bottleneck we cannot seem to fix?
You type in what you’re looking for, and by using AI, clustering and natural language processing (NLP), the Valuer platform matches you with businesses that provide answers to your specific challenges, no matter what they are.
Due to the sheer brainpower of our AI, the platform can search between 500,000 and a million companies per query in a matter of seconds. The AI then divides the matches into two groups: companies solving your issue in the same way as each other (clustered into one main group) and companies solving it differently (clustered outside that main group).
This approach lets you find out how companies are solving your issues differently, alerting you to fringe solutions that can help you get ahead of the competition. Maybe you find something in the fringes of the cluster that is ahead of the field. Then, you could ask yourself, “Should we follow the current trend or is this new trend I’ve never heard about actually solving the challenge in a better way?”
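To make the clustering idea concrete, here is a minimal sketch of the main-cluster-versus-fringe approach. The company names and descriptions are invented, and the TF-IDF-plus-k-means pipeline is a stand-in assumption for illustration, not Valuer's actual system:

```python
# A minimal sketch of the "main cluster vs. fringe" idea, not Valuer's
# actual pipeline. Company names and descriptions are invented, and
# TF-IDF + k-means stands in for whatever embeddings/NLP the platform uses.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

descriptions = {
    "AnonCo":    "data anonymisation and masking for analytics pipelines",
    "MaskIt":    "masking and anonymisation of customer analytics data",
    "ScrubData": "anonymisation tooling for enterprise data warehouses",
    "NoiseLab":  "differential privacy noise injection for query results",
}

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(list(descriptions.values()))

# Cluster the companies; the biggest cluster is the "mainstream" approach.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Distance from each company to its own cluster centre: the further away
# (or the smaller its cluster), the more of a "fringe" solution it is.
centres = kmeans.cluster_centers_[kmeans.labels_]
distances = np.linalg.norm(X.toarray() - centres, axis=1)

for name, label, dist in zip(descriptions, kmeans.labels_, distances):
    print(f"{name}: cluster {label}, distance to centre {dist:.2f}")
```

Here the odd one out ends up in its own small cluster, which is exactly the kind of fringe signal the platform surfaces.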
In fact, that's how I discovered differential privacy: I noticed a cluster of several companies sitting outside the main cluster and thought, "What are these companies doing in the same way as each other but differently from everyone else?". That question led me to differential privacy.
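Since differential privacy only comes up in passing, here is a minimal sketch of its core idea, the Laplace mechanism: answer an aggregate query with calibrated noise so no single person's presence in the data can be confidently inferred. The toy data and epsilon value are invented for illustration:

```python
# A minimal sketch of differential privacy's core trick, the Laplace
# mechanism. Toy data and epsilon are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)

def private_count(values, predicate, epsilon):
    """Count matching records, plus Laplace noise scaled to 1/epsilon.

    A count changes by at most 1 when one record is added or removed
    (sensitivity = 1), so noise drawn from Laplace(1/epsilon) gives
    epsilon-differential privacy for the count.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [34, 29, 41, 52, 38, 27, 45]  # toy records
print(private_count(ages, lambda a: a > 40, epsilon=0.5))  # noisy answer
```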
How do you comply with privacy legislation when the data you’re working with is AI data?
To put it very simply, the way I look at it is that currently, AI is just another data processing activity. There’s an input of data, a middle (where data is processed), and an output of data.
When it comes to AI, this middle phase could involve many different approaches, such as neural networks, machine learning, deep learning, natural language processing and computer vision. Neural networks, for example, are very interesting because they loosely mirror the behaviour of the human brain by using nodes, which act like neurons. This structure lets computer programs recognise patterns and solve common problems in the fields of AI, machine learning and deep learning. It's the closest we have to a "thinking" AI.
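As a rough illustration of those "nodes acting like neurons", here is a tiny two-layer network learning XOR by gradient descent. It's a minimal sketch of the concept, not any system discussed here; the layer sizes, learning rate and iteration count are arbitrary choices:

```python
# A tiny two-layer neural network learning XOR: each "node" weighs its
# inputs and fires through a sigmoid, and training nudges the weights.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden "neurons"
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    # Forward pass: signals flow through the nodes.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: adjust weights to reduce the prediction error.
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```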
Say you have a robot. You give it a neural network and tell it, "Walk from A to B". You don't give it any other instructions, and it has to work out how to use its legs through trial and error. A couple of videos show this kind of experiment, where you can see a robot eventually managing to go from A to B, completing its objective.
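That trial-and-error principle can be sketched in miniature with tabular Q-learning, where an agent on a five-cell track learns to reach B from A purely through rewards. Real robot-locomotion research uses far richer methods; the track size, rewards and learning parameters below are invented:

```python
# A toy version of learning to get from A to B by trial and error:
# tabular Q-learning on a 5-cell track (A at cell 0, B at cell 4).
import random

random.seed(1)
N_CELLS, GOAL = 5, 4
ACTIONS = (-1, +1)                      # step left, step right
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}

for episode in range(200):
    state = 0                           # start at A
    while state != GOAL:
        # Explore sometimes, otherwise exploit the best known move.
        if random.random() < 0.2:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt = min(max(state + action, 0), N_CELLS - 1)
        reward = 1.0 if nxt == GOAL else -0.1   # reward at B, cost per step
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += 0.5 * (reward + 0.9 * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy walks straight from A to B.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_CELLS - 1)])
```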
The funny thing is, in one of the experiments (I think it was from the amazing “Lo and Behold: Reveries of the Connected World” by Werner Herzog) when researchers checked the neural network of the robot, they discovered that the robot was actually tracking the faces of the researchers – even though it wasn’t told to do that. Was it looking for positive reinforcement? Does it know that it’s on the right path when they smile and on the wrong path if they laugh? Has it taught itself a quicker way to go from A to B through validation? No one knows.
That’s how far ahead we are with AI. But at the same time, that data processing itself was benign. Even if it was processing Article 9 data unexpectedly, the data processing caused no harm to the rights and freedoms of the data subject. The AI may be able to track faces, but what can it do with that? Nothing. Plus, the output remains the same.
So when it comes to the GDPR, yes, that's biometric data, a special category of data, and in the example above we wouldn't know why the AI chose to process the data in this way. We either add it to the record of processing activities or tell the AI not to track people's faces. But it's all reactive; you're not aware of it until you check.
And that is what is so interesting about AI and data protection; it’s a data processing activity and a very interesting one because, unless told not to, it doesn’t really care about the GDPR.
What you need to be aware of is the ethics around this, especially if you're a B2C company. We deal with B2B enquiries, so most of the data in our system is non-personal company data that falls outside the scope of the GDPR.
Still, I believe that non-personal data (including personal data that has been anonymised) will become more and more important, especially with the EU's data strategy. That will bring more stringent rules, and while anonymised data is, of course, no longer personal data, I wouldn't say it's entirely non-personal data per se.
Can you explain why the ethics side of AI decisions in regards to data privacy could be problematic?
With personal data, AI is making decisions that impact people’s lives, such as credit scores, candidate selection or insurance claims. It makes those decisions using historical data, which is where issues can arise.
It seems that, historically, we humans have made bad decisions, and we are now expecting a machine to fix them all. For example, if you want to take out a loan, women have historically been less likely to get one. Similarly, more black applicants have been refused loans in the past than white applicants. AI uses this information to make its decisions, so the data we're feeding the software during training is biased.
Your AI solution will only be as good as the data you give it. Currently, if you’re feeding it human data, it will decide as humans do – and we can’t expect a machine to fix these biases all by itself. That’s not going to happen unless we change something.
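A toy example makes the point: if historical approvals penalised one group, a model trained on those labels reproduces the gap even when the protected attribute itself is withheld. All fields and figures below are synthetic inventions for illustration:

```python
# A toy illustration of biased training data producing biased decisions.
# The dataset is synthetic: past approvals depend partly on a "group"
# attribute, and a model trained on those labels reproduces the gap.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                # a protected attribute
income = rng.normal(50 + 10 * group, 15, n)  # proxy: correlates with group

# Historical decisions: income matters, but group 0 was also penalised.
approved = (income + rng.normal(0, 5, n) - 15 * (group == 0)) > 45

# Train WITHOUT the protected attribute; the income proxy still leaks it.
model = LogisticRegression().fit(income.reshape(-1, 1), approved)
pred = model.predict(income.reshape(-1, 1))

for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.0%}")
```

Dropping the group column is not enough, because correlated features leak it back in; that is exactly why the machine "decides as humans do" unless we change something.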
So what worries me is not the data processing involved with AI. What worries me is the data that we put into our systems. We need to provide data to AI that cleans our own biases. But can we recognise our own biases? Is it even possible to have bias-free data? Would we even recognise our world, or ourselves, without bias? There are too many questions and very few answers.
Could AI ethics also pose any challenges within B2B industries?
In the very near future, I can foresee issues around AI decision-making for the B2B sector.
The processing of data may not be problematic; it is the decisions the AI takes that may be an issue. Are we able to explain them?
We also tend to speak of AI as one universal machine that every company has access to. That's not true. It's made of code produced by different humans with different backgrounds and for different purposes, so the outcomes will vary depending on the software you're using. And ethically speaking, those AI outcomes will only be as ethical as the humans writing the code allow them to be, by caring about ethics and embedding them in the code.
Valuer enables its clients to keep up to date with the latest innovations and opportunities. What are your top predictions for the future of AI and its relation to the privacy and compliance sector?
I think AI and cybersecurity will go hand in hand, with AI being of great assistance in protecting organisational and personal data, particularly through privacy-enhancing technologies.
With more AI-powered technologies in the privacy sector, we’ll learn from the practices of the people who protect privacy and will then be able to predict trends and ways of protecting data that we are currently not aware of. In this respect, AI will take on a more proactive and protective role than we humans can.
To be honest, I also see AI everywhere, not just within security services. We just have to get used to it, because it has incredible potential to help us. However, there's a fear that with AI we finally have the tools to live in George Orwell's "1984". The reasoning is that, over the past decades, we have witnessed a vast collection of our data alongside AI's almost endless ability to process that data quickly (in minutes, rather than the years it would take humans). So the potential insights AI can bring into how our society is shaped, at macro and micro levels, are undeniable.
The possible insights from this access to data can be a great thing. We can get a new understanding of how our society works and improve public health, municipal services, and consumer products. But as data subjects, we are the data, and our data is being used by AI to make decisions about us – and sometimes, these decisions may have consequences on our lives.
So, will privacy still be relevant in the age of AI?
To me, privacy matters even more now. There's so much talk about AI ethics, but people forget that ethical AI is only possible if we use data that follows the rules of the GDPR.
How can you speak about ethical AI when the data you feed it has been obtained from data subjects without consent (or any other type of GDPR legal basis)? How can you call your AI ethical if the data in it was obtained illegally?
So, if you speak about AI ethics, you will always have to consider whether the data you use was collected legitimately. If the personal data you provide the AI with was not collected with the GDPR in mind, then no matter how ethical you claim your AI algorithm is, the results are nothing but the fruit of the poisonous tree.
Ultimately, when it comes to AI, we must always consider that data protection and respecting the fundamental right to privacy are as important as they ever were. If not more so.
You can follow Jose Belo on LinkedIn for more insights on AI and privacy. Keep an eye on Complyon's LinkedIn for the second part of our interview with Jose, where he discusses the role of technology in his company's compliance activities.