From the front lines to the enterprise: The ethical considerations of AI

Iveta Lohovska, principal data scientist and AI ambassador, on the importance of addressing bias.

Social media bots that promote falsehoods instead of real information because they calculate that falsehoods will draw more clicks. Autonomous drones that drop bombs on innocents because a terrorist is suspected to be among them.

As artificial intelligence grows in importance as a business and societal tool, so have the dangers of its potential misuse. From wrongheaded financial decisions to Twitter accounts gone rogue, questions about bias and ethical behavior are becoming more serious than ever. And these are the kinds of things that keep Iveta Lohovska up at night.

Lohovska is a principal data scientist, data architect, and AI ambassador who works with the global AI and data services practice at HPE Pointnext Services. Her expertise runs across a range of topics, including AI, data science, risk management, and data security. Enterprise.nxt recently sat down with Lohovska to discuss emerging issues around AI ethics and how they impact everything from global pandemics to enterprise governance.

What does AI ethics mean to you?

AI ethics has many different aspects, from technical and social challenges to the legal and safety dimensions of AI accountability. From a technical perspective, it is the set of rules and processes that ensures the consistency and fairness of an AI model, to the extent that's possible. It also covers how data is collected, how it is handled, and how it is pipelined and fed into the model.
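
To make that concrete, here is a minimal sketch (ours, not from the interview) of the kind of pipeline check she describes: before a dataset is fed into a model, verify that no group is badly underrepresented. The DataFrame, the age_group column, and the tolerance are hypothetical.

```python
import pandas as pd

def check_group_balance(df: pd.DataFrame, column: str, tolerance: float = 0.15) -> bool:
    """Flag a training set whose groups deviate from a uniform share by
    more than `tolerance`, before the data is fed into the model."""
    shares = df[column].value_counts(normalize=True)
    expected = 1.0 / len(shares)
    imbalanced = shares[(shares - expected).abs() > tolerance]
    if not imbalanced.empty:
        print(f"Warning: imbalanced groups in '{column}':")
        print(imbalanced.to_string())
        return False
    return True

# Hypothetical example: a cohort heavily skewed toward one age group.
patients = pd.DataFrame({"age_group": ["18-35"] * 80 + ["36-60"] * 15 + ["60+"] * 5})
check_group_balance(patients, "age_group")
```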

Please read: The top tools data scientists use

In terms of the human side of things, it's also the background and the diversity of the team and the people involved in this process. With AI, we have to ensure that different biases can be addressed through human intervention. We need to spend more time on the interpretability, fairness, and explainability of AI and the models themselves. At a foundational level, it's also important to address all aspects of AI accountability: how it is enforced and who gets to enforce it.

Why does all of this matter? Are AI algorithms running amok?

Think about image data from an X-ray or MRI machine. Different algorithms are trained to detect specific things within the human body. If the model is trained only on scans from people over age 25 and that MRI system gets deployed in a children's hospital, then the accuracy and trustworthiness of its results would be very questionable. I don't think we currently have the checks and balances in place to avoid that. It might be unintentional bias, but it could have a huge impact on humanity.
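
One lightweight guard against exactly this failure, sketched below under our own assumptions (the cohorts and the significance threshold are illustrative), is to compare the age distribution behind the training data against the population the model is about to serve and flag the mismatch before deployment:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical cohorts: ages behind the training scans vs. the deployment site.
train_ages = rng.normal(45, 12, 1000).clip(25, 90)   # adults only, per the example
deploy_ages = rng.normal(8, 4, 200).clip(0, 17)      # a children's hospital

# A two-sample Kolmogorov-Smirnov test flags the covariate shift.
stat, p_value = ks_2samp(train_ages, deploy_ages)
if p_value < 0.01:
    print(f"Deployment population differs from training data (KS={stat:.2f});")
    print("re-validate the model before clinical use.")
```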

On the other hand, bias isn't always bad, especially when it comes to human safety. For example, you want your autonomous car to prioritize your safety over that of a cow or a dog in the street. Intentionally embedding that bias into these algorithms and datasets is often a necessary step.
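
As a toy illustration (entirely ours; the obstacle classes and cost values are invented), such intentional bias can be as simple as an asymmetric cost table in a collision planner, where harming a person is weighted far above any other outcome:

```python
# Deliberately asymmetric collision costs: the planner is biased toward
# human safety by construction. All values here are illustrative.
COLLISION_COST = {"pedestrian": 1_000_000, "cow": 10_000, "dog": 5_000, "barrier": 1_000}

def choose_maneuver(options: dict) -> str:
    """Pick the maneuver whose outcome carries the lowest collision cost."""
    return min(options, key=lambda m: COLLISION_COST[options[m]])

# Hypothetical dilemma: swerving hits a barrier, braking late hits a dog.
print(choose_maneuver({"swerve": "barrier", "brake": "dog"}))  # -> swerve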

It seems like bias is an inherent part of human nature, so naturally, it will be part of our algorithms.

Misinformation and disinformation are not new; they have existed for as long as humans have. The difference today is that, at the scale of AI, these trends are amplified, especially in news and social media. Unfortunately, in this case, AI is simply tuned to pick up on the most popular trends, and those trends often represent the strongest, most dominant human emotions: negativity, outrage, and anger. That's how certain things get amplified on social media.

It's also how things get manipulated. Corrupt governments have their hands on different AI technologies and techniques to spread propaganda, and AI can do it in any language, without an army of people to manually translate things. AI can be an extremely powerful tool for mass human manipulation, unfortunately.
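
The amplification mechanism she describes is easy to caricature in a few lines. In this toy feed (our construction, with invented posts and click predictions), a ranker optimized for engagement alone pushes the most outrage-inducing item to the top, because nothing in the objective penalizes falsehood:

```python
# A toy feed ranked purely by predicted engagement. Posts and click
# estimates are invented for illustration.
posts = [
    {"text": "Local library extends opening hours", "predicted_clicks": 120},
    {"text": "Outrageous claim about a rival group!", "predicted_clicks": 950},
    {"text": "Calm fact-check of the claim above", "predicted_clicks": 80},
]

# Objective: clicks only. Nothing here rewards accuracy or penalizes outrage.
feed = sorted(posts, key=lambda p: p["predicted_clicks"], reverse=True)
for post in feed:
    print(post["predicted_clicks"], post["text"])
```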

Are you hopeful for a solution, or is this an issue that could be too big for AI to solve?

I don't have a good solution for how it should be handled. There are emerging techniques for combating misinformation, but there are trade-offs in how much free speech gets lost along the way. For now, I just know that it's dangerous for our democracy, and everyone needs to be attuned to that.

There are, of course, amazing benefits as well.

Of course. One good example is how we're using AI to prove that climate change and different population and health dynamics have an impact on tropical diseases, specifically dengue fever, which is carried by mosquitoes. The developed world has been ignoring it for some time because dengue has only been present in tropical parts of the developing world, even though case numbers are doubling every few years. It hugely impacts the Philippines, Thailand, and India, but as the climate changes, we're now seeing it in Italy, Greece, and France. It's reaching the rich world, so people are starting to pay attention, especially since no treatment is coming anytime soon. With AI, we can model how climate change drives the ups and downs in outbreaks, and that opens up the discussion for similar situations.

Please read: Using AI to fight wildfires

We are taking baby steps: first mapping where dengue is, overlaying population density and climate data with dengue cases. We're trying to forecast where the next outbreak might hit, gathering enough data and evidence to build prediction models that can flag an outbreak a few weeks or a month in advance. The main idea is that we can build triggers that warn the public and health authorities through an application or email notification. Again, no medicine or vaccine exists for this disease, but many measures could reduce the impact of a dengue outbreak, such as increased sanitation or having people shelter in place during a period of high risk, similar to COVID-19, really.
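
A minimal sketch of that kind of early-warning model, under our own assumptions (the feature names, the four-week horizon, and the 70 percent alert threshold are all invented for illustration), overlays climate and population features on historical case labels and alerts when the predicted risk crosses a threshold:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical weekly records per region: climate and population features,
# labeled with whether an outbreak followed four weeks later.
history = pd.DataFrame({
    "rainfall_mm":     [210, 40, 180, 15, 300, 60],
    "mean_temp_c":     [29, 22, 28, 20, 31, 24],
    "pop_density_km2": [9000, 500, 7000, 300, 12000, 800],
    "outbreak_in_4wk": [1, 0, 1, 0, 1, 0],
})

features = ["rainfall_mm", "mean_temp_c", "pop_density_km2"]
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(history[features], history["outbreak_in_4wk"])

# Score this week's conditions; above the threshold, trigger the notification.
this_week = pd.DataFrame([[250, 30, 8500]], columns=features)
risk = model.predict_proba(this_week)[0, 1]
if risk > 0.7:
    print(f"Dengue outbreak risk {risk:.0%}: notify public health authorities.")
```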

You're also interested in how AI can be used to fight wars more ethically. Is there such a thing as an ethical war?

Ethical or not, there are certain military approaches that could help protect and defend humanitarian missions in different countries and minimize harm to the people trying to help the most vulnerable. AI can be leveraged to analyze information on where populations are and what humanitarian missions are being undertaken there. Algorithms could be trained to recognize individuals and their activities, confirming, for example, that they are doctors, not soldiers.

How is this kind of research into AI ethics applicable to the enterprise?

I think a lot about AI principles and about operationalizing them in the decisions we make about how our sales force and engineering force engage with customers. There are difficult situations where a customer's specific intentions do not match the principles of the company. I deal with this on a daily basis, working with our customers on how they think about AI and how they use it to handle customer data, and guiding them toward the right approach.

Please read: AI and sustainability: The most important tech challenges of 2021

The challenge is to align on these concepts on a global scale and avoid having a thousand different approaches to AI ethics. And of course, there's a lot more we can learn about AI and a lot more we can do to improve AI ethics from many different angles, to make it more responsive and more responsible.
