Ethical AI

Chris Mauck

April 5, 2024 • 6+ minute read

Image credit: derivative image based on an image by jcomp on Freepik.

Originally appeared in LinkedIn Future Singularity

When you think about AI (artificial intelligence), what about it can help us create a better society? A society that accommodates its citizens, whether patients or consumers, and gives them the opportunity to lead flourishing, good lives together?

AI was long thought to be achievable, though no one could say exactly when. In recent years, following the arrival of ChatGPT in late 2022, there has been an explosion of AI "tools" and "resources", an AI renaissance of sorts. This openness to the unknown, grasping the technology and heading full force into the future, promises great rewards for creators and users alike. But how can we ensure the technology is being leveraged ethically?

The "answer" is proposed to be "responsible innovation". But how can that be achieved? If it's truly that AI simply regurgitates the existing data as the stochastic parrot then we seemingly should know the data, outcomes, and inferences.

There is a famous quote from Sir Winston Churchill's speech to the House of Commons on October 28, 1943, commenting on the rebuilding of the parliament building after it was bombed in the Second World War: "we shape our buildings, and afterwards our buildings shape us". Some say the quote refers to more than just creating structures: we are also creating environments of trust. By that same token, we are now in the process of building a new digital society in which AI plays a vital role.

Data and High Hopes

In a session at SXSW 2024, Reggie Townsend, Vice President of Data Ethics at SAS, made some great points about how AI uses old data drawn from lived experiences. That data is fed into models to produce "generative" outcomes, which essentially means that continued progress is built on the experiences of others. As AI grows, its advancements carry the weight of our past, our technologies, our humanity. Unwittingly baking the wrongs of our past into these models does not improve the future output. We are at a moment where we can "digitize new decisions" and reevaluate old ones, as opposed to perpetuating disparities and biases, whether based on income, gender, race, or religion. Current decisions derive from a deep-rooted system shaped by the changes, or lack thereof, of the past. Even when some attempt to work around these challenges and assert that their models will be different, the root of the bias remains.

Brigitte Tousignant, Head of Communications at Hugging Face, was quick to interject the words of Uncle Ben (from Spider-Man): "with great power comes great responsibility". In other words, we need to take steps to ensure that human ethics and morals are included in the definition of these models. Too often social media, and now AI, is used to spread misinformation. How can we move toward a more collaborative system as we move forward with this generation of foundational models? Conversations are, and should be, had about how and why we are creating the models that we are.

People at Hugging Face, NGOs, and nonprofits are doing a lot of work to push for more responsible development of foundational models. Part of the goal is to make researchers and developers actually think about the decisions they are making and the past experiences that go into these models. Instead of just "let's roll this out", the question becomes "should we roll this out? How is this going to impact people?"

Alexandra van Huffelen, the Dutch Minister for Digitalization at the Ministry of the Interior and Kingdom Relations, is hopeful. Hopeful that AI will be created to target the weaknesses in our world. Hopeful that it can be used to find ways to benefit the hungry and disadvantaged. We need to create a cooperative system between innovators, creators, and governments, and we need to make that idea fundamental: this huge and impactful technology, capable of so much good, can only be for good if we all work together, which means technologists, the market, governments, and NGOs alike need to be there to make sure it is really based on human values.

Troubling Times

All too often, people offer strictly binary arguments that simply aren't helpful: either the technology will lead us into a complete dystopia, or it will help us create a total utopian paradise. The logical and most common outcome lies somewhere in between. We should always take the time to consider the benefit of what is being done, not simply rush to build things for the sake of building them. It is this leap into the technology, creating tools as quickly as possible, that puts people on edge.

Some people assert that this technology is going to wipe out humanity, yet it's hard to picture how we get from generative text and images to the destruction of the human race. What steps would actually take us from where we are now to our destruction? The obvious answer remains: "Don't put machines in control of things like military/weaponry."

We should consider three points when deciding what to build with AI:

  • For what purpose - why are we doing it?
  • To what end - how far will we go?
  • For whom might it fail?

If we can't answer these three basic questions, then we shouldn't move forward with the concept.

Governments have already somewhat agreed on limiting autonomous weapons systems. However, these same bills and legislative stipulations inevitably involve the companies that often "want to run to be the first". How can they be properly regulated? Alexandra noted that what makes her "scared and worried" are comments like Sam Altman's suggestion that "regulation will bring us back to the stone age". She is actually more concerned with the companies pushing to build "bigger" without concern for regulation.

Without a doubt, it's about due diligence: companies developing AI products, whether for the world or for their own purposes, have to demonstrate to regulators, with proof and evidence, what safeguards they have in place.

Alexandra added that this is similar to the EU's AI Act: AI being regulated much the same way that medications and food and agriculture are regulated.

Moving Forward

Companies like Hugging Face work to remain grounded in "ethical openness": incentivizing systems that mitigate harms while also promoting appropriate credit, consent, and compensation for artists and content creators. Institutional documentation and technical safeguards are in place, with more being added all the time, to protect creators from the kinds of breaches seen with systems like ChatGPT (using content that was meant to be private or sellable).

Hugging Face also offers gated access to artifacts, which is important in fields like healthcare: if you have a large language model for healthcare, gating access to the model, its weights, and its data can help ensure privacy. There are also staged releases, a gradual rollout of a model to make sure it is safe as it moves toward more general use. To date, approximately 500K models exist on the Hugging Face hub, along with over 250K demos.
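To make the gating concrete, here is a minimal sketch of what that access control looks like from the developer's side, using the huggingface_hub client library; the repository name and token are hypothetical placeholders, and this is an illustration rather than anything described in the session. The Hub serves a gated model's files only after the user has accepted the repository's terms and authenticated:

```python
# Minimal sketch: downloading a gated model from the Hugging Face Hub.
# The repo id "hypothetical-org/clinical-llm" is a made-up placeholder.
from huggingface_hub import snapshot_download
from huggingface_hub.utils import GatedRepoError

try:
    # For a gated repository, the Hub serves the weights only if the
    # caller has accepted the repo's terms AND presents a valid token.
    path = snapshot_download(
        repo_id="hypothetical-org/clinical-llm",  # hypothetical gated model
        token="hf_xxx",                           # placeholder access token
    )
    print(f"Model files downloaded to {path}")
except GatedRepoError:
    # Raised when the repository is gated and access was not granted.
    print("Access denied: request access on the model page first.")
```

The same mechanism applies to gated datasets, so access to sensitive training data can be restricted alongside the weights.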

Beyond openness, we also need to focus on AI literacy. In the U.S., only around 30% of the population understands what AI actually "is". Awareness is high, so people recognize the term AI, but they don't really understand what it is, what it can do, and, notably, what it is not.

Countries like the Netherlands require government bodies to register the AI tools they use or create. The hope is that this will lead companies and governments to want to comply, because communities benefit when people and corporations work together.

Crucially, literacy on the subject needs to exist before laws are mandated. Simply mandating regulation can lead to fear-based responses, a consequence of insufficient literacy. This is exactly one of the reasons an AI literacy recommendation was presented to the White House in 2023. Literacy not only promotes trust, it also ensures that people are able to see and understand the ethical implications. There is a moral distinction in the kind of fear involved: people may not believe that you can do it, a competence or knowledge issue that leaves them thinking you lack the expertise to see the regulations through, or they may believe that you won't do it because it's not in your self-interest.

To solve that trust problem, we need to be very clear, very transparent, and articulate about where we're coming from ethically, because that's the evidence people are looking for.

In Summary

AI brings opportunities and risks that require ethical consideration. Those opportunities should be directed toward creating a better society and mitigating societal problems.

But it's crucial to work toward ethical AI by ingraining moral values into the training data from the start. To innovate responsibly, one must address ethical issues, encourage transparency, and scrutinize the source data and any potential biases. Governments should endeavor to govern AI appropriately without impeding innovation, as they have already started to do. Collaboration between innovators, creators, governments, and NGOs is essential to guarantee that AI is morally grounded and advances society. Finally, AI literacy is vital to foster trust and promote informed decision-making on the ethical implications of AI.