Major AI advances do not come without questions
Artificial intelligence has been unveiled to the public and is starting to transform the way we work and interact, deeply changing our everyday lives.
We haven't seen anything like this since the invention of the printing press 6 centuries ago: the world is changing, and at breakneck speed. 🚀
Web giants such as Alphabet, Amazon, and Microsoft, as well as personalities like Elon Musk (who released the second version of his Grok AI), believe it's time to talk about the (near) unlimited potential of these new technologies.
It's fabulous, but these rapid changes also raise major moral concerns. No other field has ever shaken the ethical compass to such an extent. 🧭
Many questions arise from AI's tendency to absorb human prejudices, contribute to environmental damage, threaten fundamental human rights, and so on.
These AI-related risks can compound existing inequalities, causing further harm to already marginalized groups.
Moral and legal considerations of AI
Artificial intelligence is the subject of controversy, and philosophical and ethical debates about it are taking place all over the world.
The use of tools such as ChatGPT contributed to the 2023 screenwriters' strike. The main question was: even if AI can be used to write scripts, is it wise and/or moral to do so? 🤔
Beyond ethics, AI also raises legal issues. Without clear and precise guidelines, there is a risk of misuse and litigation.
Artificial intelligence can make you more efficient, and that's certainly a good thing. However, this increase in our capacity to produce could come at a cost (moral, ecological, social...).
Here are 14 ethical questions to ask about AI...
1. Unemployment and the future of work
As we invent ways to automate jobs, we allow people to take on more complex roles.
For example, industrialization enabled workers to move from physical tasks that dominated the preindustrial world to intellectual work that characterizes our globalized society.
With the development of AI, many jobs might disappear or, at least, evolve.
Let's take the example of truck transportation, which currently employs millions of workers worldwide. If Tesla's self-driving trucks become widespread in the coming years, new hires in this sector could become increasingly rare. On the other hand, if we consider the lower risk of accidents, automation may look like the ethical choice.
The same scenario could happen again in many professional fields in developed countries.
We might fear that automation will replace parts of some jobs, or entire jobs, which could drive up the unemployment rate in several sectors.
By the way, artificial intelligence already plays an important part in marketing, communication, and web design!
But this can also improve quality of life... ✨
If this is the case, we can wonder what we're going to do with our newfound free time. Most people sell their time in order to earn enough income to meet their needs and feed their families.
Hopefully, AI will enable us to invest more in community activities and find new ways to contribute to human society.
2. Privacy protection
AI model training requires large amounts of data, some of which contains PII.
Definition
PII (Personally Identifiable Information) refers to any data that can be used to identify a specific person, such as a name, postal address, identification number, or biometric record.
Nowadays, we don't really know how data is collected, processed, and hosted. This raises concerns.
The use of AI for surveillance (by the police, for example) raises additional concerns about privacy protection. Even though these technologies are very useful, many people worry about their misuse in public spaces, which could harm the individual right to privacy. 🎥
3. Intellectual property exploitation
As hinted at above, a recent lawsuit against OpenAI, the company behind ChatGPT, brought by several famous writers, drew attention to the question of intellectual property exploitation by AI.
Several authors, such as John Grisham, recently sued OpenAI because the company used their publications to train its algorithms.
The lawsuit also highlighted that this type of exploitation endangers artists' ability to make a living from their writing. ✍️
If you want to discover how AI contributes to copywriting, don't hesitate to read our dedicated guides!
4. Wealth inequality
Our economic system is based on our contribution to collective production, paid as an hourly wage.
However, by using artificial intelligence, a company can considerably reduce its human labour costs, which means that profits will be shared among a smaller number of people. As a result, individuals with stakes in AI-based companies are likely to get the lion's share, to the detriment of others. 🍰
It's already clear that the wealth gap is widening, with start-up founders appropriating a large part of the economic surplus they create.
5. How can AI affect human behaviour and interactions?
Robots can imitate conversations and replicate human relationships with increasing accuracy.
In 2014, a chatbot called Eugene Goostman was claimed to have passed the Turing test for the first time. The principle is simple: evaluators chat in writing with an unknown interlocutor and try to guess whether it is a human or a machine. Eugene Goostman managed to convince about a third of the evaluators that it was human, enough to clear the threshold set for the event.
This achievement marks just the beginning of an era where we frequently interact with robots as if they were people, for example, in reception or customer service roles.
Even if we aren't all fully aware of it, machines are already regularly stimulating the reward centers of our brains in our daily lives. Just look at how video games captivate internet users.
Addiction to technology is now a recognized branch of addiction medicine.
6. AI bias and errors
Intelligence comes from learning, whether you are human or a robot. 🎓
AI models typically undergo a training phase, during which they 'learn' to detect patterns and react based on the analyzed data.
Once a system is fully trained, it moves on to the test phase, where it is confronted with numerous examples of real-world applications.
Of course, the real world is different from these laboratory tests. These systems can make mistakes. For example, a machine can 'see' things that don't exist.
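To make the training and test phases more concrete, here is a minimal sketch in Python (entirely synthetic data and an arbitrary "hidden rule" of our own invention, using the scikit-learn library). It trains a simple model, checks it on held-out test examples, and then shows how accuracy can drop once real-world inputs are noisier than the laboratory data.

```python
# A toy illustration of the training and test phases, and of what can happen
# when real-world data stops resembling the data used in the lab.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "laboratory" data with a simple hidden rule.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Training phase: the model "learns" the pattern from part of the data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Test phase: held-out examples drawn from the same laboratory conditions.
print("test accuracy:", model.score(X_test, y_test))

# "Real world": the same objects, but measured with much noisier sensors.
X_clean = rng.normal(size=(1000, 3))
y_real = (X_clean[:, 0] + 0.5 * X_clean[:, 1] > 0).astype(int)
X_noisy = X_clean + rng.normal(scale=2.0, size=X_clean.shape)
print("real-world accuracy:", model.score(X_noisy, y_real))
```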
If we want to rely on AI, we must ensure its security, efficiency, and incorruptibility (preventing people from exploiting it for their own benefit).
7. Moral biases of AI: racist, sexist, or intolerant robots
While artificial intelligence is faster than humans and can process far more data, it is not always neutral.
AI bias raises another ethical problem. Even though artificial intelligence isn't intrinsically biased, systems are trained on data that comes from human sources, which can lead to the spread of prejudice.
For example, an AI-based hiring tool may discriminate based on demographic criteria if the datasets used to train the algorithm contained biases against a particular group.
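As a hedged illustration of the hiring example above, here is a minimal sketch (entirely synthetic data; the feature names, group split, and bias strength are assumptions made up for the example). Because the historical decisions it learns from were biased, the model ends up penalizing group membership even when skill is identical.

```python
# A toy illustration of how biased historical data produces a biased model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)   # a demographic attribute (0 or 1)
skill = rng.normal(size=n)           # the legitimate signal

# Biased past decisions: candidates from group 1 were hired less often
# than their skill alone would justify.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
print("weight on skill:", model.coef_[0][0])
print("weight on group:", model.coef_[0][1])   # negative: the bias was learned

# Two equally skilled candidates now get different predicted chances.
print(model.predict_proba([[0.5, 0.0], [0.5, 1.0]])[:, 1])
```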
Google is one of the leading-edge companies in AI, as seen in its software that uses AI to identify people, objects, and scenes in images. However, things can still go wrong: programs designed to predict future criminal behaviour have shown bias against people of color.
In conclusion, if AI is used to promote social progress, it can become a powerful vector for positive change, but we must remain vigilant. ⚠️
8. How can we protect AI from malicious attacks?
The more powerful a technology is, the more it can be used for both positive and negative purposes.
This applies not only to military robots or autonomous weapons but also to AI systems.
Cybersecurity will become increasingly important, as protection systems can have a significant impact. 🔒
For example, AI can be attacked, which may lead to incorrect behaviours in autonomous vehicles or the concealment of objects on security cameras.
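To give a sense of what "attacking" an AI system can mean, here is a minimal sketch of an adversarial-style perturbation on a tiny hand-made classifier (the weights and inputs are hypothetical, and real attacks on vehicles or cameras are far more sophisticated). A small, carefully chosen nudge to the input is enough to flip the model's decision.

```python
# A toy adversarial perturbation: nudging an input in the direction that
# most increases the model's score, until its decision flips.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # hypothetical trained weights
b = 0.1

def score(x):
    """Probability-like output of a tiny logistic classifier."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.1, 0.5, 0.2])
print("original score:", score(x))        # below 0.5 -> classified "negative"

# The score grows fastest when each input component moves in the
# direction of the sign of its weight (a gradient-sign style step).
eps = 0.4
x_adv = x + eps * np.sign(w)
print("perturbed score:", score(x_adv))   # above 0.5 -> decision flipped
```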
9. Loss of human connection
While AI can customize search engine content based on your preferences and improve customer service through chatbots, some people fear a loss of social connection, which could lead to a general reduction in well-being.
Furthermore, if everything you see on social media aligns with your own opinions, it becomes harder to develop empathy and solidarity toward others.
10. Could AI become our enemy?
What would happen if artificial intelligence turned against humans? 😳
At first glance, AI can't become 'evil' in the way a human being can, or as depicted in Hollywood movies.
It would be surprising for a robot to have malicious intentions. However, we can imagine scenarios where AI lacks an understanding of the broader context.
For example, imagine we ask artificial intelligence to eliminate all incurable diseases in the world. After extensive calculations, it concludes that the most effective solution would be to eliminate all humans. The computer would technically achieve its objective efficiently, but not quite in the way it was intended. 😬
11. Misinformation and fake news
Misinformation creates social divisions and perpetuates false and harmful opinions.
Once fake information is widely shared on social media, it can be difficult to trace its origin and fight it. AI tools are sometimes used to spread false claims, giving the impression that they are legitimate when, in fact, they are not.
Deep fake cases
The use of "deep fakes" also raises ethical issues.
Deep fakes can fool voice and facial recognition systems, allowing attackers to bypass security measures.
There are additional moral challenges in cases of identity fraud. The use of deep fakes to influence public opinion can have considerable implications and pose serious security risks.
12. The singularity: how to keep control of a complex intelligent system?
Human dominance in the food chain is not due to extraordinary physical abilities.
Human supremacy is almost entirely due to ingenuity and intellectual superiority. 🧠
If we are able to overcome animals much stronger than ourselves, it's because we can create and use (mental or physical) tools to dominate them.
This raises an important question about artificial intelligence: will it one day have the same advantage over us?
This is known as the "singularity": the moment when humans will no longer be the most intelligent entities on this planet.
The notion of explainability
Developing AI tools and observing them in action isn't enough.
It's important to understand the decision-making processes behind these intelligent applications.
Sometimes, it can be difficult to grasp the reasoning behind certain AI decisions. This lack of understanding can have harmful consequences, especially in sectors like healthcare or law, where influencing factors must be considered and human lives are at stake.
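As a minimal sketch of what explainability can look like in practice, here is a toy example (the feature names, weights, and threshold are hypothetical, loosely inspired by the healthcare case above). With a simple linear model, each feature's contribution to a decision can be read off directly, which is exactly what becomes difficult with more opaque systems.

```python
# A toy "explanation" of one decision made by a simple linear model:
# each feature's contribution is its value multiplied by its weight.
import numpy as np

feature_names = ["age", "blood_pressure", "cholesterol"]
weights = np.array([0.02, 0.04, 0.01])   # hypothetical trained coefficients
bias = -6.0

patient = np.array([55.0, 140.0, 180.0])
contributions = weights * patient
total = contributions.sum() + bias

for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.2f}")
print("decision score:", round(total, 2), "->", "at risk" if total > 0 else "not at risk")
```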
13. Ethical challenges of responsibility
The increasing prevalence of artificial intelligence in various sectors means that we use AI tools to make decisions on a daily basis.
If these decisions lead to negative outcomes, it can be difficult to determine who is responsible. Should companies validate the algorithms of the tools they purchase? Or should they hold the AI creators accountable?
The question of responsibility is a major issue that needs to be addressed. ⚖️
14. What rights for robots?
While scientists continue to investigate the secrets of consciousness, we now have a better understanding of the mechanisms of reward and aversion.
In a way, we use similar mechanisms to build artificial intelligence. For example, what is known as reinforcement learning is comparable to dog training: better performance is encouraged with a numerical reward.
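To make the dog-training comparison concrete, here is a minimal sketch of reward-driven learning (a toy two-armed bandit; the reward probabilities and the exploration rate are arbitrary assumptions). The agent simply keeps an estimate of how rewarding each action has been and gradually favours the one that pays off more often.

```python
# A toy reward-driven learner: actions that earn higher rewards end up
# being chosen more and more often, like a treat reinforcing a behaviour.
import random

reward_prob = {"A": 0.8, "B": 0.3}   # hidden from the agent
value = {"A": 0.0, "B": 0.0}         # the agent's running estimates
counts = {"A": 0, "B": 0}

random.seed(0)
for step in range(1000):
    # Mostly pick the best-looking action, sometimes explore at random.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(value, key=value.get)

    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    counts[action] += 1
    # Nudge the estimate toward the observed reward (incremental average).
    value[action] += (reward - value[action]) / counts[action]

print(value)   # the estimates approach the true reward rates; "A" dominates
```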
For the moment, AI systems are only partially able to react to these mechanisms, but their reactions are becoming increasingly complex and similar to ours.
Once we start considering machines as potential sentient entities, it becomes logical to ask questions about their status. How should we treat them? Should we consider their "suffering"?
What is being done to regulate AI technology?
There is still a great deal of uncertainty surrounding AI technology and its applications.
At present, we have very few laws concerning artificial intelligence.
In 2021, UNESCO, an international organization dedicated to creating a peaceful and sustainable world, developed the first global standard for AI ethics. 📜
This includes recommendations on safety, monitoring and supervision, privacy and data protection, respect for ecology, legal and moral responsibility, awareness, and non-discrimination.
It is to be hoped that these principles will soon be adopted by most countries. 🌍
Many experts argue that AI ethics are essential for a responsible future.
Although this topic carries a high level of uncertainty, there is a positive movement toward regulating this powerful technology.
While considering the risks, let's keep in mind that, overall, artificial intelligence has vast potential, and its beneficial implementation depends on all of us.
Create a website with AI
Ready to boost your online presence with SiteW and AI?
Start right now!