Are there real risks for human beings?
We are all well aware of common fears about job losses caused by the spread of AI, and many reports follow this path, listing the kinds of jobs that will soon be replaced.
At the same time, often without realizing it, everyone already takes advantage of the power of AI algorithms in everyday life: choosing movies to watch on TV or music to listen to, archiving photos, and interacting with voice assistants or AI chatbots. Technology is at our fingertips, never so close and commonplace as today; we use it and fear it at the same time.
Robots may not hurt
As early as the 1940s, Asimov theorized that “a robot may not injure a human being, or, through inaction, allow a human being to come to harm”, a law that could be extended to any automatic machine. Last year one of the giants of the AI market, Google, created an artificial intelligence ethics code meant to prevent the development of autonomous weapons (killer drones) that could “cause or directly facilitate injury to people”.
Google also stated that the use of AI should benefit society, respect privacy, be tested for safety, be accountable to the public, and more. This leads to two considerations.
Human accountability
First of all, as usual, the power of AI is in the hands of human beings: any misuse can be blamed on them, whether it concerns programming mistakes or poor neural network training. You may have read about the first fatal crash in Arizona in 2018, when a self-driving car killed a woman: it was determined that the accident could be attributed to how the software was tuned.
When designing AI systems, the impact of any possible “false positive” must always be carefully considered, as must the question of safety, since current systems often go wrong in unpredictable ways. Human accountability lies in setting the goals and in weighing possible negative outcomes.
Sharing strategies
Second, there is growing attention to the opportunities and risks related to AI, and a growing community of AI researchers is developing shared strategies and policies (as Google did) so that the benefits of AI advances can be released safely and made widely available.
These topics are also gaining attention in other quarters, raising business, political, ethical and even religious discussions that should drive a better understanding and regulation of the matter.
What about jobs?
So, going back to our main fear about job loss and trying to think positively, we can consider, as the World Economic Forum does, that “automation will displace 75 million jobs but generate 133 million new ones worldwide by 2022”. Many new AI-related jobs will be created, offsetting the impact of automation, although in some economies investment will be needed to reduce the risk of job shortages.
Again, it is up to human beings: people will be forced to adapt to new working models and learn new skills in order to be fit for the future.
“Adapt and learn” must become our mantra in the age of AI, where the main difference between us and machines looks to be their inability to face unexpected situations.
Was this post useful for you?