We encounter artificial intelligence (AI) every day.
AI refers to computer systems that can perform tasks normally requiring human intelligence. AI determines which top results you see when you search for something online.
AI algorithms also drive the recommendations on streaming and shopping websites, using your browsing history to suggest items you might be interested in.
Science fiction tends to portray AI as super-intelligent robots that will take over the world, which makes targeted recommendations seem rather unexciting by comparison. Some people believe that scenario may one day be possible, and notable figures such as the late Stephen Hawking have expressed concern over how AI will affect humanity in the future.
In one survey, 11 experts in AI and computer science were asked, "Is AI a threat to humanity?" 82% of respondents said that AI is not an existential danger. Here's what we discovered.
Are we really that close to creating an AI that is smarter than us?
Current AI can be classified as 'weak' or 'narrow' AI. It is used in many applications, including facial recognition, self-driving cars, and internet recommendations. It is called 'narrow' because these systems can only learn and perform specific tasks.
They often perform these tasks better than humans: Deep Blue famously defeated a world champion at chess. However, such systems cannot apply their learning to any other task; Deep Blue can only play chess.
Another type of AI is Artificial General Intelligence (AGI): AI that imitates human intelligence, including the ability to reason and apply intelligence to many different problems. Some people believe that AGI is inevitable and will arrive within the next few years.
Matthew O'Brien of the Georgia Institute of Technology says that the long-awaited goal of a general AI is not in sight; it is simply not clear how we could build general, adaptable intelligence.
Myths About Controversy
One misconception is that only Luddites worry about AI and advocate safety research. When Stuart Russell, author of the standard AI textbook, mentioned this misconception during a talk in Puerto Rico, the crowd laughed out loud.
Another misconception is that supporting AI safety research is highly controversial. In fact, people don't have to believe the risks are high to support such research, just as people don't have to believe their house is likely to burn down to justify buying home insurance: a non-negligible chance is enough.
The media may have made the AI safety debate seem more controversial than it actually is. Fear sells, and articles using out-of-context statements to proclaim imminent doom attract more clicks than balanced, nuanced ones.
Two people who know of each other's positions only through media quotes are likely to think they disagree more than they actually do. A techno-skeptic who has read about Bill Gates's position only in a British tabloid may mistakenly think Gates believes superintelligence is imminent.
Likewise, a beneficial-AI activist who knows nothing about Andrew Ng's position except his quip about overpopulation on Mars may mistakenly conclude that he doesn't care about AI safety. In fact, he does; his timeline estimates are simply longer, so he naturally prioritizes short-term AI problems over long-term ones.
AI Bias and Increasing Socio-Economic Inequality
Another reason people are concerned is the rise in socio-economic inequality caused by AI-driven job loss. Work has been an important driver of social mobility for decades. Research has shown that people whose jobs are lost to automation, especially repetitive and predictable work, are less likely to seek or receive retraining for higher-paying jobs. However, not everyone believes this.
AI bias can also be harmful in various forms. Olga Russakovsky, a Princeton computer science professor, has said that it extends beyond gender and race: data bias and algorithmic bias, sometimes called "programmatic bias", are two other factors that contribute to the problem.
Timnit Gebru, a Google researcher, has argued that the root of bias is social, not technological. She also noted that scientists like her are "some of the most dangerous people in the world, because we have the illusion of objectivity."
Technologists aren't the only ones voicing concern about AI's potential socio-economic pitfalls. Along with journalists and politicians, Pope Francis is speaking out too. At a Vatican meeting entitled "The Common Good in the Digital Age," Francis warned that AI is capable of "circulating tendentious views and false data" that could poison public debate and manipulate the opinions of millions of people, to the point of threatening the very institutions that ensure peaceful civil coexistence.
He said, "The mentality is that if we can do it, we should try it, and let's watch what happens. And if we can make money from it, we'll do lots of it." But technology is not unique in this; that mentality has been around for a long time.
The Most Interesting Controversies
We should not waste time on the myths mentioned above. Instead, let's focus on the true and interesting controversies, where even the experts disagree.
Which kind of future do YOU want? What do you want to happen with job automation? What career advice would you give to today's children?
Would you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth?
Do you want us to create superintelligent life and spread it throughout the cosmos? Will we control the intelligent machines, or will they control us?
Will intelligent machines replace us, coexist with us, or merge with us? What will it mean to be human in the age of artificial intelligence? What do you want it to mean, and how can we make that future happen? Join the conversation and tell us what you think.