Is Artificial Intelligence a Threat to Humanity?

AI brings impressive applications with remarkable benefits for all of us, but it also raises notable unanswered questions with social, political, and ethical facets.

The main concerns focus on technological unemployment, while equally important issues arise regarding the potential applications of AI, and access to the data and outputs of AI models. Most of the ‘dystopia scenarios’ are inspired by the following:

The concept of an autonomous, smart machine is impressive — think for a moment of an autonomous car, which can capture its environment and dynamics and make real-time decisions to serve a predefined objective in the best possible way: moving from point A to point B.

In a military context this autonomy in decision-making becomes scary: the so-called Lethal Autonomous Weapons refer to advanced robotic systems of the future, which will be capable of hitting targets without human intervention or approval.

But who will control the design, operation, and target assignment of such killer robots? How will a robot understand the nuances of a complex situation and make life-threatening decisions?

AI systems learn by analyzing huge volumes of data, and they keep adapting through continuous modeling of interaction data and user feedback. How can we ensure that the initial training of the AI algorithms is unbiased? What if a company introduces bias via the training data set (intentionally or not) in favor of particular classes of customers or users?
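A minimal sketch can show how bias enters through the training set alone. All names and numbers below are illustrative, not from the article: a toy model that simply learns approval base rates per group will faithfully reproduce whatever imbalance the data collection introduced.

```python
from collections import Counter

# Hypothetical toy data: (group, outcome) pairs. Group "A" dominates
# the training set and was approved far more often than group "B".
training = (
    [("A", "approve")] * 90 + [("A", "deny")] * 10
    + [("B", "approve")] * 5 + [("B", "deny")] * 5
)

def train_base_rates(samples):
    """Learn P(approve | group) from labelled examples."""
    counts = {}
    for group, outcome in samples:
        counts.setdefault(group, Counter())[outcome] += 1
    return {g: c["approve"] / sum(c.values()) for g, c in counts.items()}

rates = train_base_rates(training)
# rates["A"] is 0.9 while rates["B"] is 0.5 — the "model" is biased
# simply because the training data was.
```

Nothing in the learning step is malicious; the skew in outcomes comes entirely from the data it was given, which is exactly why auditing training sets matters.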

We must ensure that these systems are transparent about their decision-making processes. This allows troubleshooting of particular cases while supporting general understanding and acceptance by the wider public.
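One hypothetical way to make a decision transparent is to use a model whose output decomposes into per-feature contributions, so any single decision can be explained after the fact. The weights and feature names here are invented for illustration only.

```python
# Illustrative transparent scorer: a linear model whose score is a sum
# of per-feature contributions, each of which can be reported back.
WEIGHTS = {"income": 0.5, "debt": -0.8, "history": 0.3}  # assumed values

def score_with_explanation(features):
    """Return the total score plus the contribution of each input."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"income": 2.0, "debt": 1.0, "history": 1.0})
# total is 0.5; `why` shows exactly how each input moved the score,
# e.g. debt contributed -0.8.
```

This is the kind of property that more elaborate explainability techniques try to recover for complex models, where the contributions are not directly readable from the weights.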

In our interconnected world, a small number of companies are collecting vast amounts of data about each one of us: access to this consolidated data would allow an accurate replay of our day-to-day life in terms of activities, interactions, and explicitly stated or implicitly identified interests; somebody (or something) could know our mobility history and patterns, our online search and social media activity, chats, emails, and other online micro-behaviors and interactions.

If you think of this AI output at scale — analyzing data at the population level — these predictions and insights could describe the synthesis, state, and dynamics of an entire population. This would obviously provide extreme power to those controlling such systems over this wealth of accumulated data.

The right to privacy is under threat, obviously when you consider the possibility of unauthorized access to one’s online activity data. But even an offline user — somebody who has deliberately decided to stay ‘disconnected’ — still has their right to privacy under threat. Imagine this disconnected user (no smartphones or other devices aware of the user’s location) moving through a ‘smart city’ — the cameras and sensors embedded in the urban environment could still identify and track them.

There are obvious big questions on who has access to this information and under what conditions.

Security is another critical aspect — if somebody compromises a smart system, for instance an autonomous car, the consequences can be disastrous. Securing intelligent, connected systems against unauthorized access is a major priority.

Technological unemployment is the unemployment ‘explained’ by the introduction of new technologies — jobs replaced by intelligent machines or systems. In the years to come, we will witness significant changes in the workforce and the markets — roles and jobs will become obsolete, industries will be radically transformed, and employment models and relationships will be redefined.

At the same time, technology will drive the formation of new roles, positions, and even scientific specializations, while allowing people to free up time from monotonous, low-value work — hopefully toward more creative activities.

AI automates processes and can make critical decisions in real time. Although in most cases the right decision is objectively determined and generally accepted, there are several examples raising ethical and moral issues.

For instance, an autonomous car that knows it is about to hit a pedestrian must decide whether it will try to avoid the pedestrian via a maneuver that is risky to its own passengers. And this must be decided in milliseconds.

The logic behind these edge decisions must be predefined, well-understood and accepted; at the same time, the detailed history of activity and decision-making of the autonomous car must be accessible and available for analysis — under certain data protection rules.
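The combination of a predefined edge-case policy and an auditable decision record can be sketched in a few lines. Everything below is a hypothetical illustration — the function name, the risk scores, and the "minimize the greater harm" rule are assumptions, not how any real vehicle works.

```python
def choose_maneuver(pedestrian_risk, passenger_risk, audit_log):
    """Hypothetical predefined policy: swerve only when the estimated
    harm to passengers is lower than the harm to the pedestrian, and
    record the inputs and the outcome so the decision can be reviewed."""
    decision = "swerve" if passenger_risk < pedestrian_risk else "brake"
    audit_log.append({
        "inputs": {"pedestrian_risk": pedestrian_risk,
                   "passenger_risk": passenger_risk},
        "decision": decision,
    })
    return decision

log = []
choose_maneuver(0.9, 0.2, log)  # high pedestrian risk -> "swerve"
choose_maneuver(0.1, 0.8, log)  # high passenger risk -> "brake"
# `log` now holds both decisions with the exact inputs that drove them.
```

The point is not the rule itself but that it is fixed in advance, inspectable, and leaves a record — the properties the paragraph above demands.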

Technology giants are investing heavily in artificial intelligence, at both the scientific/engineering level and the commercial and product-development level.

These big players have an unmatched advantage when compared to any ambitious competitor out there: the massive data sets describing a wide range of human activity (searches, communication, content creation, social interaction and more), in many different formats (text, images, audio, video).

As these companies try to establish a leading position in this new, still-forming, AI-driven market, they acquire any tech/AI startup that presents promising technological innovation.

This technological revolution brings great opportunities for prosperity and growth — we just need to somehow ensure that the technology will be applied and used in the right direction.

Key steps in the right direction are already happening — including the discussion of banning lethal autonomous weapons, and the work on explainable AI (XAI) and the ‘right to explanation’, which allow us to understand the models used in artificial intelligence and how they make particular decisions — something also required by the European Union’s General Data Protection Regulation (GDPR).
