How to Be a Good "Partner" Between Machines and People



On the 15th, the 2018 World Robot Conference opened at the Beijing Yichuang International Convention and Exhibition Center, showcasing the latest achievements of the robot industry worldwide. At the three-day forum, the ethical and legal questions raised by the development of robots and artificial intelligence were among the topics discussed.

A legal framework should be established to regulate robots and their use

In the international AI community, many believe that AI could pose a great threat to mankind, calling for caution in the development of "AI autonomous weapons" and vigilance against the potential risks of AI. The AI they discuss usually refers to a future "strong AI" that can evolve autonomously and has human-like consciousness, whereas most current applications are "weak AI" that excels at single tasks and carries out human instructions. Even so, the challenges brought by the rapid development of artificial intelligence have gradually emerged.

To protect privacy, people put mosaics over faces when publishing photos or videos, but a research team in the United States has developed machine learning algorithms that, after training, allow neural networks to recognize the information hidden in such images or videos.
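A small sketch can illustrate why mosaics alone are weak protection. This is not the cited team's actual method; it is a minimal, assumed setup: pixelate a standard digits dataset by block-averaging, then show that an ordinary classifier trained on the pixelated images still recovers the hidden labels far above chance.

```python
# Sketch (illustrative assumptions, not the research team's method):
# a classifier trained on mosaicked images can still recover the label.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def pixelate(img, block=2):
    """Mosaic an image by replacing each block x block tile with its mean."""
    out = img.copy()
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = img[y:y + block, x:x + block].mean()
    return out

digits = load_digits()
X = np.array([pixelate(im).ravel() for im in digits.images])
X_tr, X_te, y_tr, y_te = train_test_split(X, digits.target, random_state=0)

clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"accuracy on mosaicked digits: {acc:.2f}")  # far above the 10% chance level
```

The point is only qualitative: blurring discards detail humans need, but a model trained on blurred examples can still exploit the statistics that remain.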

Nowadays, some robots are used to accompany children and the elderly. Duan Weiwen, a researcher at the Institute of Philosophy of the Chinese Academy of Social Sciences, believes that although behaviors such as a robot chatting with children or feeding the elderly seem simple, long-term coexistence may lead humans to pour their feelings into machines and rely on them, so principles need to be set to prevent excessive dependence on robots. In addition, how should responsibility be assigned for traffic accidents involving driverless cars, and how should accidents involving medical and surgical robots be handled... As intelligent robots become deeply involved in human life, experts believe that establishing a legal framework to regulate robots and their use has become an unavoidable issue for the development of the artificial intelligence and robot industries.

Zhang Bo, Dean of the Institute for Artificial Intelligence of Tsinghua University and academician of the Chinese Academy of Sciences, has said that current artificial intelligence is essentially different from human intelligence. Compared with humans, artificial intelligence systems have poor resistance to interference and weak generalization ability, and may even make major mistakes; using such AI systems therefore requires caution.

Thinking ahead about how to get along with intelligent machines and control their adverse effects

The development of artificial intelligence has experienced twists and turns, and it is still evolving.

"The universal application of artificial intelligence and intelligent automated systems is not only a technological innovation with unknown outcomes, but also a far-reaching social-ethics experiment in the history of human civilization," Duan Weiwen said.

Tian Haiping, a professor at the School of Philosophy of Beijing Normal University, said that deep machine learning and algorithmic systems give agents the characteristics of a quasi-personality or quasi-subject. At present, AlphaGo, the medical robot Watson, and Microsoft's conversational virtual robot Xiaoice still belong to "weak artificial intelligence" and are far from being true intelligent agents. Even so, they have already shown a tendency to change human life. If "strong artificial intelligence" appears in the future and becomes deeply involved in human affairs, "we must think in advance about how humans will get along with it and control its adverse effects," Tian Haiping said. "Whether and how AI can be responsible for its own behavior is, and will remain, a difficult question."

In recent years, the international AI community has attached increasing importance to the ethical and legal issues of AI and has promoted the discussion and formulation of relevant technical standards and social norms. In January 2017, an international academic conference formulated and released 23 ethical principles for artificial intelligence, including that the goal of AI research should be to create beneficial intelligence, not undirected intelligence, and that the design and operation of AI systems should be compatible with human dignity, rights, freedom, and cultural diversity. The Institute of Electrical and Electronics Engineers (IEEE) has also issued guidance on ethically aligned design, attempting to put forward ethical standards for artificial intelligence from the perspective of engineering design and production.

China's AI research and practice are at the forefront of the world, but it started relatively late on machine ethics and related safety standards; in recent years, it has been actively strengthening research in these areas.

Some experts said that AI ethics and law involve many fields, including science, business, philosophy, and law, and that an alliance should be established to address AI development, drawing on strengths from all sides to jointly advance relevant research.

Using algorithms to embed human moral codes into intelligent machines

Duan Weiwen said that the "quasi-subjectivity" of artificial intelligence systems makes their behavior resemble human ethical behavior, so the AI community is exploring whether intelligent algorithms can be used to embed human values and moral norms into intelligent machines. Some experts believe this is not only a future vision for AI but also the biggest challenge it currently faces.

Tian Haiping said that embedding moral codes into machines is an inevitable trend in the development of artificial intelligence; without this step, autonomous driving, unmanned aerial vehicles, assistant robots, and other agents will not be able to enter human life. Once machines approach human levels of autonomy, they can be allowed to develop various functions only if their decisions follow moral algorithms.

Duan Weiwen explained that the academic community generally has three approaches to making machines conform to human ethics. The first is top-down: preset a set of ethical norms in the agent; for example, an autonomous vehicle should minimize the harm a collision causes to others. The second is bottom-up: let machines learn human ethical and moral norms from data. The third is human-computer interaction: have agents explain their decisions in natural language, so that humans can grasp their complex logic and correct problems in time.
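The top-down approach can be sketched in a few lines. The scenario, action names, and harm scores below are illustrative assumptions, not a real driving system: the designer presets one rule (minimize expected harm to others), and the agent's choice is filtered through it before acting.

```python
# Hypothetical sketch of the "top-down" approach: an ethical rule preset
# by the designer constrains the agent's decision. All names and numbers
# here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_harm: float  # estimated harm to others; 0 means none

def choose_action(candidates):
    """Preset rule: among feasible actions, pick the one that
    minimizes expected harm to others."""
    return min(candidates, key=lambda a: a.expected_harm)

options = [
    Action("swerve_left", expected_harm=0.7),
    Action("brake_hard", expected_harm=0.1),
    Action("continue", expected_harm=0.9),
]
best = choose_action(options)
print(best.name)  # brake_hard
```

The bottom-up approach would instead learn such preferences from data, and the interactive approach would add an explanation step so a human can audit why "brake_hard" was chosen.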

"At present, there are no universal principles for AI ethics research, and many of the situations encountered in applications are entirely different from the past. We can therefore start from the examples encountered in applications, find the points of value conflict, and discuss what ethical considerations need to be made," Duan Weiwen said. For example, to counter bias, it is necessary to trace back through the machine learning data, improve the data, and improve the algorithm, so that AI judgments are as objective and fair as possible and consistent with human values.

Although there are many potential problems in the development of artificial intelligence, more people are optimistic about the prospects of its application. They believe that artificial intelligence can provide a safer and more intelligent life experience, and that development should not be abandoned because of the technology's potential flaws. While planning ahead to guard against possible challenges, we should also use new technology to deal with risks.

Taking privacy and security as an example, AI is not the natural enemy of privacy. Guo Yao, a professor at the School of Information Science and Technology of Peking University, said that artificial intelligence can handle some security problems that conventional methods struggle with. For example, based on behavior analysis, artificial intelligence can quickly detect malware, and machine learning can promptly detect abnormal network traffic and warn of hacker intrusions, improving the level of network security defense.
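The anomaly-warning idea mentioned above can be sketched with a standard technique. The traffic features and numbers are synthetic assumptions, not real measurements: an isolation forest is fitted on "normal" traffic only, and traffic that falls far outside that distribution is flagged.

```python
# Illustrative sketch (synthetic data, assumed features): flagging abnormal
# network-traffic behavior with an anomaly detector trained on normal traffic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Normal traffic: (packets per second, mean packet size in bytes).
normal = rng.normal(loc=[100.0, 500.0], scale=[10.0, 50.0], size=(500, 2))
# A few anomalous bursts, e.g. a suspected scanning attack.
attacks = rng.normal(loc=[1000.0, 60.0], scale=[50.0, 5.0], size=(5, 2))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(attacks)  # -1 marks an anomaly, 1 marks normal
print(flags)
```

In practice the features would come from live flow statistics, and a -1 prediction would trigger a warning for a human analyst rather than an automatic block.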

In Zhang Bo's view, however AI develops, the basic idea should not be to replace people with intelligent machines, but to help people do their work well. Humans and machines each have their own advantages; human-machine cooperation can be achieved only through mutual understanding, and humans remain the leading party in the human-machine relationship. Only in this way can artificial intelligence be led onto a development path of human-machine cooperation. (Editors in charge: Ren Zhihui, Deng Nan)
