Sunday, November 21, 2010

Reading Response 7---Norman Chapter 7: The Future of Robots


In this chapter, Norman introduces Isaac Asimov's three laws of robotics (or four, since Asimov later added a fundamental "Zeroth Law" that precedes the original three).
        
Zeroth law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
Asimov did not include this law in his early stories because robots at that time were far from capable of harming humanity, and it remains a complicated social issue that we have not yet reached. But we need this law because it is fundamental: someday robots may be intelligent enough to be capable of injuring humanity.

First law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate the Zeroth Law of robotics.
The author states that this law could be labeled "safety" (P197). In keeping with this law, as the author illustrates with examples, today's machines, simple or advanced, are outfitted with safeguard systems, such as sensors that detect a nearby human or obstacle to prevent the machine from running into a person and causing harm.
The first part of the law is easy to understand; however, the latter part, "do not allow harm through inaction," is a bit more difficult to grasp and, according to the author, difficult to implement. It would require a reflective-level implementation (P198).
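The "safeguard" side of the First Law can be sketched as a simple interlock. This is only an illustrative example, not anything from Norman's text; the threshold, function names, and sensor interface are all hypothetical choices for the sketch.

```python
# Hypothetical "First Law" safety interlock: a proximity sensor
# overrides the operator's command when a person is too close.

SAFE_DISTANCE_CM = 50  # assumed clearance threshold for this sketch

def motor_command(requested_speed: float, obstacle_distance_cm: float) -> float:
    """Return the speed actually sent to the motor.

    If a person or obstacle is closer than the safe threshold, the
    safeguard overrides the request and stops the machine, mirroring
    "a robot may not injure a human being."
    """
    if obstacle_distance_cm < SAFE_DISTANCE_CM:
        return 0.0  # safeguard overrides the operator's request
    return requested_speed

# Usage: the operator asks for full speed, but a person is 30 cm away.
print(motor_command(1.0, 30.0))   # safeguard stops the machine
print(motor_command(1.0, 200.0))  # path is clear, request passes through
```

Note that this only covers "do not injure"; the harder half of the law, preventing harm through *inaction*, would require the machine to anticipate harm it is not itself causing, which is exactly the reflective-level capability Norman says is difficult to implement.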

Second law: A robot must obey orders given to it by human beings, except where such orders would conflict with the Zeroth or First Law.
As the author states, this law is all about obeying people. Today's robots are not advanced enough to possess an independent mind that could obey or refuse orders from an owner. When we press a remote control, the device performs, but it does not "obey" in the literal sense; it simply does what it was programmed to do. Today, if a machine does not function as its user intends, we simply assume something is wrong with it and send it for repair. In the future, however, machines may have self-awareness and emotions and could genuinely obey or refuse. For the moment, we do not need to worry about conflicting orders, because machines are not yet that intelligent.
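The distinction between "obeying" and "executing a program" can be made concrete with a toy sketch. This is my own illustration, not from the chapter, and the command names are invented for the example.

```python
# A remote control does not "obey" in Asimov's sense: it simply looks
# up whatever handler it was programmed with and runs it. There is no
# judgment, agreement, or refusal involved.

COMMANDS = {
    "power_on": lambda: "device on",
    "power_off": lambda: "device off",
}

def press_button(command: str) -> str:
    """Dispatch a button press to its programmed handler."""
    handler = COMMANDS.get(command)
    if handler is None:
        return "no response"  # an unknown input, not a reasoned refusal
    return handler()
```

A machine that could truly follow the Second Law would instead have to evaluate each order against the higher-priority laws before deciding whether to carry it out, which presupposes exactly the independent judgment today's machines lack.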

Third law: A robot must protect its own existence as long as such protection does not conflict with the Zeroth, First, or Second Law.
Unlike the first two laws, this law is already relevant to our daily life, and machines are indeed designed to fulfill it. Besides the example of low-battery shutdown protection, we know that many machines automatically shut down when they overheat, the hair dryer being a familiar case. Again, though, these protections operate only at a surface level. Matters will become more complicated in the future if robots have independent minds and emotions, and that kind of self-protection (or self-destruction) may itself harm humans: a robot that decides to destroy itself by exploding could injure the people near it. If this happened on a large scale, or if the explosives were nuclear, it could even threaten human extinction.
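The surface-level self-protection described above, the hair dryer's thermal cutoff or a laptop's low-battery shutdown, amounts to a simple condition check. The sketch below is my own hedged illustration; the thresholds and names are assumptions, not values from the book.

```python
# Hypothetical "Third Law" style self-protection: the device shuts
# itself down to protect its own existence when it overheats or its
# battery runs critically low.

MAX_SAFE_TEMP_C = 70   # assumed overheat threshold for this sketch
LOW_BATTERY_PCT = 5    # assumed critical battery level

def should_shut_down(temp_c: float, battery_pct: float) -> bool:
    """Return True when the device should power off to protect itself."""
    return temp_c > MAX_SAFE_TEMP_C or battery_pct < LOW_BATTERY_PCT
```

What such a rule does *not* do is check the "as long as such protection does not conflict with the higher laws" clause; encoding that proviso would require the machine to reason about consequences for humans, which current machines cannot do.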

Asimov was ahead of his time, and even of ours. As technology advances, robots are likely to raise ethical and moral issues. In reality, the first concern is that deploying robots in more jobs may cause unemployment for humans. This is indeed a social issue worth thinking over, since unemployment can depress social morale or even cause turmoil. But the author offers convincing reassurance: "Throughout history, each new wave of technology has displaced workers, but the total result has been increased life span and quality of living for everyone, including, in the end, increased jobs---although of a different nature than before" (P207).
When it comes to emotion in future robots, there is one movie in particular I want to discuss. A.I. (Artificial Intelligence) is a film about a robot boy, David, who loves his mother (or owner) and wants to become a real boy so he can be with her forever. Unlike the other artificial beings in the movie, David has the emotion of love, perhaps the most complicated emotion to acquire. What strikes me most is precisely this love that David possesses. I can accept and understand that future robots may be intelligent, self-aware, powerful, and emotional. But love? Scientists consider language the attribute that distinguishes humans from animals, and I believe love is the attribute that differentiates humans from robots. If robots ever have the capacity for love, then truly serious social and ethical problems will arise. I am not talking about affection between humans and robot pets, but love between humans and humanoid robots.

Robots are created to make human lives easier. Look at where we are today: we do have a better life, where pressing a button can save time and labor. With advanced technology, robots can operate in dangerous fields of work where human lives would be at stake. But taking everything into account, I fully agree with the author that technology is a two-edged sword, always combining potential benefits with potential deficits (P211).

Thursday, November 11, 2010

Reading Response 6---Vicente Chapter 7: Management Matters: Building Learning Organizations

In this chapter, Vicente discusses the Human-tech ladder at the organizational level, which is the fourth level of the ladder. As at the team level, an organization requires communication and cooperation to ensure the system operates effectively and efficiently. However, unlike the other levels of the ladder, the author identifies several new factors unique to the organizational level: systems of incentives and disincentives, staffing levels, management structures, information flows across teams, and organizational cultures (P188). From this we can see that the organizational level is an extension of the team level, demanding a higher level of communication and cooperation.

According to the author, an organization is made up of several teams responsible for different targets. Within each team, individual performance at the physical and psychological levels keeps the task going, while information travels internally within the team. If the operating unit is a single team, this standard procedure is enough to guarantee the task is accomplished successfully. However, as the author states, the "organization as a whole can flounder miserably if the various teams pursue conflicting objectives" (P189). This is why I say the organizational level is an extension of the team level: within an organization, information needs to flow across teams, not only internally within each team but also externally among teams.

Take the example of the Challenger explosion: the failure occurred because the disagreement between managers and engineers was settled under "authority pressure." Although the engineers were responsible for the technical side of the shuttle, the final decision to launch lay with the managers responsible for the whole project. In an organization, more often than not, engineers are treated only as technical experts in the role of "advisors." However, as we all know from experience with business procedures, "advisor" is often an empty title: although they are experts in a certain discipline, their voices are seldom heard by the people making the decisions. It is not difficult to imagine why this happens. In a business context, the most important concern is the money that keeps the organization operating, or, more realistically, that generates profit. In the Challenger incident, a delayed launch would have meant a huge loss of capital and endless reports and explanations to the public, all of which the company was eager to avoid. From this we can see it is no surprise that the company decided to launch the problematic shuttle on schedule. The unheard voices of the engineers and the irresistible urge to escape responsibility contributed to the Challenger catastrophe.

The Challenger explosion reflects the fact that, in most technological organizations, management and engineering design are treated as if they were two entirely separate things (P189). The example of an engineer encountering difficulties when he steps up to the management level of the organization best illustrates the awkwardness of the gap between engineering design and management. Jeff Skoll donated millions of dollars to his alma mater to initiate an innovative dual-degree engineering/management program precisely in order to bridge this gap. As I see it in this case, the root of the problem may be addressed by specially designed curricula. However, this gap exists not only in the technology field but in others as well, and it should be curriculum designers' responsibility to bridge the gap in every discipline.