
Selection of Support System Type with Detecting Mismatches between Traffic Situation and Driver Behavior

PANG Yu (庞 煜)1,2*, LAI Xi-de (赖喜德)1,2

1 School of Energy and Environment, Xihua University, Chengdu 611731, China; 2 Key Laboratory of Fluid and Power Machinery, Ministry of Education, Chengdu 610039, China

Supposing a typical case of mismatch between human actions and the given situation, the appropriate driver support function was investigated in this paper by calculating the conditional expectation of damage under different degrees of automation. The typical case is one in which the situation is detected to be threatening while the driver is detected to take no corresponding action. Considering various realistic factors, the preference order among different levels of automation was analyzed probabilistically. Different levels of automation under action type support were differentiated by the human's reaction to the autonomous safety control actions taken by the computer.

adaptive automation; behavior checking; human-centered automation; levels of automation; situation monitoring

Introduction

Historical data on fatalities and casualties in traffic accidents in Japan show that the number of serious-injury accidents has decreased[1]. However, the number of slight-injury accidents remains large[1]. As for the causes, 92.6% of car accidents are attributed to human error, such as improper lookout, inattention, and internal or external distraction[2-4]. To reduce the number of traffic accidents, it is crucial to develop proactive safety technologies that provide the driver with appropriate assistance in a situation-adaptive manner[5]. If a computer is capable of understanding the traffic condition and the driver's psychological and physiological states, it knows what the driver is willing to do and whether his/her intended action matches the situation; thus, it can support the human intelligently[6]. Various efforts have been devoted to developing proactive safety technology by improving driver behavior detection and traffic condition monitoring methods[7-10].

Here the question arises: which driver support function should be provided to the driver? Suppose that a mismatch between driver intent and traffic condition is detected by the computer; that is to say, from the viewpoint of the computer, the driver may not be aware of a threat under the current traffic condition. What type of support should be provided? Should the support system evoke a warning or execute an autonomous action[5, 11]? If an action is initiated to avoid the threat, should the autonomous action be taken with visual display and acoustic caution (level 6.5 of automation), with visual display only (level 7 of automation), or in complete silence (level 8 of automation)?

The former is called warning type support. When the computer detects that the driver is not aware of the threat, an auditory warning is given to the driver to enhance situation awareness and urge him/her to take a safety control action[6]. Once a warning is raised, the human may choose to implement the action or to cancel and neglect the warning. The human retains the final authority over the automation, and this warning type support is fully compatible with the human-centered automation principle[12-13]. However, the human may fail to avoid the accident if he/she ignores the warning or responds to the unsafe traffic situation with a time delay. The latter is called action type support. When the computer determines that the driver is not paying attention to the threat, an autonomous safety control action is executed. Since the computer takes the safety control action automatically, the human is not maintained as the final authority. This type of support is well suited to ensuring safety when the allowable time is too limited for the human to take the necessary countermeasure or give a directive to the machine. However, autonomous action may bring negative consequences such as automation surprise and over-trust in automation[14].

This paper tries to figure out which type of support is better from a probability theoretic point of view. Influences of various levels of automation under action type support are differentiated: level 6.5, which executes automatically after telling the human what it is going to do; level 7, which executes automatically and then necessarily informs the human; and level 8, which informs the driver after execution only when he/she asks[15]. The levels of automation specify the degree to which a task is automated (see Table 1). The level of automation (LOA) of the warning type is set at 4. Since the driver's reaction time may be too slow, a high level of automation in decision making can be justified for tasks with high time pressure. For more details on levels of automation, we refer to Ref. [15].

The three above-mentioned levels of automation belong to action type support (LOA 6.5, LOA 7, and LOA 8). It is expected that the acceptance of the driver support function will vary among them. The typical mismatch between human actions and the given situation is described in Section 1. The conditional expectation of damage is derived in Section 2. In an actual situation, we should consider various circumstances, such as the driver's inappropriate situation awareness, the computer's incorrect detection of the driver's action or of the situation, the driver's physical and mental fatigue caused by false soft protection, and the driver's annoyance at frequent warnings. Accordingly, the preference order among different levels of automation is analyzed in a probabilistic manner in Section 3. Section 4 concludes on the selection of the support system type.
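
To fix notation for what follows, the four levels can be summarized in a minimal sketch (Python; the enumeration is our illustration, with level semantics following the descriptions above and Ref. [15]):

```python
from enum import Enum

class LOA(Enum):
    """Levels of automation considered in this paper (after Ref. [15])."""
    WARNING_4 = 4.0    # computer warns; the human decides and acts
    ACTION_6_5 = 6.5   # computer announces (sound and display), then acts
    ACTION_7 = 7.0     # computer acts, then necessarily informs (display only)
    ACTION_8 = 8.0     # computer acts silently; informs only if asked

INTERACTION_STYLE = {
    LOA.WARNING_4:  "auditory warning only; driver keeps final authority",
    LOA.ACTION_6_5: "autonomous action with acoustic and visual caution",
    LOA.ACTION_7:   "autonomous action with a silent visual display",
    LOA.ACTION_8:   "autonomous action in complete silence",
}
```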

1 Model Description

In the typical case, the computer determines that the driver does not perform an action that is necessary in the situation; that is, the driver is observed to initiate no corresponding action to deal with the threatening situation. In the automobile domain, a typical example is that the computer determines that the car driver does not press the brake pedal even though the leading vehicle decelerates abruptly. As described in Ref. [16], the case can be defined as follows: an emergent deceleration of the leading vehicle is captured by the computer; nevertheless, no braking action of the host vehicle has been detected yet. Thus, the computer decides to initiate some support to the driver to avoid a collision with the leading vehicle.

No matter whether the actual circumstance is safe (S) or unsafe (U), if the computer judges the given situation to be "safe" ("S") (e.g., no rapid deceleration is taken by the leading vehicle), the behavior checking function is not started and the computer continues the function of situation monitoring. If the given situation is regarded by the computer as "unsafe" ("U") (e.g., a rapid deceleration is taken by the leading vehicle), the driver behavior checking function is initiated to check whether the human's countermeasure action is taken to cope with the situation. The computer keeps monitoring the situation if the human's countermeasure action is detected ("A") (e.g., pressing the brake pedal) under such an "unsafe" situation ("U"). Only if the computer judges the given situation as "unsafe" ("U") and the driver's behavior as no action ("NA") does it take a safety control policy to avoid an accident (see Table 2). On the other hand, the computer's observation may be wrong in realistic circumstances. In notation, the real situation and the computer's observation are distinguished by quotation marks. The computer may therefore incorrectly judge a safe situation (S) as "unsafe" ("U"), or monitor the driver's behavior as taking no action ("NA") while the driver actually acts (A). In the sequel, the conditional expected damage (such as the damage of an accident or the damage of annoyance at a frequent warning) will be expressed on the assumption that the computer's judgment may be incorrect.

Table 2 Computer's state under the judgment

Judged situation   Judged behavior   Computer's action
"S"                (not checked)     Continue situation monitoring
"U"                "A"               Continue situation monitoring
"U"                "NA"              Initiate support to avoid an accident

The state of the world is defined as "(state of the traffic situation, state of the driver)"[16]. Considering different types of driver support functions, various aspects of damage (economical, physical, and mental) may vary from three standpoints: (1) when the driver actually initiates no action to cope with the unsafe situation (in the illustration, the driver does not brake to avoid a collision with the leading vehicle that decelerates abruptly), i.e., the state of the world is (U, NA), to what extent would the driver suffer the damage of an accident? (2) when the driver has already taken an action to cope with the unsafe situation, i.e., the true state of the world is (U, A), to what extent would the driver suffer annoyance due to the unnecessary support? (3) when the actual situation is safe, i.e., the state of the world is (S, NA) or (S, A), to what extent would the driver suffer the damage of inconvenience caused by wrong and inappropriate support?
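
The monitoring-and-intervention logic of this section can be sketched as follows (a minimal illustration under the assumptions above; the function and string labels are ours, not the authors' implementation):

```python
def support_cycle(judged_situation: str, judged_behavior: str) -> str:
    """One decision cycle of the computer, following Table 2.

    judged_situation: the computer's judgment of the traffic situation,
        "S" (safe) or "U" (unsafe); it may differ from the real state.
    judged_behavior: the computer's judgment of the driver's behavior,
        "A" (countermeasure action) or "NA" (no action); it is only
        consulted when the situation is judged unsafe.
    """
    if judged_situation == "S":
        # Behavior checking is not started; keep watching the traffic.
        return "continue situation monitoring"
    if judged_behavior == "A":
        # Driver is already coping with the threat (e.g., braking).
        return "continue situation monitoring"
    # Mismatch: situation judged unsafe and no driver action detected.
    return "initiate support (warning or autonomous action)"
```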

2 Conditional Expectation of Damage

2.1 Support by warning

Let Z denote the damage in the process, and let E_w(Z) denote the conditional expectation of Z under warning type support. A warning is set off if the computer observes that the driver is not responding to the unsafe situation ("U", "NA").

2.1.1 Accident suffered when (U, NA)

Since the true state is that the driver takes no action to deal with the unsafe situation, an accident is unavoidable if the computer does not set off any warning. That is to say, if the computer determines that the driver is taking an action to cope with the unsafe situation ("U", "A"), or if the computer judges the situation as safe ("S"), no alarm is set off, and the accident happens. Furthermore, if the computer correctly detects the state of the world and initiates an alarm, there is a chance that the accident can be prevented; however, the accident still occurs if the driver refuses to take the action or fails to initiate it in time. Let P(IA|warning) denote the conditional probability that the driver initiates the action (IA) on the basis of the warning, and let P("U", "NA"|U, NA) denote the conditional probability that the computer correctly monitors the state (U, NA) as ("U", "NA"). Because the situation monitoring function and the behavior checking function are independent, P("U", "NA"|U, NA) equals P("U"|U)P("NA"|NA). Let E_w(Z|U, NA) denote the conditional expectation of damage under warning type support when (U, NA); thus

$$E_w(Z\,|\,U,\mathrm{NA}) = Z_a P(\text{“U”}\,|\,U)\,P(\text{“A”}\,|\,\mathrm{NA}) + Z_a P(\text{“S”}\,|\,U) + Z_a P(\text{“U”}\,|\,U)\,P(\text{“NA”}\,|\,\mathrm{NA})\,[1 - P(\mathrm{IA}\,|\,\text{warning})], \tag{1}$$

where Z_a denotes the damage of an accident.

2.1.2 Annoyance suffered when (U, A)

Although the situation is under threat, the driver is taking a control action to prevent an accident; doing nothing is then the best choice for the computer. Nevertheless, if the computer correctly monitors the unsafe situation but wrongly detects the driver's action (A) as no action ("NA"), an unnecessary warning is set off. To a human who is pressing the brake pedal, an unnecessary warning may be bothersome. Let E_w(Z|U, A) denote the conditional expectation of damage under warning type support when (U, A); thus:

$$E_w(Z\,|\,U,A) = Z_{AN} P(\text{“U”}\,|\,U)\,P(\text{“NA”}\,|\,A), \tag{2}$$

where Z_AN denotes the damage of annoyance sustained by the human.

2.1.3 Annoyance suffered when (S, NA)

An inappropriate warning is aroused when (S, NA) if the computer incorrectly regards the safe situation as unsafe ("U") while rightly monitoring the driver's behavior ("NA"). If the human initiates the action based on the false warning, the damage of the process is evaluated as that of the inappropriate brake; if the human correctly regards the warning as a false one, the damage is the human's annoyance caused by the false warning. Similarly, let E_w(Z|S, NA) denote the conditional expectation of damage under warning type support when (S, NA); thus:

$$E_w(Z\,|\,S,\mathrm{NA}) = Z_{IB} P(\text{“U”}\,|\,S)\,P(\text{“NA”}\,|\,\mathrm{NA})\,P(\mathrm{IA}\,|\,\text{warning}) + Z_{AN} P(\text{“U”}\,|\,S)\,P(\text{“NA”}\,|\,\mathrm{NA})\,[1 - P(\mathrm{IA}\,|\,\text{warning})], \tag{3}$$

where Z_IB denotes the damage of taking an inappropriate brake.

Actually, there is little possibility that the driver initiates the action based on a false warning (in a safe situation, the driver perceives no threat to react to), so the value of P(IA|warning) is interpreted as 0 here. Thus,

$$E_w(Z\,|\,S,\mathrm{NA}) = Z_{AN} P(\text{“U”}\,|\,S)\,P(\text{“NA”}\,|\,\mathrm{NA}). \tag{4}$$

2.1.4 Annoyance suffered when (S, A)

The situation (S, A), in which the human hits a strong brake when there is no threat ahead, may rarely occur; however, it may happen when the human unconsciously touches the brake pedal in a safe traffic situation. Imagine a warning ("Take the brake! Take the brake!") arising while you are puzzled by a sudden brake taken by accident; it is not surprising to see the driver at a loss in such a circumstance. A false alarm results in human annoyance. Let E_w(Z|S, A) denote the conditional expectation of damage under warning type support when (S, A); thus:

$$E_w(Z\,|\,S,A) = Z_{AN} P(\text{“U”}\,|\,S)\,P(\text{“NA”}\,|\,A). \tag{5}$$

In total, the conditional expectation of damage under warning type support is represented as:

$$\begin{aligned}
E_w(Z) ={}& E_w(Z\,|\,U,\mathrm{NA})\,P(U,\mathrm{NA}) + E_w(Z\,|\,U,A)\,P(U,A) + E_w(Z\,|\,S,\mathrm{NA})\,P(S,\mathrm{NA}) + E_w(Z\,|\,S,A)\,P(S,A) \\
={}& Z_a \{ P(\text{“U”}\,|\,U)\,P(\text{“A”}\,|\,\mathrm{NA}) + P(\text{“S”}\,|\,U) + P(\text{“U”}\,|\,U)\,P(\text{“NA”}\,|\,\mathrm{NA})\,[1 - P(\mathrm{IA}\,|\,\text{warning})] \}\,P(U)\,P(\mathrm{NA}) \\
&+ Z_{AN} P(\text{“U”}\,|\,U)\,P(\text{“NA”}\,|\,A)\,P(U)\,P(A) + Z_{AN} P(\text{“U”}\,|\,S)\,P(\text{“NA”}\,|\,\mathrm{NA})\,P(S)\,P(\mathrm{NA}) \\
&+ Z_{AN} P(\text{“U”}\,|\,S)\,P(\text{“NA”}\,|\,A)\,P(S)\,P(A). \end{aligned} \tag{6}$$
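
As a numerical illustration, Eq. (6) can be evaluated directly. The sketch below assumes, as in the text, that situation monitoring and behavior checking are independent and that the joint priors factor as P(U, NA) = P(U)P(NA); all parameter names and the commented example values are ours, for illustration only:

```python
def expected_damage_warning(Z_a, Z_AN, p_U, p_NA,
                            p_u_U, p_u_S, p_na_NA, p_na_A, p_IA):
    """Eq. (6): total conditional expectation of damage, warning type support.

    p_u_U = P("U"|U), p_u_S = P("U"|S): situation monitoring accuracy;
    p_na_NA = P("NA"|NA), p_na_A = P("NA"|A): behavior checking accuracy;
    p_IA = P(IA|warning): probability the driver acts on a genuine warning.
    """
    p_S, p_A = 1.0 - p_U, 1.0 - p_NA
    e_U_NA = Z_a * (p_u_U * (1.0 - p_na_NA)          # judged ("U", "A"): no alarm
                    + (1.0 - p_u_U)                  # judged "S": no alarm
                    + p_u_U * p_na_NA * (1.0 - p_IA))  # alarm ignored; Eq. (1)
    e_U_A = Z_AN * p_u_U * p_na_A                    # unnecessary warning; Eq. (2)
    e_S_NA = Z_AN * p_u_S * p_na_NA                  # false warning; Eq. (4)
    e_S_A = Z_AN * p_u_S * p_na_A                    # false warning; Eq. (5)
    return (e_U_NA * p_U * p_NA + e_U_A * p_U * p_A
            + e_S_NA * p_S * p_NA + e_S_A * p_S * p_A)

# Hypothetical values, for illustration only:
# expected_damage_warning(Z_a=100, Z_AN=1, p_U=0.01, p_NA=0.5, p_u_U=0.95,
#                         p_u_S=0.02, p_na_NA=0.95, p_na_A=0.05, p_IA=0.8)
```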

2.2 Support by action

Let E_A(Z) denote the conditional expectation of Z under action type support. An autonomous action is initiated if the computer observes that the driver is not responding to the unsafe situation ("U", "NA").

2.2.1 Accident suffered when (U, NA)

If the computer correctly detects the unsafe traffic situation ("U") and the driver's lack of control action ("NA"), an autonomous safety control action is taken and the crash is avoided just appropriately. A collision is inescapable if the computer's judgment is wrong. The conditional expectation of damage under action type support when (U, NA), denoted by E_A(Z|U, NA), is expressed as:

$$E_A(Z\,|\,U,\mathrm{NA}) = Z_a P(\text{“U”}\,|\,U)\,P(\text{“A”}\,|\,\mathrm{NA}) + Z_a P(\text{“S”}\,|\,U). \tag{7}$$

2.2.2 Annoyance suffered when (U, A)

If the computer misses detecting the driver's control action (judging "NA") in an unsafe traffic situation ("U"), an unnecessary safety control action is taken autonomously. The resulting effect then varies with the LOA. At LOA 6.5, auditory and visual information is provided to avoid automation surprise; however, the driver is not free from annoyance due to the acoustic and visual interruption that accompanies the unnecessary safety control action. At LOA 7, the visual information displayed silently with the control action likewise causes the driver annoyance. At LOA 8, no information is offered, and the unnecessary support may simply be regarded as assistance to the driver. Let Z_AN-6.5, Z_AN-7, and Z_AN-8 denote the driver's annoyance under LOAs 6.5, 7, and 8, respectively. Thus, the conditional expectation of damage under action type support when (U, A), denoted by E_A(Z|U, A), is presented as:

$$E_A(Z\,|\,U,A) = Z_{AN} P(\text{“U”}\,|\,U)\,P(\text{“NA”}\,|\,A), \tag{8}$$

in which Z_AN-6.5 > Z_AN-7 > Z_AN-8 = 0.

Moreover, the annoyance under LOA 4 (Z_AN-4) would be evaluated the same as Z_AN-6.5.

2.2.3 Inappropriate brake taken and annoyance suffered when (S, NA) or (S, A)

The computer will initiate an inappropriate brake if it incorrectly detects the safe traffic situation as unsafe ("U") while correctly monitoring the driver's no-action behavior ("NA"). Additionally, at LOAs 6.5 and 7, the annoyance of auditory and visual warning arises. At LOA 8, automation surprise may be brought to the human, because no hint is given before the autonomous sudden brake. Thus, we get

$$E_A(Z\,|\,S,\mathrm{NA}) = (Z_{AN} + Z_{IB})\,P(\text{“U”}\,|\,S)\,P(\text{“NA”}\,|\,\mathrm{NA}), \tag{9}$$

$$E_A(Z\,|\,S,A) = (Z_{AN} + Z_{IB})\,P(\text{“U”}\,|\,S)\,P(\text{“NA”}\,|\,A), \tag{10}$$

in which Z_AN-6.5 > Z_AN-7, and Z_AN-8 denotes the annoyance of automation surprise (Z_AS).

Thus, the total conditional expectation of damage under action type support can be expressed as:

$$\begin{aligned}
E_A(Z) ={}& E_A(Z\,|\,U,\mathrm{NA})\,P(U,\mathrm{NA}) + E_A(Z\,|\,U,A)\,P(U,A) + E_A(Z\,|\,S,\mathrm{NA})\,P(S,\mathrm{NA}) + E_A(Z\,|\,S,A)\,P(S,A) \\
={}& Z_a \{ P(\text{“U”}\,|\,U)\,P(\text{“A”}\,|\,\mathrm{NA}) + P(\text{“S”}\,|\,U) \}\,P(U)\,P(\mathrm{NA}) + Z_{AN} P(\text{“U”}\,|\,U)\,P(\text{“NA”}\,|\,A)\,P(U)\,P(A) \\
&+ (Z_{AN} + Z_{IB}) \{ P(\text{“U”}\,|\,S)\,P(\text{“NA”}\,|\,\mathrm{NA})\,P(S)\,P(\mathrm{NA}) + P(\text{“U”}\,|\,S)\,P(\text{“NA”}\,|\,A)\,P(S)\,P(A) \}. \end{aligned} \tag{11}$$

It is noteworthy that the value of Z_AN differs under the various LOAs.
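
A parallel sketch for Eq. (11); here Z_AN stands for the LOA-dependent annoyance term (Z_AN-6.5, Z_AN-7, or Z_AN-8, with the automation-surprise value Z_AS taking its place at LOA 8 in the safe-situation cases), and this simplified illustration passes a single value:

```python
def expected_damage_action(Z_a, Z_AN, Z_IB, p_U, p_NA,
                           p_u_U, p_u_S, p_na_NA, p_na_A):
    """Eq. (11): total conditional expectation of damage, action type support.

    Z_AN is the LOA-dependent annoyance: the paper distinguishes
    Z_AN-6.5 > Z_AN-7 > Z_AN-8 = 0 for the (U, A) case, with the
    automation-surprise damage Z_AS replacing it at LOA 8 when the
    situation is actually safe. This sketch simplifies to one value.
    """
    p_S, p_A = 1.0 - p_U, 1.0 - p_NA
    e_U_NA = Z_a * (p_u_U * (1.0 - p_na_NA)   # judged ("U", "A"): no action taken
                    + (1.0 - p_u_U))          # judged "S": no action taken; Eq. (7)
    e_U_A = Z_AN * p_u_U * p_na_A             # unnecessary autonomous action; Eq. (8)
    e_S_NA = (Z_AN + Z_IB) * p_u_S * p_na_NA  # inappropriate brake; Eq. (9)
    e_S_A = (Z_AN + Z_IB) * p_u_S * p_na_A    # inappropriate brake; Eq. (10)
    return (e_U_NA * p_U * p_NA + e_U_A * p_U * p_A
            + e_S_NA * p_S * p_NA + e_S_A * p_S * p_A)
```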

3 Preference Order

If the computer were always perfect and produced no errors in traffic situation monitoring or driver behavior checking, it would be wiser for human-machine interface designers to choose action type support[16]. Even if imperfection is assumed for the computer, it can be concluded from Eqs. (1) and (7) that action type support turns out to be more effective than warning type support when the driver faces an accident threat. For the state (U, A), warning type support and LOA 6.5 of action type support may yield the same damage, since the human annoyance caused by warning type support is evaluated the same as that under LOA 6.5 of action type support. Hence the following proposition can be deduced.

Proposition 1 The preference order between warning type support and LOA 6.5 of action type support is situation-dependent: E_w(Z) > E_A-6.5(Z) if and only if

$$Z_a P(U,\text{“U”})\,[1 - P(\mathrm{IA}\,|\,\text{warning})] > Z_{IB}\,P(S,\text{“U”}). \tag{12}$$
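
For completeness, condition (12) follows from subtracting the two totals (our reconstruction of the omitted step): dropping the (U, A) and (S, A) terms, and using Z_AN-4 = Z_AN-6.5 so that the annoyance terms in the (S, NA) case cancel, Eqs. (6) and (11) give

$$E_w(Z) - E_{A\text{-}6.5}(Z) \approx P(\mathrm{NA})\,P(\text{“NA”}\,|\,\mathrm{NA})\left\{ Z_a P(U,\text{“U”})\,[1 - P(\mathrm{IA}\,|\,\text{warning})] - Z_{IB}\,P(S,\text{“U”}) \right\},$$

where P(U, "U") = P(U)P("U"|U) and P(S, "U") = P(S)P("U"|S); the difference is positive exactly under condition (12).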

In the comparison, damages under the states (U, A) and (S, A) are neglected because the probability P("NA"|A) is negligibly small in real-life situations. From the analysis in the previous subsection, LOA 7 spares the driver a certain amount of annoyance compared with LOA 6.5; thus, it is more likely that action type support at LOA 7 is superior to warning type support.

Proposition 2 E_w(Z) > E_A-7(Z) if and only if

$$Z_a P(U,\text{“U”})\,[1 - P(\mathrm{IA}\,|\,\text{warning})] > (Z_{IB} + Z_{AN\text{-}7} - Z_{AN\text{-}4})\,P(S,\text{“U”}). \tag{13}$$

For LOA 8, no human annoyance is suffered. However, automation surprise may be encountered.

Proposition 3 E_w(Z) > E_A-8(Z) if and only if

$$Z_a P(U,\text{“U”})\,[1 - P(\mathrm{IA}\,|\,\text{warning})] > (Z_{IB} + Z_{AS} - Z_{AN\text{-}4})\,P(S,\text{“U”}). \tag{14}$$
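
The three conditions share one left-hand side and differ only in the annoyance or surprise term on the right, so they can be checked together; all damage values and probabilities below are hypothetical, for illustration only:

```python
def action_type_preferred(level_term, Z_a, Z_IB, Z_AN_4, p_IA, p_Uu, p_Su):
    """Conditions (12)-(14): True iff E_w(Z) > E_A(Z) at the given level.

    level_term: Z_AN-6.5 (= Z_AN-4) for LOA 6.5, Z_AN-7 for LOA 7,
        and Z_AS for LOA 8; p_Uu = P(U, "U"), p_Su = P(S, "U").
    """
    lhs = Z_a * p_Uu * (1.0 - p_IA)
    rhs = (Z_IB + level_term - Z_AN_4) * p_Su
    return lhs > rhs

# Hypothetical damages and probabilities:
params = dict(Z_a=100.0, Z_IB=5.0, Z_AN_4=1.0, p_IA=0.8, p_Uu=0.005, p_Su=0.01)
for name, term in [("LOA 6.5", 1.0), ("LOA 7", 0.5), ("LOA 8 (Z_AS)", 2.0)]:
    print(name, action_type_preferred(term, **params))
```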

It is apparent from the right-hand sides of Eqs. (12) and (13) that adopting LOA 7 would be wiser than adopting LOA 6.5: the annoyance under warning type support (Z_AN-4) is more severe than that under LOA 7, so the right-hand side of Eq. (13) is the smaller of the two. As far as LOA 8 is concerned, automation surprise may differ among groups of drivers: compared with those who are familiar with the support system, those who are not may be more shocked. Generally, the inconvenience caused by wrong detection decreases the human's trust in the support system.

4 Conclusions

In this paper, a typical example in the automobile domain, in which the driver is detected not to take any corresponding action against an unsafe situation, is analyzed as representative of mismatches between traffic situation and driver behavior. Based on the model, the preceding sections analyzed the conditional expectation of damage under warning type support and under action type support, as well as the preference order among different levels of automation.

Furthermore, it is important for human-machine interface designers to take more of the human's feelings (such as the acceptance level of the support system) into consideration. For illustration, under warning type support the human may feel annoyed when warnings arise frequently; under action type support the human may divert attention from driving to other things because of his/her loss of authority over the automation. If this state of affairs continues, drivers may choose to disable the support system. Thus, the selection of the support system type is also related to the driver type.

References

[1] Japanese Ministry of Land, Infrastructure, Transport and Tourism. Traffic Accident Fatalities and Casualties [DB/OL]. (2002)[2012]. http://www.mlit.go.jp/road/road_e/contents02/2-2-1.html.

[2] Treat J R, Tumbas N S, McDonald S T, et al. Tri-level Study of the Causes of Traffic Accidents: Final Report [R]. Indiana University, 1979. [Publication No. DOT-HS-034-3-535-77 (TAC)].

[3] Scerbo M W. Theoretical Perspectives on Adaptive Automation [M]//Parasuraman R, Mouloua M. Automation and Human Performance: Theory and Applications. Mahwah, NJ: Lawrence Erlbaum Associates, 1996: 37-63.

[4] Inagaki T, Itoh M, Nagai Y. Driver Support Functions under Resource-Limited Situations [J]. Journal of Mechanical Systems for Transportation and Logistics, 2008, 1(2): 213-222.

[5] Inagaki T, Itoh M, Nagai Y. Support by Warning or by Action: Which is Appropriate under Mismatches between Driver Intent and Traffic Conditions? [J]. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 2007, E90-A(11): 264-272.

[6] Inagaki T. Smart Collaborations between Humans and Machines Based on Mutual Understanding [J]. Annual Reviews in Control, 2008, 32(2): 253-261.

[7] Inagaki T. Towards Monitoring and Modelling for Situation-Adaptive Driver Support Systems [M]//Cacciabue P C. Modelling Driver Behaviour in Automotive Environments: Critical Issues in Driver Interactions with Intelligent Transport Systems. London: Springer-Verlag, 2007: 43-57.

[8] Akamatsu M, Sakaguchi Y, Okuwa M. Modeling of Driving Behavior When Approaching an Intersection Based on Measured Behavioral Data on an Actual Road [C]. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Denver, Colorado, USA, 2003: 1895-1899.

[9] Engström J, Hollnagel E. A General Conceptual Framework for Modelling Behavioural Effects of Driver Support Functions [M]//Cacciabue P C. Modelling Driver Behaviour in Automotive Environments: Critical Issues in Driver Interactions with Intelligent Transport Systems. London: Springer-Verlag, 2007: 61-84.

[10] Cacciabue P C, Hollnagel E. Modelling Driving Performance: a Review of Criteria, Variables and Parameters [C]. Proceedings of the International Workshop on Modelling Driver Behaviour in Automotive Environments, Ispra, Italy, 2005: 185-196.

[11] Inagaki T. To What Extent May Assistance Systems Correct and Prevent "Erroneous" Behaviour of the Driver? [M]//Cacciabue P C, Hjälmdahl M, Luedtke A, et al. Human Modelling in Assisted Transportation: Models, Tools and Risk Methods. London: Springer-Verlag, 2011: 33-41.

[12] Sheridan T B, Parasuraman R. Human-Automation Interaction [J]. Reviews of Human Factors and Ergonomics, 2006, 1(1): 89-129.

[13] Sheridan T B. Humans and Automation: System Design and Research Issues [M]. New York: John Wiley and Sons, Inc., 2002.

[14] Inagaki T. Design of Human-Machine Interactions in Light of Domain-Dependence of Human-Centered Automation [J]. Cognition, Technology & Work, 2006, 8(3): 161-167.

[15] Inagaki T, Moray N, Itoh M. Trust, Self-confidence, and Authority in Human-Machine Systems [C]. Proceedings of the IFAC Human-Machine Systems, Kyoto, Japan, 1998: 431-436.

[16] Inagaki T, Sheridan T B. Authority and Responsibility in Human-Machine Systems: Probability Theoretic Validation of Machine-Initiated Trading of Authority [J]. Cognition, Technology & Work (Special Issue), 2012, 14(1): 29-37.

National Natural Science Foundation of China (No. 51379179); the Open Research Subject of Key Laboratory (Research Base) of China (No. szjj2014-046); Key Scientific Research Fund Project of Xihua University, China (No. Z1320406)

1672-5220(2014)06-0856-04

Received date: 2014-08-08

*Correspondence should be addressed to PANG Yu, E-mail: pangyuxhu@gmail.com

CLC number: TP181 Document code: A
