Most Advanced Driver Assistance Systems (ADAS) have a drawback: they use only part of the available information on the Traffic environment, Vehicle, and Driver (TVD). Recently, research has been conducted to overcome this limitation by fusing all TVD information into a more efficient and effective assistance system. As part of this research, this paper focuses on decision-level fusion to estimate the driver's vigilance from vision information about the traffic environment and the driver state. The driver state is defined as the driver's tracked gaze direction and facial feature points, obtained using the AdaBoost face detector and the Active Appearance Model (AAM). The state of the traffic environment is defined as lane-off (lane departure) or collision risk, derived from information about the vehicle's forward area, i.e., lanes, vehicles, and ego-motion. Warnings for lane-off, collision, and driver inattention are generated by fusing this in-vehicle and out-of-vehicle vision information.
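Decision-level fusion here means that each vision subsystem first makes its own binary decision (driver attentive or not, lane-off or not, collision risk or not), and the final warnings are combined from those decisions. The sketch below illustrates this structure only; the paper's actual fusion rules, thresholds, and class names are not specified here, so `DriverState`, `TrafficState`, and `fuse_warnings` are all hypothetical names and the logic is an assumption.

```python
from dataclasses import dataclass

# Hypothetical sketch of decision-level fusion for illustration only;
# the actual fusion rules used in the paper may differ.

@dataclass
class DriverState:
    gaze_on_road: bool   # decision from tracked gaze direction (AdaBoost + AAM)
    eyes_open: bool      # decision from tracked facial feature points

@dataclass
class TrafficState:
    lane_off: bool        # lane-departure decision from forward-view lanes
    collision_risk: bool  # collision decision from vehicles and ego-motion

def fuse_warnings(driver: DriverState, traffic: TrafficState) -> list[str]:
    """Combine per-subsystem decisions into a list of warnings."""
    warnings = []
    if traffic.lane_off:
        warnings.append("lane-off")
    if traffic.collision_risk:
        warnings.append("collision")
    # Driver inattention is inferred from the in-vehicle decisions alone.
    if not (driver.gaze_on_road and driver.eyes_open):
        warnings.append("driver-inattention")
    return warnings
```

The key property of decision-level (as opposed to feature-level) fusion is that each subsystem can be developed and tuned independently; only its final decisions cross the fusion boundary.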