Document Type : Original Article

Authors

1 Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran

2 Prof., Department of Electrical and Computer Engineering, University of Zanjan, Zanjan, Iran

3 Ph.D., Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran

Abstract

The Incident Command System (ICS) requires agents who routinely handle a large number of search-and-rescue tasks. In addition, responders must operate in highly uncertain and dynamic environments where new tasks emerge and may spread across the disaster landscape. Finding a correct, optimal allocation that completes all tasks in a reasonable time is therefore a significant computational challenge.

In Iran, the Incident Command System operates unsystematically and lacks intelligent support, so a decision-making system that works reliably and makes correct, rapid decisions is of great importance.

This article presents a method for solving the task-allocation problem, which is formulated as a distributed constraint optimization problem (DCOP). The method combines Markov decision processes with learning techniques such as learning automata.
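As a rough illustration of the learning-automata component mentioned above (the article's own algorithm is not given here, so the update rule, the reward signal, and all names below are illustrative assumptions), a minimal sketch of a linear reward-inaction (L_R-I) automaton that an agent could use to learn which task to take:

```python
import random

def lri_update(probs, action, reward, a=0.1):
    """Linear reward-inaction (L_R-I) update: on a reward, shift
    probability mass toward the chosen action; on a penalty, leave
    the probability vector unchanged. `a` is the learning rate."""
    if reward:
        probs = [p + a * (1 - p) if i == action else p * (1 - a)
                 for i, p in enumerate(probs)]
    return probs

def select(probs):
    """Sample an action index from the probability vector."""
    r, cum = random.random(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Toy run: one agent choosing among 3 hypothetical tasks, where
# (for illustration) only task 1 yields a reward from the environment.
random.seed(0)
probs = [1 / 3, 1 / 3, 1 / 3]
for _ in range(200):
    a_idx = select(probs)
    probs = lri_update(probs, a_idx, reward=(a_idx == 1))
```

In a decentralized setting, each agent would run such an automaton over the tasks visible to it, with the reward signal derived from the joint allocation quality rather than a fixed environment response as in this toy example.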

The results of simulations and experiments show that the learning technique, combined with the decentralized behavior of the agents, can replace earlier methods and compensate for their shortcomings. The proposed method performs up to 85% better than the centralized and previous methods and is considerably better in terms of convergence and runtime.
