Using Brains to define custom behaviors

From Multiagent Robots and Systems Group

Video of Agents Chasing a Ball

Using Brains to define custom behaviors for agents

You can define custom behaviors for agents by writing custom implementations of brains. Here is the structure of RunAtBallBrain.py:

from Agent import Agent
from Ball import Ball
from Obstacle import Obstacle
from LinearAlegebraUtils import getYPRFromVector
import numpy as np
from Action import Stun, Kick
class RunAtBallBrain(object):
    '''
    A brain that makes the agent run straight at the first ball in the scene.
    '''

    def __init__(self):
        pass

    def takeStep(self, myTeam=[], enemyTeam=[], balls=[], obstacles=[]):
        actions = []
        # Always move one step forward along the agent's own +x axis.
        deltaPos = np.array([1, 0, 0])
        # Rotate to face the first ball (its position is already egocentric).
        deltaRot = getYPRFromVector(balls[0].position)
        return deltaPos, deltaRot, actions

The brain contains a takeStep() method, which receives the following inputs:

myTeam = A list of agent objects which are in the same team as the agent

enemyTeam = A list of agent objects which are in the opposing team

balls = A list of attractors (balls) in the scene

obstacles = A list of spherical obstacles which the agents should avoid


Each of the above objects contains a position represented in an egocentric coordinate frame: the agent itself is the origin, forward of the agent is positive x, upward is positive z, and right is positive y.
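To make the convention concrete, here is a small sketch (the specific distances are made up for illustration): a ball five units ahead of the agent and two units to its right would have the following egocentric position.

```python
import numpy as np

# Hypothetical egocentric position: 5 units forward (+x), 2 units right (+y), level (z = 0)
ball_position = np.array([5.0, 2.0, 0.0])

distance = np.linalg.norm(ball_position)   # straight-line distance to the ball
is_ahead = ball_position[0] > 0            # positive x means in front of the agent
is_right = ball_position[1] > 0            # positive y means to the agent's right
```

Because every position is relative to the agent, a brain never needs to know where the agent is in world coordinates.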

The brain returns three things:

deltaPos = a delta position; it can be thought of as a direction vector

deltaRot = a delta rotation, represented in degrees in the order [yaw, pitch, roll]

actions = a list of actions the agent needs to perform; these will be covered in a later section


Explanation of steps:

deltaPos = np.array([1, 0, 0]) tells the agent to always move forward.

Next, we need the agent to rotate and look at the first ball, so we set the agent's rotation to match that of a vector going from the agent to the ball. The getYPRFromVector() method in LinearAlegebraUtils.py does this for us:

deltaRot = getYPRFromVector(balls[0].position). Since the ball's position is already provided as an egocentric vector, we can use it directly.
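The actual implementation lives in LinearAlegebraUtils.py; one plausible sketch of what such a helper could look like, assuming yaw is measured about the up (+z) axis, pitch tilts toward the target, and roll is left at zero:

```python
import numpy as np

def ypr_from_vector(v):
    """Hypothetical re-implementation: yaw/pitch/roll (in degrees) that would
    point the agent's forward (+x) axis along v.
    Assumes forward = +x, right = +y, up = +z; roll is always 0."""
    x, y, z = v
    yaw = np.degrees(np.arctan2(y, x))                 # turn about the up axis
    pitch = np.degrees(np.arctan2(z, np.hypot(x, y)))  # tilt up/down toward the target
    return np.array([yaw, pitch, 0.0])
```

For example, a ball directly to the agent's right ([0, 1, 0]) yields a 90-degree yaw and zero pitch. The real getYPRFromVector() may use a different sign or axis convention, so treat this only as an illustration of the idea.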

Both the movement and the rotation get clamped by the Agent when it uses them to move. If you run the simulator using the above brain, you should get the result shown in the video above.
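The clamping itself happens inside the Agent class; a minimal sketch of what such a clamp might look like (the limit values here are made up for illustration, not taken from the simulator):

```python
import numpy as np

MAX_SPEED = 1.0      # hypothetical per-step movement limit
MAX_TURN_DEG = 10.0  # hypothetical per-step rotation limit, in degrees

def clamp_step(delta_pos, delta_rot):
    """Scale the movement vector down to MAX_SPEED and clip each rotation angle."""
    speed = np.linalg.norm(delta_pos)
    if speed > MAX_SPEED:
        delta_pos = delta_pos * (MAX_SPEED / speed)
    delta_rot = np.clip(delta_rot, -MAX_TURN_DEG, MAX_TURN_DEG)
    return delta_pos, delta_rot
```

This is why the brain can safely return a unit direction vector and a full target rotation: even a large requested turn only happens a few degrees per step.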