Using Brains to define custom behaviors


Using Brains to define custom behaviors for agents

You can define custom behaviors for agents by writing custom implementations of brains. Here is the structure of RunAtBallBrain.py:

from Agent import Agent
from Ball import Ball
from Obstacle import Obstacle
from LinearAlegebraUtils import getYPRFromVector
import numpy as np
from Action import Stun, Kick

class RunAtBallBrain(object):
    '''
    A brain that runs at the first ball, stuns every enemy agent
    (when on team A), and kicks every ball.
    '''

    def __init__(self):
        pass

    def takeStep(self, myTeam=[], enemyTeam=[], balls=[], obstacles=[]):
        actions = []
        # Step one unit along the agent's forward (positive x) axis.
        deltaPos = np.array([1, 0, 0])
        # Turn to face the first ball; positions are egocentric.
        deltaRot = getYPRFromVector(balls[0].position)
        myTeamName = myTeam[0].team.name
        # Agents on team A stun every agent on the opposing team for 10 seconds.
        for agent in enemyTeam:
            if myTeamName == 'A':
                actions.append(Stun(agent, 10))
        # Kick every ball along the forward axis with intensity 100.
        for ball in balls:
            actions.append(Kick(ball, np.array([1, 0, 0]), 100))
        return deltaPos, deltaRot, actions

The brain contains a takeStep() method which receives the following inputs:

myTeam = A list of agent objects that are on the same team as the agent

enemyTeam = A list of agent objects that are on the opposing team

balls = A list of attractors, or balls, in the scene

obstacles = A list of spherical obstacles which the agents should avoid


Each of the above objects has a position represented in an egocentric coordinate frame in which the agent itself is the origin: forward is positive x, up is positive z, and right is positive y.
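
Because positions are already egocentric, the direction from the agent to an object is just that object's position vector, and the distance to it is the norm of that vector. As a small illustrative sketch (the nearestBall helper below is not part of the framework, only an example):

import numpy as np

def nearestBall(balls):
    # Positions are egocentric, so an object's distance from the agent
    # is simply the norm of its position vector.
    return min(balls, key=lambda ball: np.linalg.norm(ball.position))

A brain could then pass nearestBall(balls).position to getYPRFromVector() to face the closest ball instead of balls[0].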

The brain returns three things:

deltaPos = a delta position, which can be thought of as a direction vector

deltaRot = a delta rotation, represented in degrees in the order [yaw, pitch, roll]

actions = a list of actions the agent should perform; actions are objects of two classes, Kick and Stun
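
As a minimal sketch of this contract, a brain that stands still and issues no actions could look like the following (IdleBrain is a hypothetical name, and a zero vector is assumed to mean no rotation):

import numpy as np

class IdleBrain(object):
    '''A brain that does not move, rotate, or act.'''

    def takeStep(self, myTeam=[], enemyTeam=[], balls=[], obstacles=[]):
        deltaPos = np.array([0, 0, 0])   # no movement
        deltaRot = np.array([0, 0, 0])   # no change in [yaw, pitch, roll]
        actions = []                     # nothing to kick or stun
        return deltaPos, deltaRot, actions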


Syntax:

Kick('ball to kick', 'egocentric vector of direction to kick in', 'kick intensity')
Stun('agent to stun', 'duration in seconds')

'ball to kick' and 'agent to stun' are references to the objects passed in through balls and enemyTeam.
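
For instance, the two actions used by RunAtBallBrain above could be built directly from those references (guarding against empty lists, which the sample brain does not do):

from Action import Stun, Kick
import numpy as np

actions = []
if balls:
    # Kick the first ball along the agent's forward axis with intensity 100.
    actions.append(Kick(balls[0], np.array([1, 0, 0]), 100))
if enemyTeam:
    # Stun the first enemy agent for 10 seconds.
    actions.append(Stun(enemyTeam[0], 10))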

In RunAtBallBrain above, we want team A to be able to stun agents in team B. So we loop over the enemy team list and add a Stun object to the actions list for each agent in enemyTeam, but only if the agent's own team is 'A'. Similarly, we loop over the balls list and add a Kick object to the actions list for every ball. Kicking also requires the ball to be dynamic, so that it is moved by physics rather than manually as before; this is shown in the next section.

Enabling a physics-based ball

When you create a ball in the simulator, you can now mark it as dynamic as follows:

ball = Ball(np.array([0, 0, 0]))   # assuming numpy is imported as np
ball.isDynamic = True              # let physics move this ball
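
If the balls already exist in the world, the same flag can be flipped on each of them; a short sketch, assuming the world keeps its balls in self.world.balls as in the loop shown below:

# Mark every existing ball as dynamic so Kick actions can move it with physics.
for ball in self.world.balls:
    ball.isDynamic = True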

Now in the fixedLoop() function you can call the ball's updatePhysics() method so that it is moved with physics. This is done as follows:

for ball in self.world.balls:
    ball.updatePhysics(self.world)

The updatePhysics() method needs a reference to the world so that the ball knows the world's bounds and obstacles and can bounce off them. That's it!