Predator and Prey


[Video: Predator and Prey]

Implementing a Predator Brain

We want the predators to do the following:

1. Find the nearest unstunned prey, chase it, and stun it.

2. Avoid obstacles along the way.

3. Avoid running into other predators.

We implement helper methods in a new file called NavUtils.py (a rough sketch of possible implementations follows the descriptions):

findNearestUnstunned(team) : Finds the nearest unstunned agent on the provided team.

getObstacleAvoidance(obstacles) : Uses the "field" technique from the Boids algorithm to produce a vector that points away from the obstacles; this vector grows as the agent gets closer to an obstacle.

getTeamAvoidance(team) : To avoid members of the agent's own team, checks within a fixed radius around the agent and returns a vector that pushes the agent away from nearby teammates.
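NavUtils.py itself is not reproduced on this page, so the following is only a minimal sketch of how these three helpers could look. It assumes that positions passed into the brain are expressed relative to the querying agent (the predator brain below uses nearestUnstunnedAgent.position directly as a steering vector, which suggests this), and the check radius of 20 in getTeamAvoidance is an arbitrary illustrative value.

import numpy as np

def findNearestUnstunned(team):
    # Prefer agents that are not stunned; fall back to the whole team so
    # callers still get an agent back when every enemy is stunned.
    candidates = [a for a in team if not a.isStunned] or team
    if not candidates:
        return None
    return min(candidates, key=lambda a: np.linalg.norm(np.asarray(a.position, dtype=float)))

def getObstacleAvoidance(obstacles):
    # Sum a repulsive field contribution from every obstacle; each term
    # points away from the obstacle and grows as 1/distance**2.
    repulsion = np.zeros(3)
    for obs in obstacles:
        offset = np.asarray(obs.position, dtype=float)
        dist = np.linalg.norm(offset)
        if dist > 1e-6:
            repulsion -= offset / dist**3
    return repulsion

def getTeamAvoidance(team, radius=20.0):
    # Repel from any teammate inside the check radius; the distance guard
    # also skips the querying agent itself (relative distance ~ 0).
    repulsion = np.zeros(3)
    for mate in team:
        offset = np.asarray(mate.position, dtype=float)
        dist = np.linalg.norm(offset)
        if 1e-6 < dist < radius:
            repulsion -= offset / dist
    return repulsion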

The combined movement is a weighted sum of the three vectors:

total movement = 1.5*stunMovement + 0.5*avoidObstacleMovement + 0.5*avoidTeamMovement

We then implement the following predator brain:

from Agent import Agent
from Ball import Ball
from Obstacle import Obstacle
from LinearAlegebraUtils import *
import numpy as np
from Action import Stun, Kick
from NavUtils import getObstacleAvoidance, findNearestUnstunned, getTeamAvoidance

class PredatorBrain(object):
    '''
    Predator brain: chase the nearest unstunned prey and stun it, while
    steering around obstacles and teammates.
    '''

    def __init__(self):
        pass

    def takeStep(self, myTeam=[], enemyTeam=[], balls=[], obstacles=[]):
        actions = []
        # Always move forward along the agent's local x-axis;
        # the rotation computed below handles the steering.
        deltaPos = np.array([1, 0, 0])

        # Steer toward the nearest prey that is not already stunned.
        nearestUnstunnedAgent = findNearestUnstunned(enemyTeam)
        stunMovement = np.array([0, 0, 0])
        if nearestUnstunnedAgent is not None and not nearestUnstunnedAgent.isStunned:
            stunMovement = nearestUnstunnedAgent.position

        avoidObstacleMovement = getObstacleAvoidance(obstacles)
        avoidTeamMovement = getTeamAvoidance(myTeam)

        # Weighted sum of the three behaviors; rotate to face the result.
        movement = normalize(1.5*stunMovement + 0.5*avoidObstacleMovement + 0.5*avoidTeamMovement)
        deltaRot = getYPRFromVector(movement)

        if nearestUnstunnedAgent is not None:
            actions.append(Stun(nearestUnstunnedAgent, 5))

        return deltaPos, deltaRot, actions

After calculating the weighted sum of the various movement vectors, we rotate the agent to align with the resulting direction.
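getYPRFromVector comes from the project's LinearAlegebraUtils module, whose source is not shown here. Conceptually it converts a direction vector into yaw-pitch-roll angles that face the agent along that vector; under assumed conventions (x forward, z up, angles in degrees, zero roll) a sketch might look like this:

import numpy as np

def getYPRFromVector(direction):
    # Yaw/pitch that point the agent's forward (x) axis along `direction`.
    # The axis conventions and units here are assumptions, not the
    # framework's documented behavior.
    x, y, z = np.asarray(direction, dtype=float)
    yaw = np.degrees(np.arctan2(y, x))
    pitch = np.degrees(np.arctan2(z, np.hypot(x, y)))
    return np.array([yaw, pitch, 0.0])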


Implementing a Prey Brain

We want prey agents to be restricted so that they do not run away indefinitely beyond the world bounds.

To do this we use a different type of agent called RestrictedAgent. Its constructor also accepts a distance; if the restricted agent goes farther than this distance from the world origin, it stops moving.
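RestrictedAgent is provided by the framework; the snippet below is only a sketch of the behavior just described, with an invented constructor and update method rather than the framework's actual API.

import numpy as np

class RestrictedAgent(object):
    '''Sketch: an agent that stops moving once too far from the origin.'''

    def __init__(self, position, restrictionDistance):
        self.position = np.asarray(position, dtype=float)
        self.restrictionDistance = restrictionDistance

    def applyMove(self, deltaPos):
        # Once beyond the restriction distance, the agent no longer moves.
        if np.linalg.norm(self.position) > self.restrictionDistance:
            return
        self.position = self.position + np.asarray(deltaPos, dtype=float)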

We want prey to obey the following rules:

1. Run away from the nearest predator.

2. Avoid obstacles.

3. Avoid hitting the virtual boundary, turning away when close to it.

We implement two additional helper methods in NavUtils.py (again sketched below the descriptions):

getTeamNearestAvoidance(team) : Returns a vector that points away from the nearest agent in team; this vector grows as that agent gets closer.

getRestrictionField(originObject, restrictionRange) : Creates a vector that pushes the agent back as it gets closer to the restriction range; it needs an object placed at the world origin and the restriction range.
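As with the predator helpers, these two are only sketched here, under the same assumption that positions are relative to the querying agent; getRestrictionField then relies on the origin object's relative position pointing from the agent back toward the world origin.

import numpy as np

def getTeamNearestAvoidance(team):
    # Repulsion from only the nearest agent, growing as 1/distance**2.
    if not team:
        return np.zeros(3)
    nearest = min(team, key=lambda a: np.linalg.norm(np.asarray(a.position, dtype=float)))
    offset = np.asarray(nearest.position, dtype=float)
    dist = np.linalg.norm(offset)
    if dist < 1e-6:
        return np.zeros(3)
    return -offset / dist**3

def getRestrictionField(originObject, restrictionRange):
    # The origin object's relative position points from the agent toward
    # the world origin; push that way, harder as the agent approaches
    # the restriction range.
    toOrigin = np.asarray(originObject.position, dtype=float)
    distFromOrigin = np.linalg.norm(toOrigin)
    if distFromOrigin < 1e-6:
        return np.zeros(3)
    margin = max(restrictionRange - distFromOrigin, 1e-6)
    return (toOrigin / distFromOrigin) / margin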

Using the above helper methods, we implement the following prey logic.

from Agent import Agent
from Ball import Ball
from Obstacle import Obstacle
from LinearAlegebraUtils import getYPRFromVector
import numpy as np
from NavUtils import getObstacleAvoidance, getTeamNearestAvoidance, getRestrictionField

class PreyBrain(object):
    '''
    Prey brain: flee the nearest predator while steering around obstacles
    and staying inside the virtual boundary.
    '''

    def __init__(self):
        pass

    def takeStep(self, myTeam=[], enemyTeam=[], balls=[], obstacles=[]):
        actions = []
        # Always move forward along the agent's local x-axis;
        # the rotation computed below handles the steering.
        deltaPos = np.array([1, 0, 0])

        avoidMovement = getObstacleAvoidance(obstacles)
        avoidEnemyMovement = getTeamNearestAvoidance(enemyTeam)
        # obstacles[1] is the object placed at the world origin;
        # 200 matches the agents' restriction range.
        fenceAvoidMovement = getRestrictionField(obstacles[1], 200)

        # Weighted sum of the three rules; rotate to face the result.
        movement = 1.5*avoidMovement + 2.0*avoidEnemyMovement + 1.5*fenceAvoidMovement
        deltaRot = getYPRFromVector(movement)

        return deltaPos, deltaRot, actions

As with the predator, we compute a weighted sum of the rule vectors and rotate the agent to align with the resulting movement.

If we use the above brains with restricted agents whose restriction range is 200, we get the result shown in the video at the top of the page.