Final Project

Objectives

Guidelines

Project Ideas

Videos

So that other students may comment on your approach, create a short video (~5 minutes) describing the game you're developing an agent for and your approach (you are not expected to have completed the code for your approach yet). If you do not include an example of gameplay, provide a link to a source that describes the rules of your game. A narrated or captioned presentation exported from PowerPoint or similar presentation software will suffice.

Address the following questions as appropriate (some questions may not apply to your game/approach).

Groups should submit one video, which may be slightly longer than the guideline to give each group member time to address the questions for their particular approach.

In the last week of class, watch 3 (graduate students: 4) videos from the Final Project Videos folder in the Canvas Media Library or from the YouTube links provided on Canvas and write short responses to them. Several students or groups are working independently on the same game, so you should choose presentations on different games (don't watch 3 videos about Hoot Owl Hoot). Indicate which presentations you watched and then write your responses. Write at least 5 (graduate students: 7) total responses across the videos you watched (so 1-3 per video). Each response should be a sentence or two giving your thoughts on the project.

Course staff will not be able to collect, collate, and distribute your responses, so if you want the project authors to see them, please enter them as comments on Canvas or YouTube (optional).

Deliverables and Submissions

Grading

Your final project submission will be graded on the soundness of your approach, including whether your chosen algorithm is appropriate for your chosen problem and whether you have chosen an appropriate means of assessing the results.

To assess the performance of an agent, you will need some baseline agent to compare against. Use a random agent as the baseline only as a fallback when you can't easily implement a better one (such as a greedy or rule-based agent), or when there is some reason to expect that a random agent performs well. For agents where you can vary the computational resources available (CPU time, for example), a comparison across different resource levels can be useful (for example, alpha-beta to depth 4 vs. alpha-beta to depth 6). These comparisons are most informative when sampled over many different games, but two deterministic agents will always produce the same result from the same initial position. In such cases, you can introduce some randomness by adding noise to the heuristic function (if you're using one), having the agents select a random move some small percentage of the time, or starting from a randomly chosen (but still balanced) initial position. The sketch below illustrates one way to set up such a comparison.
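As a concrete illustration, here is a minimal Python sketch of a comparison harness. It assumes a hypothetical game object with initial_state(), current_player(state), legal_moves(state), apply(state, move), is_terminal(state), and winner(state) methods; none of these names are specified by the assignment, so adapt the interface to your own game. The epsilon_random wrapper implements the "random move some small percentage of the time" idea so that even two deterministic agents produce varied games.

    import random

    EPSILON = 0.05  # chance of a random move; breaks determinism between agents

    def epsilon_random(agent_move, epsilon=EPSILON):
        """Wrap a move-selection function so it occasionally plays a random legal move."""
        def choose(state, legal_moves):
            if random.random() < epsilon:
                return random.choice(legal_moves)
            return agent_move(state, legal_moves)
        return choose

    def play_game(game, agents):
        """Play one game; agents maps player id -> move-selection function."""
        state = game.initial_state()
        while not game.is_terminal(state):
            player = game.current_player(state)
            moves = game.legal_moves(state)
            state = game.apply(state, agents[player](state, moves))
        return game.winner(state)  # player id, or None for a draw

    def compare(game, candidate, baseline, n_games=200):
        """Estimate the candidate's record against the baseline over many games."""
        record = {"candidate": 0, "baseline": 0, "draw": 0}
        for i in range(n_games):
            # Alternate seats so any first-move advantage is shared evenly.
            if i % 2 == 0:
                agents, names = {0: candidate, 1: baseline}, {0: "candidate", 1: "baseline"}
            else:
                agents, names = {0: baseline, 1: candidate}, {0: "baseline", 1: "candidate"}
            result = play_game(game, agents)
            record[names[result] if result is not None else "draw"] += 1
        return record

Alternating seats across games shares any first-move advantage evenly; instead of (or in addition to) the epsilon_random wrapper, you could add small noise to your heuristic's value or start each game from a randomly chosen balanced position.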

You will not be penalized if your approach results in poor performance, as long as your approach was otherwise sound.