DEV Blog: Dog Walker – progress through iteration

Build something new. Break it. Then wipe the slate clean and try again. Eventually, your slate will bear a completed project, reinforced with all the failed iterations you washed away. And even afterwards, once the project is finished, you can reuse that knowledge. The project can be extended, or re-imagined into something new: a chance for further learning and development.

This blog post is a check-in/retrospective hybrid about the app I’m working on: Dog Walker. Dog Walker is the tentative name for a mobile app where you use real life activity to walk and race with virtual dogs. I want to talk about the ideas and projects that preceded this app, and how they’ve informed the current work on Dog Walker.

I’ve been cultivating this idea for a while, thanks to a cyclical process of trying new things, throwing them away, and starting again. This helps me build skills and knowledge in new areas without tethering myself to the initial mistakes within any one project.

I like to think of each iteration of the core idea as a chance to learn new skills and try new mechanics, which I refined by restarting the project until it was complete. That refined knowledge allowed me to create the next idea, the next project, the next mechanic. With those came new skills, which I honed in the same iterative fashion.

When I started working on the initial iteration of Dog Walker, it was very different. I wanted to work on my development skills without having to worry about creating art. I’m not an artist, and at the time I didn’t want to spend any money on commissions. So, I changed the idea to walking Pokemon instead of dogs. There were two reasons for this: I love Pokemon, and there is a ton of existing Pokemon art I could use for my ideas.

It’s also really motivating to work on your projects/ideas using existing franchises you already like. The idea of hatching Pokemon eggs by walking in real life (akin to the main-line games)? Who wouldn’t want to learn how to develop that?!

And so I did. My first app, Pokemon Day Care, taught me iPhone development and used the initial mechanic of using real life steps as in-game experience. The extension of that idea into a web app taught me a lot about C# and Web development. After that, transforming the idea into two apps for the Fitbit let me dabble in JavaScript development. Finally, I moved to creating my own IP, where the above ideas are refined in a mobile app, letting me practice my Unity development skills.

Pokemon Day Care iPhone App

Various screenshots of the Pokemon Day Care iPhone app

Skills Learned

  • iOS Development
  • Model-View-Controller design pattern
  • Swift coding language
  • Debugging
  • Playtesting
  • App deployment

Game Mechanics Introduced

  • Using real life steps to hatch, level, and evolve Pokemon
  • Collect new eggs once a day by logging in
  • Earn gym badges by collecting the teams of the Johto gym leaders
  • Challenge prolific trainers for the highest step count in the leaderboard

Pokemon Day Care is an app I made for the iPhone 6S. It used your daily steps to hatch, level, and evolve Pokemon you collected from Professor Oak. It is similar to Pokemon Go’s egg collection mechanic, but there are some key differences. This app doesn’t need an internet connection to be played. You also don’t need the app to be running (or active in the background) for your steps to count. While I loved Pokemon Go (I walked 16 kilometres the day it came out), I didn’t like having to stop in my tracks to focus on the app. So, Pokemon Day Care syncs your steps once a day, using Apple’s HealthKit, letting you focus on your exercise without checking the app!

I worked on this project during university, while I was learning about app development. One of our assignments was to make an iPhone shopping app using the Model-View-Controller software design pattern. I saw how I could apply the same design pattern to my app, and after a few iterations I managed to complete it. I learnt a lot about iOS development and coding in Swift, as well as debugging, testing, and deploying apps.

Pokemon Day Care Web App

Screenshots of the Pokemon Day Care web app

Skills Learned

  • HTML, CSS, JavaScript, and jQuery
  • C#, LINQ, and SQL
  • Azure Web App Deployment
  • Fitbit API integration
  • Creating and maintaining documentation for version updates
  • Dedicated DEV and PROD environments for quality assurance, debugging, and testing
  • Use of the Software Development Lifecycle during the development, release, and maintenance of this project

Game Mechanics Introduced

  • Compete with other users for the most steps in leaderboards
  • Weekly and Monthly leaderboards

This iteration of Pokemon Day Care was an online web application that used a user’s Fitbit step data to hatch and level Pokemon. It’s pretty similar to the iPhone app, but the leaderboard was updated to let you see other users’ steps. The application was hosted on Windows Azure, with information stored and maintained in an SQL database.

This project was initially very difficult. I restarted it multiple times out of frustration. Luckily, I was working as a .NET developer while developing this project. So while I was gaining knowledge at work, I was able to transfer it to my web application. I’ve come a long way since then, even becoming a Microsoft Certified Professional for Web Applications.

The Facebook page for the Pokemon Day Care web app

This was also probably the most traction any of these projects received. At its peak there were over 50 daily active users worldwide. Kinda bad numbers, but I didn’t advertise or promote it too heavily. However, this limited audience presented the opportunity to practice writing change-logs and dev blogs, where I described changes and updates to the site.

This project came to an end when I was trying to implement a race update, where you could race NPCs for the most steps. This was meant to motivate people (me) to run more. You’d be able to challenge an NPC, and your Pokemon team would gain experience based on the results and your steps.

Unfortunately, I wasn’t able to get access to intra-day step data from Fitbit (it needs to be approved on a case-by-case basis, and this case failed), so I stopped working on the website and eventually took it down.

After a while I was determined to try this new mechanic again, but on a more accommodating platform. I wanted to upgrade my Fitbit as well, since it was over two years old and a little worse for wear. That’s when I noticed I could develop apps for my new Fitbit Versa, and it would have access to intra-day data right off the bat.

Pokemon Day Care Fitbit Apps

Screenshots of the Pokemon Day Care apps for the Fitbit Versa

Skills Learned

  • Greater understanding of JavaScript, CSS, and SVGs

Game Mechanics Introduced

  • Race against NPCs to see who can gain the most steps in 15 minutes

These next two apps introduced the fewest new skills, but they were a great way to revisit existing skills and hone the racing mechanic on a new platform.

PDC Battle lets you race AI opponents for the most steps within a set period of time, and PDC Hatch uses your daily steps to hatch and level Pokemon.

PDC Hatch was yet another re-imagining of my previous works, but PDC Battle was the app that introduced the failed “racing” mechanic into my work. You were able to battle the Gym Leaders of Kanto, each one rising in difficulty, until you came face to face with the Elite Four. You could keep track of your badges, wins, and losses, all in the app! I had to tweak the difficulty early on, as during my play-testing I couldn’t even beat the easiest Gym Leader! Some adjustments to the difficulty settings fixed that right up. The difficulty scale was tuned to help me increase my running speed, with the most difficult trainer matching my goal pace for a 5 kilometre run. Unfortunately, my Fitbit broke before I could get that far, and now these two projects are stored on GitHub (PDC Battle here and PDC Hatch here).

PDC Hatch in action!

This project was fun to work on. I was working as a JavaScript developer while completing this project. I had also used JavaScript previously to get my web application to work, and Fitbit used a lightweight JavaScript engine for apps, so there was a lot of skill overlap. The apps could only be 10MB in size at installation time, so condensing them down was a real challenge. In a way, the limit shaped the apps more than it constrained them. The size limit is why the apps are separate, and it stripped away the unnecessary bells and whistles I would have been distracted implementing. Nonetheless, I battled ferociously with the Fatal JerryScript Error: ERR_OUT_OF_MEMORY message that taunted me throughout development. Finally, through tons of refactoring and minifying, my apps were small enough to be played without any issues!

Dog Walker App in Unity

Now we’ve come to my current project! The final iteration of this idea to incorporate real world fitness into a game comes in the form of my Dog Walker app. This takes inspiration from all the previous ideas, culminating in an app where you can walk dogs and race alongside them against your (in game) neighbours.

The biggest difference here is changing the theme from Pokemon to dogs. This was done for two reasons: 1. to allow me to publish this without any copyright issues, and 2. to let me dip a toe into contracting artists for work.

This iteration has proved the most difficult. I made the decision to create this idea in the Unity game engine. I love game development, and am now no stranger to C#, but I’ve only developed a handful of small games in the engine. You may have seen some blog posts on Egg Catch and Remember This! around here. I also have a lot of other larger ideas I want to implement using the same engine, so this project serves as an important conduit between developing small web-based mini-games and developing projects on a larger scale.

I have restarted this Unity project around five times between my initial prototyping attempts and now. Thanks to those previous attempts, my current vision for the project is at its clearest. I’m hoping this iteration of Dog Walker will be the final one.

The version I most recently abandoned used a web API I created, which would update an SQL back-end and return results. This relied too heavily on an internet connection (straying from my first app’s vision), so I canned that approach. I’ve now restructured the development to avoid the need for an internet connection wherever possible.

Meet your neighbours from the Dog Walker app!

While I don’t have much to share from this project, I can share these adorable dogs and their owners! I commissioned an Adelaide artist to help come up with concepts for these characters, and I’m really happy with how they have turned out. I worked with an artist called Krystal Davies and you can find her website here. It’s been a lot of fun collaborating with other people in Adelaide. Now it’s back to the grindstone to ensure the code in this project is as good as the art!


That’s it for today! I hope this little project retrospective was an interesting read. I find it neat that my past projects reflected skills I was using for my work positions at the time. Kind of like a symbiotic relationship where I was developing my work skills, and advancing my hobby projects at the same time.

I don’t think I’ll have another blog post for a while: I need to shift my priorities to app development. But you can always find me on twitter, where I am very bad at remembering to post!

Until next time,
Adrian

Extending Brackey’s Dialogue System à la Ace Attorney

To put a positive spin on it: it’s never been a better time to learn things. The wealth of knowledge freely available to us is remarkable. Need to know something? Google it! More of a visual learner? There’s probably a YouTube video on it somewhere. Hate the internet? (then why are you here) find a book at the library!

Whichever way you learn, there’s a rich expanse of media just waiting to be tapped into. So, when I needed to know how dialogue systems are implemented in video games, I checked for some explanations on YouTube. Lucky for me, there are plenty of tutorials for my editor of choice, Unity.

Following Brackey’s Dialogue System Tutorial

Brackeys has a fine tutorial here that I followed. I really liked this tutorial as it’s clear, simple, and easy to follow. By the end of the video, I had myself a rudimentary dialogue system with the following:

  • A Dialogue class to hold a speaker’s name and dialogue
  • A dialogue UI to display the speaker name and their dialogue on the screen
  • A DialogueManager class to update the UI with the current dialogue information
  • An animation to show/hide the dialogue box when necessary
  • A co-routine delay so the dialogue displays one letter at a time (a minimal sketch of this follows below)
  • A button to trigger all of the dialogue system’s functionality

Brackey’s Dialogue System final result
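That letter-by-letter effect boils down to a short coroutine. Here’s a minimal sketch in my own words rather than Brackeys’ exact code; dialogueText stands in for whatever UI text component the sentence is written to:

// Types one sentence out a character at a time; dialogueText is a placeholder name.
IEnumerator TypeSentence(string sentence)
{
    dialogueText.text = "";
    foreach (char letter in sentence)
    {
        dialogueText.text += letter;
        yield return null; // wait one frame between letters
    }
}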

For a 16 minute video, that’s pretty solid! Now I understand how to implement a dialogue system in Unity. I can even use my existing Unity knowledge to extend this dialogue system into something more complex.

For some context, my need to understand dialogue systems comes from needing to implement one for a little game I’m working on. I need to be able to talk to people, pick from different talk points to start conversations, and show them items from my inventory. This dialogue system is a great foundation, and I can extend it to fit my requirements.

The end result I’m looking for here is something akin to the visual novel adventure games of the Ace Attorney series (shown below).

An example of the Conversation UI in Phoenix Wright: Ace Attorney (DS)

Extending the Dialogue System

Before we begin updating the dialogue system there are two small changes I want to make. Firstly, we hold the sentences of dialogue in an array. Let’s switch that to a list. This will make editing the length of the dialogue easier, as lists do not have fixed sizes like arrays.

Next, I want to add some more flexibility to what is displayed on the UI. Currently, only one person can be talking in one dialogue. What if we want to have a conversation between two people?

To fix this, I’ve created a new DialogueUI class, and that is going to be what is stored in our list in our Dialogue class. Now the speaker is stored with the sentence, and an image of the speaker is added too.

public class DialogueUI
{
    public string name;
    public string sentence;
    public Sprite image;

    public DialogueUI(Actor actor, string sentence, Emotion emotion)
    {
        name = actor.FirstName;
        this.sentence = sentence;
        image = EmotionHelper.GetSpriteOfEmotion(emotion, actor);
    }
}

Don’t mind the EmotionHelper class; it just gets the right image for the emotion of the person currently speaking.
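For completeness, here’s roughly what the Dialogue class looks like after the array-to-list change. This is a hedged sketch rather than a verbatim copy of my class (the field name sentences is carried over from the tutorial’s original string array):

using System.Collections.Generic;

[System.Serializable]
public class Dialogue
{
    // Each entry carries its own speaker, sentence, and emotion sprite,
    // so one Dialogue can switch between characters mid-conversation.
    public List<DialogueUI> sentences = new List<DialogueUI>();
}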

We’re also going to need to update the dialogue UI to handle the speaker’s image.

The updated Dialogue UI

Now the dialogue on the screen can switch between different characters, and even different emotions (for example, the talker going from angry to shocked)!

Creating a Conversation UI for Characters

Now to the extension! The first thing I want to implement in this dialogue system is a “conversation UI”. What I mean by this is the hub of sorts for when you interact with a character. From the conversation UI you will be able to trigger dialogue with characters from their talk points, and present them with items to react to.

Designing the Conversation UI

Since the conversation UI differs from the dialogue UI, we’re going to need to make that as well. Luckily, it’s very similar to the updated dialogue UI.

This time around, the character’s name is above the box, and the box itself is split. Talk points go on the left, and the present-item options go on the right (don’t mind the image of Link from Hyrule Warriors; it’s a placeholder).

The Conversation UI

Scripting the Conversation UI

A lot of the scripting for the conversation UI is going to mirror the dialogue UI from Brackeys’ tutorial. Having that existing work is really beneficial here, as it can be used as a reference.

To hold the conversation data, we’re going to create a Conversation class. This Conversation class is going to hold information on what dialogues you can trigger with a character, as well as whether or not you’ve triggered that dialogue before.

public class Conversation 
{
    public Actor actor;
    public TextBox[] textboxes = new TextBox[4];
    public Emotion emotion;

    public void SetAsVisited(string text)
    {
        foreach (TextBox textBox in textboxes)
        {
            if (textBox != null && textBox.text == text)
            {
                textBox.visited = true;
            }
        }
    }
}

public class TextBox
{
    public string text;
    public bool visited;

    public TextBox(string text)
    {
        this.text = text;
        visited = false;
    }
}

For now, I’ve artificially capped the number of conversations you can trigger for a character at any one time at four. I don’t think I’ll need more than that, but I can always change it if needed.

Next, much like the DialogueManager, we’re going to need a ConversationManager that handles showing/hiding the Conversation UI and populating its contents.

public class ConversationManager : Singleton<ConversationManager>
{
    public Animator animator;
    public bool canvasOpen;
    public Image speaker;
    public TextMeshProUGUI speakerName;
    public Button[] talkItems;

    private PlayerMovement playerMovement;
    private DialogueInitializer dialogueInitializer;
    private Conversation lastConversation;
    private Camera mainCamera;

    void Awake()
    {
        mainCamera = Camera.main;
        dialogueInitializer = DialogueInitializer.Instance;
        playerMovement = FindObjectOfType<PlayerMovement>();
        canvasOpen = false;
    }

    void Update()
    {
        if (canvasOpen && Input.GetKeyDown(KeyCode.Backspace)) {
            EndConversation();
        }
    }

    public void StartConversation(Conversation conversation)
    {
        mainCamera.GetComponent<CinemachineBrain>().enabled = false;
        playerMovement.DisablePlayerMovement();
        animator.SetBool("IsOpen", true);
        speaker.sprite = EmotionHelper.GetSpriteOfEmotion(conversation.emotion, conversation.actor);
        speakerName.text = conversation.actor.FirstName;
        canvasOpen = true;

        for (int i = 0; i < talkItems.Length; i++)
        {
            if (conversation.textboxes[i] != null)
            {
                talkItems[i].gameObject.SetActive(true);
                talkItems[i].GetComponentInChildren<TextMeshProUGUI>().text = conversation.textboxes[i].text;
                if (conversation.textboxes[i].visited)
                {
                    talkItems[i].GetComponent<Image>().color = Color.grey;
                }
            }
            else
            {
                talkItems[i].gameObject.SetActive(false);
            }
        }

        lastConversation = conversation;
        //Cursor.lockState = CursorLockMode.None;
    }

    public void EndConversation()
    {
        animator.SetBool("IsOpen", false);
        canvasOpen = false;
        playerMovement.EnablePlayerMovement();
        mainCamera.GetComponent<CinemachineBrain>().enabled = true;
        //Cursor.lockState = CursorLockMode.Locked;
    }

    public void TriggerDialogue(TextMeshProUGUI text)
    {
        EndConversation();
        lastConversation.SetAsVisited(text.text);
        dialogueInitializer.TriggerDialogue(text.text, true);
    }

    public void ReturnToConversation()
    {
        StartConversation(lastConversation);
    }
}

There’s quite a bit to unpack there, but let’s try to step through it:

Awake and Update methods

Awake deals with setting some private variables. I’ve defined the DialogueInitializer (responsible for showing dialogue) as a singleton, so that explains the .Instance.

The Update method listens for when to close the UI (a Backspace press while the canvas is open).

StartConversation method

The StartConversation method begins by disabling the camera and player movement, and bringing the UI into the screen.

It then updates the speaker name and image (from the Conversation parameter that was passed into it), and sets the canvasOpen variable to true.

It then loops through the talk points defined in the Conversation parameter, and sets them on the UI. The last thing it does is set the lastConversation variable to the Conversation parameter. This enables the conversation to be reopened once a dialogue is triggered and completed.

EndConversation method

The EndConversation method reenables the camera and player movement, and removes the UI from the screen. Not too much here.

TriggerDialogue and ReturnToConversation methods

The TriggerDialogue function is triggered when a talk point button is pressed. It will hide the conversation UI, set the talk point as visited, and then trigger the dialogue.

The ReturnToConversation method brings the conversation UI back into view once a dialogue triggered from it has ended.

Implementing the Conversation UI

Brackey’s tutorial triggered the dialogue via a button. That is not going to work for my implementation, as the Conversation UI needs to change depending on which character it is triggered for.

For that, we need to have some scripts on our characters! So, let’s go into our scene view and look at my wonderfully modelled NPC character:

The Actor Selector script on a NPC game object

As you can see, I’m quite the 3D modeller. On the left, I’ve attached an ActorSelector class to the NPC. This just lets me choose the character (actor) from an enum I’ve defined. This enum holds all the characters in the game.
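The ActorSelector itself is tiny. Here’s a sketch of what it amounts to; the enum values shown are just the ones that appear later in the ConversationInitializer, and the real list is longer:

// Every character in the game gets an entry in this enum.
public enum ActorList
{
    BLOCKING_GUARD,
    DETECTIVE,
    CORONER
    // ...and so on for the rest of the cast
}

// Attached to each NPC so other scripts can ask "who is this?"
public class ActorSelector : MonoBehaviour
{
    public ActorList actor; // picked from a dropdown in the inspector
}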

Now, to my wonderfully modelled player character:

The ConversationTrigger script on a child of the Player game object

Okay, I lied, I got that model online. I’ve attached a ConversationTrigger class to the player, and this is what enables the conversation UI to appear when we are close enough to the character we want to talk to.

public class ConversationTrigger : Singleton<ConversationTrigger>
{
    private bool startConversation = false;
    private ActorSelector conversationWith;

    private ConversationInitializer conversationInitializer;

    void Awake()
    {
        conversationInitializer = ConversationInitializer.Instance;
        conversationWith = null;
    }

    void Update()
    {
        if (startConversation && Input.GetKeyDown(KeyCode.Space)) {
            conversationInitializer.TriggerConversation(conversationWith.actor);
        }
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.tag == "NPC")
        {
            startConversation = true;
            conversationWith = other.gameObject.GetComponent<ActorSelector>();
        }
    }

    void OnTriggerExit(Collider other)
    {
        if (other.tag == "NPC")
        {
            startConversation = false;
            conversationWith = null;
        }
    }
}

Complementing this script, the player has a trigger collider attached to a child object. This collider sticks out in front of them. This extra collision is what this script is checking for. It allows us to walk up to a character in the game and start a conversation, much like you would in real life (sans a picture of their face and name appearing in front of your eyes).

ConversationInitializer class

Okay, so now we have our conversation UI, a way to trigger it, and a way to control what it shows. So how does it know what to show? With our ConversationInitializer class, that’s how!

public class ConversationInitializer : Singleton<ConversationInitializer>
{
    private ConversationManager conversationManager;
    private DialogueInitializer dialogueInitializer;

    void Start()
    {
        conversationManager = ConversationManager.Instance;
        dialogueInitializer = DialogueInitializer.Instance;
        ConversationDatabase.InitializeConversations();
    }

    public void TriggerConversation(ActorList actor)
    {
        if (actor == ActorList.BLOCKING_GUARD)
        {
            conversationManager.StartConversation(ConversationDatabase.BLOCKING_GUARD);
        }
        else if (actor == ActorList.DETECTIVE)
        {
            dialogueInitializer.TriggerDialogue(DialogueKeys.DETECTIVE_CORONER_INTRO, false);
        }
        else if (actor == ActorList.CORONER)
        {
            conversationManager.StartConversation(ConversationDatabase.CORONER);
        }
    }
}

You may have seen the conversationInitializer.TriggerConversation(conversationWith.actor) line up above in the ConversationTrigger class and wondered where it goes. The answer is here. This class initialises all the possible conversations that can be triggered and shows them based on the NPC the player is currently looking at. When it boils down to it, it’s just a very long if-else statement, but hey, they’re always hiding somewhere.

DialogueInitializer class

In the same vein, we have a DialogueInitializer class.

public class DialogueInitializer : Singleton<DialogueInitializer>
{
    private DialogueManager dialogueManager;
    public Dictionary<string, Dialogue[]> dialogues;
    
    void Start()
    {
        dialogueManager = DialogueManager.Instance;
        DialogueDatabase.InitializeDialogueDictionary();
    }

    public void TriggerDialogue(string key, bool returnToConversation)
    {
        Dialogue dialogue = DialogueDatabase.dialogues[key];

        dialogueManager.StartDialogue(dialogue, returnToConversation);
    }
}

This is going to initialise all the possible dialogues in the game into a dictionary. Then, when one needs to be displayed, the key is passed into TriggerDialogue to find and display it.

There’s a neat trick here, wherein the text of the talk button on the conversation UI is the key for the dialogue within the dialogue dictionary. This saves me from having to wonder what to set as the talk button text.
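To make that concrete, here’s roughly the shape of DialogueDatabase. This is a sketch, not the real file; the first key is a placeholder, and the Dialogue contents are omitted:

public static class DialogueDatabase
{
    public static Dictionary<string, Dialogue> dialogues;

    public static void InitializeDialogueDictionary()
    {
        dialogues = new Dictionary<string, Dialogue>();

        // The key doubles as the text shown on the talk button in the conversation UI.
        dialogues.Add("Where were you last night?", new Dialogue());
        dialogues.Add(DialogueKeys.DETECTIVE_CORONER_INTRO, new Dialogue());
    }
}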

Presenting Items to Characters

Now, how do we present an item? Luckily for us, it’s a similar notion to what we’ve done previously.

Updating the ConversationManager class

First things first, we’re going to need to add references to the UI components pertaining to presenting items.

public class ConversationManager : Singleton<ConversationManager>
{
...
    public Button[] talkItems;
    public Image inventoryItemImage;
    public TextMeshProUGUI inventoryItemName;
    public Button inventoryLeft;
    public Button inventoryRight;
...

    void Awake()
    {
        mainCamera = Camera.main;
        dialogueInitializer = DialogueInitializer.Instance;
        playerMovement = FindObjectOfType<PlayerMovement>();
        canvasOpen = false;

        inventoryIndex = 0;
        UpdateInventoryItem();
        UpdateInventoryButtons();
    }

...

    public void StartConversation(Conversation conversation)
    {
        mainCamera.GetComponent<CinemachineBrain>().enabled = false;
        playerMovement.DisablePlayerMovement();
        animator.SetBool("IsOpen", true);
        speaker.sprite = EmotionHelper.GetSpriteOfEmotion(conversation.emotion, conversation.actor);
        speakerName.text = conversation.actor.FirstName;
        UpdateInventoryButtons();
        canvasOpen = true;

        for (int i = 0; i < talkItems.Length; i++)
        {
            if (conversation.textboxes[i] != null)
            {
                talkItems[i].gameObject.SetActive(true);
                talkItems[i].GetComponentInChildren<TextMeshProUGUI>().text = conversation.textboxes[i].text;
                if (conversation.textboxes[i].visited)
                {
                    talkItems[i].GetComponent<Image>().color = Color.grey;
                }
            }
            else
            {
                talkItems[i].gameObject.SetActive(false);
            }
        }

        lastConversation = conversation;
        //Cursor.lockState = CursorLockMode.None;
    }

...

    public void MoveInventoryLeft()
    {
        if (inventoryIndex == 0)
        {
            inventoryIndex = InventoryManager.Instance.items.Count - 1;
        } else
        {
            inventoryIndex--;
        }

        UpdateInventoryItem();
    }

    public void MoveInventoryRight()
    {
        if (inventoryIndex == InventoryManager.Instance.items.Count - 1)
        {
            inventoryIndex = 0;
        }
        else
        {
            inventoryIndex++;
        }

        UpdateInventoryItem();
    }

    public void UpdateInventoryItem()
    {
        inventoryItemImage.sprite = InventoryManager.Instance.items[inventoryIndex].image;
        inventoryItemName.text = InventoryManager.Instance.items[inventoryIndex].name;
    }

    public void UpdateInventoryButtons()
    {
        if (InventoryManager.Instance.items.Count < 2)
        {
            inventoryLeft.interactable = false;
            inventoryRight.interactable = false;
        } else
        {
            inventoryLeft.interactable = true;
            inventoryRight.interactable = true;
        }
    }

    public void PresentItem()
    {
        Item itemToPresent = InventoryManager.Instance.items[inventoryIndex];

        EndConversation();
        dialogueInitializer.TriggerDialogue(lastConversation.actor, itemToPresent);
    }
}

Some small additions here and there (I wish WordPress would let me add formatting to the code snippets…).

We’ve added some variables to handle the item presenting side of things. Since this is presenting items, they are being accessed from our inventory. You’ll see some references to that in the script.

In the Awake() function, we’ve added some simple initialization. We set the index of our Inventory list and then populate the UI data with the item at that index. The code will disable item switching if only 1 item is in the inventory (there will never be less than 1 in my game).

The last new function, PresentItem(), takes the current item on the UI, and the current person we are having a conversation with, and sends it through to the DialogueInitializer class.

Updating the DialogueInitializer class

Now we’ve got a second way we want to trigger dialogue: by presenting an item. To account for this, I’ve overloaded the TriggerDialogue function in the DialogueInitializer class with a function that takes the item being presented and the person being presented with it.

    public void TriggerDialogue(Actor actor, Item item)
    {
        if (item == ItemDatabase.GetItem(ItemId.ID))
        {
            if (actor == ActorDatabase.BLOCKING_GUARD)
            {
                    TriggerDialogue(DialogueKeys.GUARD_SHOWING_ID, true);
            }
        } 
    }

In this function, it’s just a matter of matching the item and person to the dialogue needed. No matter how deep I try to bury my massive if-else blocks, they always come to light sooner or later.

My extension of the Brackey’s dialogue system

Retrospective

Using information available online to gain knowledge, and then extending upon it, is a great way to learn. Understanding how to search for and implement a solution to a problem is a fundamental learning method for a programmer. A simple online tutorial has now given me the wherewithal to implement dialogue systems of varying complexity in all of my projects.

However, as I type this up, I’ve noted a few things that can be worked upon to further improve/extend this dialogue system.

Implementing choices

At the moment, there’s no way to handle branching dialogue, such as when the player needs to make a choice. Choices are commonplace in adventure games with visual novel elements, so this is something that will need to be implemented eventually.

Confusing class names

The class names are somewhat confusing. Conversation and Dialogue have very similar meanings as words, but I couldn’t think of a more explanatory naming convention (naming has never been my strong suit). To combat this, I made a readme file for my GitHub repository, which explains these things in case I need a refresher. Not ideal, but still workable.

Changing presenting items display

At the moment the conversation UI displays a single item. It looks fine since the inventory is tiny, but if the inventory was expanded to include a larger list, it would be cumbersome to have to cycle through it every time you needed to find a specific item to present. In the future I’ll look at updating the presenting items UI.

Dictionary Key Constraint

Using the dialogue dictionary’s key for the talk button text means that I won’t be able to use the same talk button text for two different dialogues. That sucks if I want to have a generic “Need any help?” button for multiple characters, but I can get around this by adding identifiers to the keys and formatting them for the talk button text.
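If I do go down that path, the workaround would look something like this. Purely illustrative; nothing like it exists in the project yet:

// Hypothetical: prefix the dictionary key with a character identifier,
// then strip the prefix before showing the text on the talk button.
string key = "GUARD|Need any help?";                      // unique dictionary key
string buttonText = key.Substring(key.IndexOf('|') + 1);  // "Need any help?" shown on the button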


With that, I think I’ll switch from learning new things to tackling the mountain of games in my “to play” pile! It’s the perfect time, because I’m going back into my code and finding some pretty heinous mistakes. For a good portion of a week I forgot what constructors were and started using static methods instead. Boy, is my face red. If anyone needs me, I’ll be finishing Act III of We Happy Few 😊.

I hope you enjoyed this little write-up! If you have any feedback on how to improve this dialogue system, please let me know!

Feel free to comment below, or contact me on twitter.

Until next time,
Adrian

Recreating the Clefairy Says mini-game in Unity

As I try to complete more projects, I advance my skills as a programmer (at least I think so). That is great. But then I look back on my past projects with a little remorse: I could have designed them much better now than I did back then. There are some things in my past work that make present me blush. But I think that’s a good thing. Otherwise I’d be stagnating in my professional development.

I mention this because after my latest project, I did a lot of studying on clean code: the benefits of it, how to write it, and how to spot, avoid, and fix unclean code. I watched a variety of online courses, and read this big, beautiful book on Clean Code.

Then I went back to review my game. It didn’t look so good.

I debated refactoring everything, but a few things stopped me: the game worked and was released, so this refactoring would only make things look better under the hood. Not to mention, I’ve moved onto other projects, which I don’t want to delay working on to fix something that isn’t broken. I’d much rather work on getting new projects aligned to my current standards than constantly cycle over old projects.

With that in mind, I hope that this project can be the foundation for cleaner code in the future, both for me and for you! If you can spot some things that could have been done better, kudos! (And feel free to tell me what they are – I’d love to know!)

This project replicates one of my favourite games to play growing up: the Clefairy Says mini-game, from Pokemon Stadium on the Nintendo 64. I wanted to replicate it because I wanted to familiarize myself with the Unity Timeline, something I have never used in a project before.

Learning a new skill while recreating a childhood favourite turned out to be surprisingly fun.

In Clefairy Says, you have to repeat a series of arrow patterns. The arrows are written on a chalkboard, then removed, and you have a short amount of time to recount them. Get too many wrong and you’re out. With each wave of arrow patterns, more arrows are added until the chalkboard is full.

Since the arrow patterns come out in waves, I thought it would be a good idea to make that functionality with timelines. To be honest, that thought faded once I implemented them. I had to create entirely new timelines for each wave, where I thought I could reuse one and just tweak some sort of parameter. Unless I’m very silly and missed some way to do that, I would have been much better off coding the waves with co-routines. Regardless, here is my foray into Timelines, and how I used them to recreate Clefairy Says!

Game Scene Set Up

First, I’ll go over my canvas set up. Since the game play revolves around repeating patterns, there really isn’t much that exists outside of them.

I’ve got some text components that make up the headers, as well as some other game objects that contain components for the lives, and the timer for player input. The big-ticket item here is the arrow game objects themselves.

The arrow game objects are made up of an Image component and an Arrow script. The arrow script has a public variable for the KeyCode of the directional arrow, and a function to initialize that along with a corresponding image (to show the directional button to press).

using System;
using UnityEngine;
using UnityEngine.UI;

[RequireComponent(typeof(Image))]
public class Arrow : MonoBehaviour
{
    public KeyCode arrowKey;

    public void SetArrowKeyAndUpdateSprite(Tuple<KeyCode, Sprite> arrow)
    {
        arrowKey = arrow.Item1;
        GetComponent<Image>().sprite = arrow.Item2;
    }
}

For neatness, I’ve parented all the arrows to an empty GameObject, and spread them evenly within the canvas. Once I add in the art (created by the talented baelfin), the cute classroom setting comes to life!

A screenshot of the game in the Unity inspector, with the hierarchy of game objects on the left.

Creating the arrow waves with timelines

So, for every arrow wave we’re going to need two types of timelines: one to show the arrows, and one to show the player’s guesses of the pattern. There’s also an intro timeline for each wave, but since that one doesn’t change from wave to wave, it only needs to be created once. So that’s 11 timelines in total.

For the timelines to show the arrows to press, all I did was create a Timeline and assign it to a Playable Director component on a new game object. After that, I could use the Timeline window to hide and show arrows as I needed to. Once I had all the arrows for the wave visible, I added a Signal Receiver component to the gameobject, and added a Signal Emitter to the end of the timeline. This emitter triggered code in my GameManager class to track the player input.
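On the GameManager side, a Signal Receiver reaction is just a UnityEvent pointed at a public method. Something like this sketch; the method and field names are my placeholders, not necessarily what’s in the repo:

// Wired up as the Signal Receiver's reaction for the end of a "show arrows" timeline.
public void OnArrowsShown()
{
    acceptingInput = true;                  // start tracking the player's arrow presses
    inputTimer.gameObject.SetActive(true);  // and kick off the input timer
}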

See how the arrows are visible as the timeline moves? (It’s a little sped up)

I had a similar setup for the timelines that were triggered once the player had guessed the arrows. Each corresponding arrow wave had another timeline, which used signal emitters to trigger the movement animation based on which button the player had pressed (once again, this logic lives on the GameManager class).

Tying it together

This is where I get a little embarrassed. Most of the logic for this game is encapsulated into one script, as I didn’t refactor anything out into other scripts. It’s a little messy.

Anyway, the timelines are all created and doing their thing. The signal emitters at the end of showing arrow timelines trigger the player inputs, and the timer at the end of the player inputs triggers the end wave timeline. And the signal emitters at the end of the end wave timelines trigger the next wave (or the end screen, if the player beats all waves with any health left).

And when the scene starts, the intro timeline is set to play on awake, so the game begins as soon as the scene starts.

The Intro Timeline’s Playable Director component

The only thing we haven’t done yet is assign the arrows their values. They need to be assigned at random at the beginning of the game. Also, they don’t change between waves – all the arrows are set, and then the amount shown is increased each wave. Here’s the logic that does that in the GameManager script:

public Arrow[] arrows;

void Start()
{
    SetArrowsToPress();
}

public void SetArrowsToPress()
{
    Tuple<KeyCode, Sprite>[] keys =
    {
        Tuple.Create(KeyCode.UpArrow, upArrowSprite),
        Tuple.Create(KeyCode.DownArrow, downArrowSprite),
        Tuple.Create(KeyCode.LeftArrow, leftArrowSprite),
        Tuple.Create(KeyCode.RightArrow, rightArrowSprite)
    };

    foreach (Arrow arrow in arrows)
    {
        arrow.SetArrowKeyAndUpdateSprite(keys[Random.Range(0, keys.Length)]);
    }
}

I’ve exposed the arrows as a public variable so I can drag them in via the inspector. From there, the SetArrowsToPress function creates the four possible buttons to press. It does this with tuples so the keycode and the image to display on the arrow can be passed as one variable.

Then we just loop through our arrows, and call the function we defined on our Arrow script way up above.

Now, there’s a whole lot of other mess under the hood, mostly relating to animations and receiving the player input, but since this was about using Unity timelines, I’ll end it here.

If you’re so inclined to see the spit and polish, you can dig deeper into the code at the GitHub link here.


I hope you enjoyed this little write-up! If you have any feedback on the game, or any tips on development or this write-up, please let me know!

Feel free to comment below, or contact me on twitter.

Until next time,
Adrian

DEV Blog: Spirit Cleanser Enemies

A four-day long weekend coupled with a state-wide lockdown has done wonders for my productivity. Positivity is always welcome during times like these, which is why I’m thankful this streak of solitude granted me the time to revisit a prototype I had shelved.

There’s a bit of irony in why I’d avoided it for so long: I was frustrated with how the player handled being hit by a large enemy. It was only a rectangular box, but when it attacked from above, the player just squished between it and the floor instead of recoiling away. Should I have just fixed it? Perhaps. Did I instead spend a significant amount of time learning a visual scripting plugin for Unity, in the hope it would prevent future issues like the one above? Perhaps.

After all that I realized I’d rather write code than drag nodes, and after only 30 minutes of concentrated debugging, this insignificant bug was squashed. I wasted quite a bit of time on my time saving scheme. Nevertheless, glass-half-full as I am, this means that my base enemy logic is complete. So, I thought I would share it with you. Now I have a nice little prefab, consistent with every enemy, that can be built on to ensure unique yet unified foes.

That is, you can possess all enemies, and you can cleanse all enemies. This process is how every enemy in the game will be defeated.

Enemy Possession and Cleansing

Enemy possession and enemy cleansing are two parts of one whole. Together, they form the foundation of the whole game.

When the player projects their spirit, they can “spirit dash”. If they collide with an enemy during this dash, they will possess them. The body remains as a vulnerable shell until the spirit returns.

Possessing an enemy will make the player’s spirit disappear into the enemy. From there, a series of button presses will appear above the enemy. If they are pressed in the correct order, the enemy is cleansed and the player’s spirit is ejected back into the world.

I’ve represented these two pieces of logic with flowcharts. Flowcharts might not be the most appropriate for this, but I’m trying to re-familiarize myself with them.

Logic Flow from Spirit projecting to Enemy possession
Logic flow for Enemy cleansing

Making the Enemy Prefab

Every enemy needs to be possessable and cleansable. It stands to reason a base prefab is a great way to quickly and easily set the enemy logic, before layering on defining enemy characteristics. Straight from the Unity manual:

Unity’s Prefab system allows you to create, configure, and store a GameObject complete with all its components, property values, and child GameObjects as a reusable Asset. The Prefab Asset acts as a template from which you can create new Prefab instances in the Scene.

Nice.

This prefab is going to need two things: scripts to handle the logic, and a canvas to display the UI elements.

The Enemy Canvas

The first thing I did for the canvas was to create prefabs for the buttons that need to be pressed for cleansing. These consist of an Image component and a script that defines the sprite to show and the corresponding button to press. All the script does is set the image and tell another script which button to press. There are four prefabs, one for each direction on the keypad.

using UnityEngine;
using UnityEngine.UI;

public class CleanseButton : MonoBehaviour
{
    public Sprite buttonImage;
    public string buttonToPress;

    void Start()
    {
        GetComponent<Image>().sprite = buttonImage;
    }
}
Cleanse Button game object values

I chose this approach for the cleanse buttons to make adding new ones easy. If more difficult enemies demand more buttons on the keypad, then all I’ll need to do is add more prefabs in this fashion. These prefabs will be passed as parameters to the enemy logic script, dictating the possible buttons to press.

Our canvas is going to need a place to store these buttons, and to hold the timer. The hierarchy looks like so:

Enemy Canvas hierarchy

You’ll notice the Grid is empty. The object has a grid layout group on it, and the buttons will be populated as children at run time. This is to ensure randomness. These objects on the canvas are also disabled to start: they are only visible during possession. The slider has a script on it to start timing when it’s enabled (it also has a variable for timer duration).

using UnityEngine;
using UnityEngine.UI;

public class EnemyDamageTimer : MonoBehaviour
{
    public Slider slider;
    public float maxTime = 5f;
    private float timeRemaining;
    public EnemyDamage enemyDamage;

    private void OnEnable()
    {
        timeRemaining = maxTime;
    }

    void Update()
    {
        slider.value = CalculateSliderValues(); 

        if (timeRemaining <= 0)
        {
            enemyDamage.EjectFromEnemy();
        } else
        {
            timeRemaining -= Time.deltaTime;
        }
    }

    private float CalculateSliderValues()
    {
        return timeRemaining / maxTime;
    }
}

This script must be placed directly onto the slider object. This means the timer logic is visually separate from the enemy logic (you need to go into the prefab’s children to find it). It irks me slightly, but it works.

The Enemy Script

Now, the parent object of the canvas holds the big script. This script dictates which buttons can appear, and how many. On start, the script sets the buttons (which are still hidden) and waits for the enemy to be possessed. Once an enemy is possessed, the canvas is enabled (which also triggers the slider timer), and the script checks for button presses. Correct button presses remove that button from the grid. Once all the buttons are gone, we kill the enemy and jump out of them. Otherwise, the timer runs out and we jump out anyway.
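As a rough sketch of that check (this isn’t the actual script; pendingButtons, CleanseEnemy, and the exact ordering are stand-ins):

// pendingButtons is filled with the spawned CleanseButton instances when the grid is populated.
private List<CleanseButton> pendingButtons;

void Update()
{
    if (!possessed || pendingButtons.Count == 0) return;

    // The player works through the buttons in order, starting from the front of the list.
    CleanseButton next = pendingButtons[0];
    if (Input.GetKeyDown(next.buttonToPress))
    {
        Destroy(next.gameObject);   // correct press: remove the button from the grid
        pendingButtons.RemoveAt(0);

        if (pendingButtons.Count == 0)
        {
            CleanseEnemy();         // all buttons pressed: the enemy is cleansed...
            EjectFromEnemy();       // ...and the player's spirit is ejected
        }
    }
}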

Enemy Damage script editable variables

There are a few variables to tune here. I’ve listed them below with a short description for each one:

  • Mana On Cleanse: The energy returned to the player upon successfully cleansing the enemy
  • Enemy Grid: A reference to the game object holding the grid
  • Slider: A reference to the game object containing the slider
  • Number of Cleanse Buttons: The number of buttons that need to be pressed to cleanse the enemy
  • Possible Buttons To Press: An array of Cleanse Buttons that will make up the cleanse buttons
  • Punch X Max: Punch VFX on correct button press – maximum X offset
  • Punch Y Max: Punch VFX on correct button press – maximum Y offset
  • Punch Duration: Punch VFX on correct button press – length of effect
  • Possessed: A boolean that shows if the player is currently possessing the enemy

Most important are the possible buttons to press, and the number of cleanse buttons. These allow us to tune the enemy difficulty by changing which buttons can appear and how many need to be pressed.

The Final Result

I can tell already this is going to make creating new enemies a lot easier!
Here’s a basic enemy with four button presses needed in two seconds.

Four Button Cleanse

And here’s another one with fifteen button presses needed in five seconds.

Fifteen Button Cleanse

Having this base so easily customizable means enemies can be tweaked to allow for specific gameplay moments. What about small enemies that only need one button press? Line them up in a row and let the player quickly obliterate them all, jumping from one to the other. Maybe an enemy with a short timer and a long button sequence? You’ll have to be smart and possess them multiple times to finish them off.

There are a lot of possibilities, and I’m excited to see what I come up with!

Possible Changes

Like every project ever revisited, this enemy logic is screaming to be refactored.

I’d prefer the grid and slider variables in the enemy script to be found via code instead of the drag-and-drop method. The variables are already set in the prefab, so they’re set in every prefab instance, but removing the clutter from the Unity inspector would be nice. There is a slight possibility it could cause issues (how would the script differentiate between multiple sliders/grids if an enemy archetype needed many?), but it’s in the back of my mind.

The punch values being tweakable from the inspector seems unnecessary. For now it’s okay, since we can tweak them easily if necessary, but if they should be consistent between ALL enemies, then making the variables private constants would declutter that script in the inspector.

The last change that needs to happen is a major refactor of the script design. I think it would be cleaner if there were an IPossessable interface, with an abstract Enemy class that implements it. Then, each enemy archetype would inherit from the Enemy base class, and everything would be neat and tidy. I’m currently reading a book called “Clean Code: A Handbook of Agile Software Craftsmanship” – so I’m hoping that will motivate and guide me into tidying the enemy up even further.
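Sketched out, the shape I have in mind is something like this. None of it exists in the project yet, and ShieldedEnemy is purely an example archetype:

// The refactor I'm considering: a possessable contract plus a shared Enemy base class.
public interface IPossessable
{
    void Possess();
    void Cleanse();
}

public abstract class Enemy : MonoBehaviour, IPossessable
{
    public int manaOnCleanse;

    public virtual void Possess() { /* show the cleanse canvas, start the timer */ }
    public virtual void Cleanse() { /* grant mana, eject the spirit, play death effects */ }
}

// Each archetype then only overrides what makes it unique.
public class ShieldedEnemy : Enemy
{
    public override void Possess()
    {
        // e.g. only possessable once its shield has been broken
        base.Possess();
    }
}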

Anyway, that’s it for now. Until next time!

Creating an Item Catch mini-game in Unity

I’ve been wanting to practice my game making skills recently, and with the Christmas break, I finally had a chance to scratch that itch!

I wanted to craft something small, like a mini-game, as it would be easy to complete in a short amount of time. I was inspired by my love of the Pokemon Stadium mini-games. One that I love in particular is Chansey’s Egg Emergency, where you control the pink blob to ensure it doesn’t drop any precious eggs.

It was only fair that I show my appreciation for the adorable little guy by recreating the mini-game that cemented her as a front-runner for my favourite Pokemon.

Now, I didn’t go for a completely faithful recreation: I attempted to emulate the item dropping and item catching parts of the game, and reskinned it for release.

Chansey is able to collect eggs by tilting left, tilting right, or staying in the middle of her lane. Eggs and Voltorbs (bombs) drop from three places on the top of the screen, and the player must maneuver Chansey so she collects eggs and avoids Voltorbs. So, let’s see what we can do!

Creating droppable items with polymorphism and inheritance

To start I created an empty 2D project in Unity.

The droppable items fall into two categories: eggs and bombs. They both need a sprite, a sound when they are picked up, and need to do something when they collide with the player.

For this reason, I decided to have an abstract class called Item that derives from the ScriptableObject class, and then have Egg and Bomb classes that derive from the Item class.

using UnityEngine;

public abstract class Item : ScriptableObject
{
    public Sprite image;
    public AudioClip collectSound;
    public abstract void Interact(GameObject obj);
}

[CreateAssetMenu(fileName = "Egg", menuName = "Egg")]
public class Egg : Item
{
    public int points;

    public override void Interact(GameObject obj)
    {
        PlayerLogic playerLogic = obj.GetComponent<PlayerLogic>();
        if (playerLogic == null)
            throw new MissingComponentException("Missing PlayerLogic component on " + obj.name);

        playerLogic.AddPoints(points);
    }
}

[CreateAssetMenu(fileName = "Bomb", menuName = "Bomb")]
public class Bomb : Item
{
    public override void Interact(GameObject obj)
    {
        PlayerLogic playerLogic = obj.GetComponent<PlayerLogic>();
        if (playerLogic == null)
            throw new MissingComponentException("Missing PlayerLogic component on " + obj.name);

        playerLogic.TakeLife();
    }
}

(The Interact functions call some logic on the player; I’ll explain that below.)

Now that we have scriptable objects, we can create the objects, place them in our project folder, and create another script to hold our scriptable objects as variables.

The Scriptable Objects placed neatly in their folder.
public class DroppableItem : MonoBehaviour
{
    public Item item;
    public SpriteRenderer sprite;
    public float dropSpeed = 5f;

    private AudioSource audioSource;

    void Awake()
    {
        // grab the AudioSource on this object so the collect sound can be played on pickup
        audioSource = GetComponent<AudioSource>();
    }

    void Update()
    {
        transform.position -= transform.up * Time.deltaTime * dropSpeed;
    }

    void OnTriggerEnter2D(Collider2D collision)
    {
        if (collision.tag == "Player")
        {
            audioSource.PlayOneShot(item.collectSound);
            item.Interact(collision.gameObject);
            Destroy(gameObject);
        }
    }
}

Pay attention to the public Item item variable, and the item.Interact(collision.gameObject) lines. The declaration of type Item takes in both Eggs and Bombs, as they derive from the Item class. And again, calling the item.Interact function will either give the player points or take away lives, depending on the scriptable object passed in.

This is great because more droppable objects can be added while reusing the above script. All that’s required is for the new scriptable object to derive from the Item class and override the Interact function, and it can be dropped into the script without any other changes.
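For example, a hypothetical golden egg worth bonus points would only need this (GoldenEgg isn’t in the released game; it just shows the pattern):

[CreateAssetMenu(fileName = "GoldenEgg", menuName = "GoldenEgg")]
public class GoldenEgg : Item
{
    public int bonusPoints;

    public override void Interact(GameObject obj)
    {
        PlayerLogic playerLogic = obj.GetComponent<PlayerLogic>();
        if (playerLogic == null)
            throw new MissingComponentException("Missing PlayerLogic component on " + obj.name);

        // A new droppable only needs its own Interact behaviour; DroppableItem is untouched.
        playerLogic.AddPoints(bonusPoints);
    }
}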

Creating the Item Dropper

I created an empty game object and placed three empty game objects as children within that object. Those children were spaced out and represent the drop locations for the items to come.

A script was attached to the parent object, and this was called ItemDropper. It holds parameters for the items to drop, the drop locations, as well as all the parameters that change the drop settings.

Now, this file is a little big, but you can find the entire thing on the GitHub repository here. Through the iterations I had quite an ugly-looking piece of script. To some experienced coders, it’s probably still ugly now.

Anyways, it’s fairly simple in its workings: when the scene starts, this script will loop forever with an IEnumerator DropItems() coroutine, which drops items. Every loop it decides:

  • The items dropped
  • The amount of items dropped
  • The time between item drops

And those rates also change with each drop, based on the number of loops already executed. This way the difficulty increases the longer the game lasts.
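Stripped of the game-specific details, the loop boils down to something like this sketch. The helper names and the difficulty numbers are mine, not the real ItemDropper’s:

// Simplified drop loop; ChooseRandomItem and SpawnItemAt stand in for the real helpers.
IEnumerator DropItems()
{
    int wave = 0;
    while (true)
    {
        int itemsThisWave = Mathf.Min(1 + wave / 3, 5);                  // more items as waves go on
        float delayBetweenDrops = Mathf.Max(1.5f - wave * 0.05f, 0.4f);  // shorter gaps over time

        for (int i = 0; i < itemsThisWave; i++)
        {
            Transform dropLocation = dropLocations[Random.Range(0, dropLocations.Length)];
            SpawnItemAt(ChooseRandomItem(), dropLocation.position);
            yield return new WaitForSeconds(delayBetweenDrops);
        }

        wave++;
    }
}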

You probably see something really gross in that new refactored image. I’ll explain it at the end of this post (don’t judge).

There is also an Item Destroyer that destroys items when collided with. This is placed below the screen of the player, and ensures missed items aren’t left in the scene.

Now, let’s move onto the player.

Player

Player Movement

In the original Chansey’s Egg Emergency, Chansey only tilted left and right. Our interpretation is very similar. However, instead of tilting left and right, our character moves to the left and right parts of the screen.

Instead of using velocity to move the player, all we are going to do is move the player underneath the drop locations for items. We do this by matching the x position of the objects. The only stipulation (for later, when we do collision) is that the locations are far enough apart that the player doesn’t collide with items dropped in adjacent lanes.

If the player has no lives, they won’t be able to move. Otherwise, if they press left, they go under the left drop location. If they press right, they go under the right drop location. If they press none (or both at the same time), they stay in the middle.

    void Update()
    {
        // Dead players can't move.
        if (playerLogic.lives <= 0)
            return;

        // Snap the player under the matching drop location;
        // both keys (or neither) keeps them in the middle lane.
        if (Input.GetKey(KeyCode.LeftArrow) && Input.GetKey(KeyCode.RightArrow))
        {
            transform.position = new Vector2(dropLocationMiddlePos.x, transform.position.y);
        }
        else if (Input.GetKey(KeyCode.LeftArrow))
        {
            transform.position = new Vector2(dropLocationLeftPos.x, transform.position.y);
        }
        else if (Input.GetKey(KeyCode.RightArrow))
        {
            transform.position = new Vector2(dropLocationRightPos.x, transform.position.y);
        }
        else
        {
            transform.position = new Vector2(dropLocationMiddlePos.x, transform.position.y);
        }
    }

Ah, the illusion of movement. Gotta love it.

Player collision

Thanks to our work with the Scriptable Objects and the DroppableItem class, implementing the player collision is actually quite simple. The most important thing is to ensure all our objects (items and the player) have the required colliders, and that at least one object in each colliding pair has a Rigidbody2D so Unity’s 2D trigger events fire.

Since our DroppableItem class calls the Interact function, and the Egg and Bomb classes both override it with calls to functions from the PlayerLogic, all we have to do is ensure those functions exist within a PlayerLogic class, and attach it to the player.

These are the functions called by the droppable items. The PlayerLogic class has some other bells and whistles too, but you can find them on GitHub.

    public void AddPoints(int pointsToAdd)
    {
        animator.SetTrigger("CollectedEgg");
        points += pointsToAdd;
        UpdatePointsText();
    }

    public void TakeLife()
    {
        animator.SetTrigger("CollectedBomb");
        lives--;
        UpdateLivesText();
        if (lives <= 0)
        {
            Death();
        }
    }

I think that’s pretty much all that’s necessary. And if I haven’t mentioned it before, you can find the entire project on GitHub.

Polish

So, here’s what we have after all that:

It looks okay, but it doesn’t really pop. So, let’s get some art made! baelfin of https://www.baelfin.com/ created some art assets to really make the game pop. After that, I added some audio and particle effects, and here is the final result:

That’s much better! If you’re on a PC and want to test it out, you can do so here:

Egg Catch by impojr

Postmortem

This probably isn’t a proper postmortem, but it’s always good to reflect on what went well and what didn’t!

What went well

Polymorphism and abstraction implementation: I’m happy with how the droppable items were coded. This was my first time actually using these practices outside of theory, and it really makes for clean, robust code. I suspect the reason I hadn’t used them before is that I hadn’t made something that could benefit from them, but going forward I’ll reach for polymorphism wherever it’s required.

Using Scriptable Objects: This is also the first time I’ve used Scriptable Objects properly within Unity. I have followed tutorials on them before, but this is the first time where I have actually seen a use for them in one of my projects, and implemented them into scripts.

Source Control: This one is a little silly, since it’s just me on the project, but I made sure to push to GitHub regularly while working on this little mini-game. The goal was to commit each new feature separately, but I think a few got bundled together. I did have to roll back some changes once or twice, and that was very easy thanks to the source control. Regardless, there’s now a snapshot of each iteration of development I can go back to if needed.

What didn’t go well

ref parameter in the ItemDropper class: You may have picked this up in the refactoring image above, but I passed a ref parameter into the RandomlyDropBombFromPercentageAndUpdateDropChance function while refactoring it. Using a ref parameter isn’t ideal; the best resolution would probably be to restructure the function so the ref parameter isn’t needed at all. However, this is a very small project: there’s no chance another developer is going to call this function, pass in a variable, and have it unexpectedly change on return. So I think it’s not too bad in this case; a sketch of one way to remove it is below.
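For illustration, one way to remove the ref is to have the function return the updated drop chance instead of mutating its argument. The bodies below (including the DropBomb helper) are hypothetical stand-ins, not the actual repository code:

    // Before (hypothetical body): the caller's drop chance is mutated through the ref parameter.
    void RandomlyDropBombFromPercentageAndUpdateDropChance(ref float bombDropChance)
    {
        if (Random.value < bombDropChance)
        {
            DropBomb(); // hypothetical helper that spawns a bomb
        }
        bombDropChance += 0.01f;
    }

    // After: the updated value is returned, so nothing changes behind the caller's back.
    float RandomlyDropBombAndGetNewDropChance(float bombDropChance)
    {
        if (Random.value < bombDropChance)
        {
            DropBomb();
        }
        return bombDropChance + 0.01f;
    }

    // Usage: bombDropChance = RandomlyDropBombAndGetNewDropChance(bombDropChance);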

Making the game mobile friendly: When you look at the finished product, it looks like it should work on mobile. The dimensions are those of a smartphone, yet if you navigate to the itch.io link for the game on a mobile device, you won’t be able to play it. If I were to continue development on this project, my first upgrade would be to add touch controls so it can actually be played on mobile.

Taking more pictures and videos during development: It sure would be a lot easier to show the development progress of the game if I had taken more images and videos during development. Oh well, next time.


I hope you enjoyed this little write-up! If you have any feedback on the game, or any tips on development, or this write-up, please let me know!

Feel free to comment below, or contact me on LinkedIn.

Until next time,
Adrian

DEV Blog: Fast Prototyping courtesy of the Unity Asset Store

a wheel
The humble wheel

Ah, the humble wheel. Look at it, in all its wheel-y glory. It’s quite a thing of beauty. Someone a very long time ago made it, in flawless fashion, and it hasn’t been reinvented since.

Speaking of not reinventing things, I have another prototype I’m developing. It’s quite straightforward in its nature: download it on your phone, and exercise to walk a virtual pet. Simple. For now, I’ve named it Dog Walker. So you’ll understand why I have made absolutely nothing special for this, and instead relied almost exclusively on existing Unity assets and packages to make a prototype as quickly and painlessly as possible.

I’ve done this for a multitude of reasons, but it ultimately came from thinking about what I wanted to create and what I needed to do to get there.

What do I want to create?

As stated above, a little phone app that motivates you to exercise via a virtual pet is the goal. That’s not too revolutionary. I’ve done something of the sort before, with Pokemon and various other mediums (see my Fitbit, iPhone, and web apps). This time I wanted to familiarise myself with Unity, and distance myself from piggybacking on my love of catchable pocket monsters.

I’ve thought about the features and created a prioritisation chart à la the MoSCoW method. The MoSCoW method has helped me separate what this app has to do in order to function from the things that would merely enhance that experience. Ultimately, anything outside the essentials can be worked on later.

Dog Walker must

  • Be deployable to both iOS and Android mobile devices
  • Use the device’s built-in functions to access a user’s steps
  • Use the steps to walk virtual pets
  • Be able to be played without an internet connection
  • Be able to track a walk even with the app off

Dog Walker should

  • Have multiple dogs for the user to unlock and walk
  • Have a levelling system to see how much a dog has been walked
  • Have an energy system to stop the dog from being constantly walked
  • Have an affection system to show the dog’s friendship level with the user

Dog Walker could

  • Be able to gather steps from other user devices (for example, Fitbit or Garmin devices)

Dog Walker won’t (at this time)

  • Have any online capabilities
  • Allow step races – where the user competes with either AI or other users for the most steps within a certain amount of time

What do I need to achieve this?

Now, looking at what I’ve defined above, this doesn’t seem like a difficult thing to implement (though I may be getting ahead of myself). What I mean is that there’s no need for collision detection, no perfectly timed button inputs, and no player/enemy/boss logic. A lot of normal game logic simply doesn’t apply here.

All that has to be done is to take a user’s steps and then do something with them. Yes, it’s going to need to work on multiple devices. Yes, it’s going to have to persist data even when it’s off. But at the core it’s just about the steps.

So, building a functioning prototype that actually gets a user’s steps is probably going to be the hardest part. It’s going to require some code native to the deployment devices (to actually return real values). For now, I can create a prototype that works in every other respect, and then plug in the relevant code when the time comes (a sketch of that idea is below). To do this, I’ve relied on two separate items from the Asset Store: DoozyUI and EasySave.
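As an illustration of the “plug it in later” idea, the step source could sit behind a small interface, with a fake implementation for the prototype and native iOS/Android implementations swapped in when the time comes. All names here are hypothetical, not from the actual project:

    using System;

    // Hypothetical sketch: the prototype talks to an interface, and device-native
    // step code can be slotted in behind it later without touching the rest of the app.
    public interface IStepProvider
    {
        int GetStepsSince(DateTime start);
    }

    // Fake provider for prototyping: pretends the user walks a constant 100 steps per minute.
    public class FakeStepProvider : IStepProvider
    {
        public int GetStepsSince(DateTime start)
        {
            double minutes = (DateTime.Now - start).TotalMinutes;
            return (int)(minutes * 100);
        }
    }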

Doozy UI
DoozyUI for Unity

Easy Save
Easy Save for Unity

DoozyUI is a UI management system, which helped with transitioning to and from each Canvas with relative ease.

EasySave allowed for easy serialization of object classes, providing the necessary data persistence.
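As a rough idea of what that looks like, saving and loading a serializable class is roughly one call each way. This is a minimal sketch assuming Easy Save 3’s ES3.Save/ES3.Load API; the DogData class and the “currentDog” key are hypothetical, not from the actual project:

    using UnityEngine;

    // Minimal sketch assuming Easy Save 3's ES3.Save/ES3.Load API;
    // DogData and the "currentDog" key are hypothetical, not from the actual project.
    [System.Serializable]
    public class DogData
    {
        public string dogName;
        public int level;
        public float energy;
    }

    public class DogSaver : MonoBehaviour
    {
        public DogData currentDog = new DogData();

        void OnApplicationPause(bool paused)
        {
            if (paused)
                ES3.Save("currentDog", currentDog);              // persist when the app is backgrounded
            else
                currentDog = ES3.Load("currentDog", currentDog); // reload (or keep the default) on resume
        }
    }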

Overall both were fairly easy to use, and with the Black Friday sales on the marketplace it was a no-brainer. I probably could have spent some time researching and implementing my own UI and save management systems, but I’m sure these assets do it better.

It’s a question I always revisit when trying to develop: Is it worthwhile investing in an existing asset for what I am trying to achieve? In this case, the existing assets won over.

Combining these two assets resulted in a pretty rapid prototype. I did have to mess around with a few things, but since the saving and canvas transitioning are pretty basic (thanks in no small part to EasySave and DoozyUI), there was minimal fuss.

Selecting a dog in Dog Walker

Walking a dog in Dog Walker

The above gifs show two particular workflows from the app, with little Pokemon dogs in lieu of actual assets.

The first one shows how to select your “main dog” (i.e. the one to walk). From the main screen, the little dog icon takes you to the dog selection screen. From there, you can swipe across dogs and select one to walk. This updates the main screen with that dog, which you can return to via the back button.

The second one is the walk workflow. The current step calculation is just a constant number, but this shows the walk confirmation, walk-in-progress, and walk-complete screens. If you shut down the app on the walk-in-progress screen, loading it back up will send you back there (with the time labels updating correctly). It also incorporates the energy, level, and friendship elements from the “Should have” section of my MoSCoW list.

Honestly, the time it has taken to create this blog post is probably on par with the prototyping time. The use of DoozyUI and EasySave sped up the process an incredible amount, which now lets me focus on the parts of the app that need their due diligence. UI design and step implementation can now be tackled with minimal interference!

Until then!

DEV Blog: Spirit Cleanser Game Mechanics

All the best games are easy to learn and difficult to master. They should reward the first quarter and the hundredth.

— Atari founder Nolan Bushnell, on the subject of video game design.

This particular aphorism is oft repeated, yet it’s still a statement I agree with. As a designer, one of my main goals is to create mechanics that are accessible to everyone, while letting those who truly understand their limits push the ceiling as high as it can go.

Let’s use Overwatch as an example. Overwatch is a team-based first-person shooter with a character called Lucio. Lucio has the ability to ride walls and speed boost his team. Not too difficult to understand. I thought I grasped the idea pretty well. But then you have professional streamers: they show complete mastery of these elements, using them to become untouchable murder machines riding rings around the enemy team.

Wall riding can be difficult to learn…
A great Lucio example
But you can use it to great effect.

That’s why, in developing this prototype, my intention is to have game mechanics that can be combined for riveting gameplay. I’ve been calling it “Non fare malocchio”, since it’s inspired by the Italian folklore of the evil eye, but “Spirit Cleanser” is a bit more to the point. Essentially, in this game you project your spirit out into the world to “possess” and “cleanse” evil spirits from enemies.

There are three core mechanics to the gameplay that can be combined: spirit projection/dashing, enemy cleansing, and spirit leaping. Here’s how they work, and how they complement each other:

Spirit Projection/Dashing

The player projects their spirit and traverses the world with it, leaving their body behind. When projected, the spirit dashes out into the world at high speed, and it can be projected in any direction (a rough sketch follows the gif below).

Spirit Projection
The player can project their spirit.
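Here is a rough sketch of how the projection/dash might be wired up, assuming a 2D setup; the class name, keys, and values are illustrative, not the actual prototype code:

    using UnityEngine;

    // Illustrative sketch of the spirit projection/dash, assuming a 2D setup.
    public class SpiritProjection : MonoBehaviour
    {
        public Rigidbody2D spiritPrefab;   // the spirit object that gets dashed out
        public float dashSpeed = 20f;

        void Update()
        {
            if (Input.GetKeyDown(KeyCode.Space))
            {
                // Project the spirit in the held direction, leaving the body behind.
                Vector2 direction = new Vector2(Input.GetAxisRaw("Horizontal"), Input.GetAxisRaw("Vertical")).normalized;
                if (direction == Vector2.zero)
                    direction = Vector2.right; // default direction if no input is held

                Rigidbody2D spirit = Instantiate(spiritPrefab, transform.position, Quaternion.identity);
                spirit.velocity = direction * dashSpeed;
            }
        }
    }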

Enemy Cleansing

If the spirit collides with an enemy while dashing, it will “possess” them and be able to “cleanse” them. This is done through a series of button presses (a small sketch of the idea follows the gif below).

Enemy Cleansing
The player’s spirit can cleanse enemies.
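As a sketch of the idea, the cleanse could be checked as a short key sequence; the keys, class name, and feedback here are assumptions, not the prototype’s actual code:

    using UnityEngine;

    // Illustrative sketch of a cleanse as a short button-press sequence.
    public class CleanseSequence : MonoBehaviour
    {
        public KeyCode[] sequence = { KeyCode.J, KeyCode.K, KeyCode.J };
        private int progress;

        void Update()
        {
            if (progress >= sequence.Length) return;

            if (Input.GetKeyDown(sequence[progress]))
            {
                // Correct key: advance through the sequence.
                progress++;
                if (progress >= sequence.Length)
                    Debug.Log("Enemy cleansed!");
            }
            else if (Input.anyKeyDown)
            {
                // Wrong key: start the sequence over.
                progress = 0;
            }
        }
    }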

Spirit Leaping

To reunite the body and spirit, the body leaps from its current position to the spirit. If the leap button is held down, extra momentum is added to the body once it reconnects with the spirit (a sketch of the held-button momentum follows the gif below).

Spirit Leaping
The player’s body can leap back to the spirit.
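Here is a rough sketch of the held-button momentum, again assuming a 2D setup; all names, keys, and values are illustrative:

    using UnityEngine;

    // Illustrative sketch of the leap-to-spirit mechanic with extra momentum for a longer hold.
    public class SpiritLeap : MonoBehaviour
    {
        public Rigidbody2D body;
        public Transform spirit;
        public float baseLeapForce = 10f;
        public float bonusForcePerSecondHeld = 5f;
        public float maxBonusForce = 15f;

        private float heldTime;

        void Update()
        {
            // Track how long the leap button has been held.
            if (Input.GetKey(KeyCode.K))
                heldTime += Time.deltaTime;

            if (Input.GetKeyUp(KeyCode.K))
            {
                // Leap the body towards the spirit, adding extra momentum for a longer hold.
                Vector2 direction = ((Vector2)(spirit.position - body.transform.position)).normalized;
                float speed = baseLeapForce + Mathf.Min(maxBonusForce, heldTime * bonusForcePerSecondHeld);
                body.velocity = direction * speed;
                heldTime = 0f;
            }
        }
    }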

Combining the spirit dashing, enemy cleansing, and leaping to spirit mechanics makes for a game experience that is easy to learn yet difficult to master.

The game mechanics combined
Combining these elements makes for complex and rewarding gameplay.

I’m hoping that these base mechanics are enough to carry my idea while I flesh out the prototype more. No doubt, they will need to be revisited, and then revisited again, but for now the building blocks of what I’m trying to achieve are in a playable scene: that ticks the box for me.

My next goal is to create some more enemies to really test the mechanics and create captivating game moments! These elements seem to be fun now, but they have to hold up to a multitude of different enemies and environments.

Until next time!