As a developer, you’re almost perpetually flat to the boards. There is always too much to do, too many fires, too many needed fixes. You can’t create more time in the day, or more days in the week. But you can do the next-best thing: eliminate repetitive tasks using automation. Welcome to the wonderful world of jidoka.
Special thanks to Tom Ketola for his feedback and suggestions on this post.
By Reading This Post, You’ll Learn
- What jidoka is
- How you can apply it to game development using automated testing
- The various forms of automated testing
- The value of continuous integration
- Using automated crash reporting tools as another form of jidoka
Jidoka (自働化)
Jidoka translates as “autonomation” or, more literally, “automation with a human touch”. In practical terms, it means automated processes that can assess quality and take action if necessary. At Toyota, jidoka systems scan cars as they move along the production line. If the systems detect any defects, they alert their human overlords. If a defect is severe enough, the jidoka bots can actually stop the production line entirely.
Jidoka for Game Development
The closest analog for jidoka in game development (and software development in general) is automated testing in all its forms. When considering automated testing, it’s helpful to think about the relative strengths of computers versus human brains. As a computer science professor once put it when I was in college, computers are stupid, but they’re never wrong. They will precisely execute whatever you give them to execute at ever increasing processing speeds. Humans make mistakes and work at a snail’s pace relative to computers. But we’re really good at recognizing patterns, even highly abstract ones. Our brains are wired to find the signal in the noise.
For instance, think how many different forms tables take. Some have four legs, some have more, some fewer. There are round tables and rectangular tables. Tables for playing cards, tables for eating, tables for surgery. Short tables, tall tables, and standing tables. But how do you differentiate those tables from desks, or benches, or stools? Simple: you recognize the pattern of attributes that signifies “table”. The same goes for recognizing a face (even one obscured by distortion or image processing), or differentiating classical music from dubstep. For a computer to identify or differentiate those kinds of items takes a lot of code, a lot of error correcting, and a lot of experimentation.
The goal of automated testing is to divvy up work according to strengths. Let computers do what they do best: precisely perform designated tasks quickly and repeatedly. Give them the brute force work. This reduces the testing load on us humans and allows us to do what we do best: think creatively about how to break builds and find issues that are too nuanced for a computer to find programmatically.
Unit Tests
Unit tests are the most micro form of automated testing. They check code at the level of methods and functions. Provide each function with a discrete input and verify the discrete output. In short, the goal of running unit tests is to ensure that each method or function is providing output within an expected range.
A unit test for a given function verifies that it accepts the inputs it needs to and provides the outputs it’s supposed to. Often, unit tests are written by and primarily for the engineer writing the code.
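To make that concrete, here’s a minimal sketch of what a unit test can look like, using Python’s built-in unittest module. The apply_damage function is invented purely for illustration, not code from any particular engine:

```python
import unittest

# Hypothetical game function under test: reduces a character's health,
# clamping at zero so health never goes negative.
def apply_damage(health: int, damage: int) -> int:
    return max(health - damage, 0)

class ApplyDamageTests(unittest.TestCase):
    def test_reduces_health_by_damage(self):
        self.assertEqual(apply_damage(100, 30), 70)

    def test_never_returns_negative_health(self):
        self.assertEqual(apply_damage(10, 50), 0)

    def test_zero_damage_leaves_health_unchanged(self):
        self.assertEqual(apply_damage(100, 0), 100)

if __name__ == "__main__":
    unittest.main()
```

Each test feeds the function a known input and asserts on the exact output, including the edge case where the damage exceeds the remaining health.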
Integration Tests
Once unit tests verify that the individual functions and methods are performing as expected, integration tests verify that those methods and functions are interacting properly. In other words, you’re verifying that they are properly integrated. The goal of integration testing is to ensure that the submitted code is behaving as expected at the programmatic level.
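Continuing the hypothetical example from above, an integration test checks that units which passed their own tests also play nicely together. In this sketch, a Health component and the Character class that wraps it are both invented for illustration:

```python
import unittest

# Unit #1: a standalone health component.
class Health:
    def __init__(self, value: int):
        self.value = value

    def apply_damage(self, damage: int) -> None:
        self.value = max(self.value - damage, 0)

# Unit #2: higher-level game logic that wires Health into a character.
class Character:
    def __init__(self, health: int):
        self.health = Health(health)
        self.is_alive = True

    def take_hit(self, damage: int) -> None:
        self.health.apply_damage(damage)
        if self.health.value == 0:
            self.is_alive = False

# The integration tests exercise the two units working together.
class CharacterIntegrationTests(unittest.TestCase):
    def test_lethal_hit_kills_character(self):
        hero = Character(health=20)
        hero.take_hit(25)
        self.assertFalse(hero.is_alive)

    def test_non_lethal_hit_leaves_character_alive(self):
        hero = Character(health=20)
        hero.take_hit(5)
        self.assertTrue(hero.is_alive)

if __name__ == "__main__":
    unittest.main()
```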
Functional Tests
Functional tests check that the submitted code is behaving properly from the end user’s perspective. How is the whole system functioning, with particular attention paid to the points where different modules interact? Basically, do you have a well-oiled machine, or are your gears grinding into each other?
Functional tests can be designed to verify specific systems in a bespoke or ad hoc fashion. For instance, a scene that launches simple geometry instances at each other to test collision detection or other simulated physics, or a dedicated test to verify that submitted art assets fall within established technical constraints (poly counts, bone counts, texture sizes, etc.).
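As a sketch of that second example, here’s what an automated asset-validation check might look like. The budget numbers and the CharacterAsset fields are made up for illustration; a real pipeline would pull them from your engine’s asset database:

```python
from dataclasses import dataclass

# Hypothetical technical budgets for character assets.
MAX_TRIANGLES = 15_000
MAX_BONES = 75
MAX_TEXTURE_SIZE = 2048  # pixels per side

@dataclass
class CharacterAsset:
    name: str
    triangle_count: int
    bone_count: int
    texture_size: int

def validate_asset(asset: CharacterAsset) -> list[str]:
    """Return a list of human-readable budget violations (empty list = pass)."""
    problems = []
    if asset.triangle_count > MAX_TRIANGLES:
        problems.append(f"{asset.name}: {asset.triangle_count} triangles exceeds budget of {MAX_TRIANGLES}")
    if asset.bone_count > MAX_BONES:
        problems.append(f"{asset.name}: {asset.bone_count} bones exceeds budget of {MAX_BONES}")
    if asset.texture_size > MAX_TEXTURE_SIZE:
        problems.append(f"{asset.name}: {asset.texture_size}px texture exceeds budget of {MAX_TEXTURE_SIZE}px")
    return problems

if __name__ == "__main__":
    hero = CharacterAsset("hero", triangle_count=18_000, bone_count=60, texture_size=2048)
    for problem in validate_asset(hero):
        print(problem)
```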
On the other hand, an example of functional testing at the macro level is a bot to play the game for you. When I was at Wideload Games, working on Avenger’s Initiative (an Infinity Blade clone featuring the Incredible Hulk), my friend Nick put together a friendly bot we named “Auto-Hulk”. And Auto-Hulk just played the game. He played it perfectly, not to mention faster than any human ever could. Endlessly, for weeks. And that green-skinned bastard tripped over so many defects we would otherwise have had to find through manual testing.
Regression Tests
Much as with the human-oriented version, automated regression testing inspects pieces of code that have previously been tested to ensure that changes elsewhere in the code base haven’t created new problems. This can involve re-running unit tests on sections of code once you’ve merged them into the codebase. Alternatively, automated regression testing can involve scripts that check for issues found by human testing to ensure that, once those issues are fixed, they never crop up again. Or, as my friend Tom Ketola puts it, “Once I fix something, I never want to have to fix it again.”
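Here’s a hedged sketch of that idea. Imagine a past bug where negative damage values accidentally healed the target; once the fix lands, a regression test named after the bug report pins it in place forever (the bug, the function, and the report number are all hypothetical):

```python
import unittest

# Hypothetical fixed function: a past bug let negative damage values *heal*
# the target. The clamp below is the fix we never want to lose.
def apply_damage(health: int, damage: int) -> int:
    damage = max(damage, 0)          # the fix: ignore negative damage
    return max(health - damage, 0)

class RegressionTests(unittest.TestCase):
    def test_bug_1234_negative_damage_must_not_heal(self):
        # Named after the (hypothetical) bug report it guards against.
        self.assertEqual(apply_damage(50, -30), 50)

if __name__ == "__main__":
    unittest.main()
```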
Another useful aspect of regression testing is that it can seed the lower level unit, integration, and functional testing suites. Got an issue that regularly pops up in code submissions? Instead of constantly checking for it in regression tests, front load it in the unit/integration/functional tests as appropriate. In other words, catch it as early as possible.
Continuous Integration
Beyond topics that are directly analogous, we also have practices that are in keeping with the spirit of jidoka – tools that put brute force operations onto machines. Top of my list would be continuous integration. As a general rule, batching is bad from an operations standpoint, and large batches are even worse. I’ll cover this in more detail in my post on heijunka, but the notion includes batching large numbers of changes into the same build push.
Put simply, the more changes you include in a new build, the harder it will be to sort out the culprit if it breaks.
On the other hand, if you ensure that developers are regularly submitting changes to the build (rather than holding onto a large bundle of changes), and your build machine is automatically generating and verifying builds regularly throughout the day, it becomes much easier to find any offending submissions.
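In spirit, the build machine’s job boils down to a loop like the sketch below. The specific commands (a git pull, a build.py script, a pytest run) are placeholders; in practice you’d lean on a CI service or a dedicated build farm rather than rolling your own:

```python
# A bare-bones sketch of what a continuous integration loop does on every
# submission. The commands below are placeholders, not a real pipeline.
import subprocess
import sys

STEPS = [
    ("sync latest changes", ["git", "pull"]),
    ("build the game", ["python", "build.py"]),          # placeholder build script
    ("run automated tests", ["python", "-m", "pytest"]),  # assumes pytest is installed
]

def run_pipeline() -> bool:
    for name, command in STEPS:
        print(f"--- {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Jidoka in action: stop the line and flag the humans.
            print(f"BUILD BROKEN at step: {name}", file=sys.stderr)
            return False
    print("Build verified.")
    return True

if __name__ == "__main__":
    sys.exit(0 if run_pipeline() else 1)
```

The key property is that the loop stops at the first failing step and reports it, so the offending submission is caught while the list of suspects is still short.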
Automated Crash Reporting Tools
Another example of effective jidoka for games is an automated crash reporting tool. There’s nothing worse than smashing your head into a wall for hours or days on end trying to reproduce a rare but critical defect. But if your build can fire off a crash report before the executable dies, if it can give you some kind of forensic record of what was happening when the failure occurred, you can remove some of the unknowns from the equation and narrow your search.
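As a toy illustration of the idea, here’s a Python sketch that installs a global exception hook and writes a timestamped forensic record before the process dies. A real game would capture far more (build number, platform, recent log lines, player state), but the principle is the same:

```python
# Minimal crash-reporting sketch: when an unhandled exception kills the
# process, write a forensic record before dying. Report path and contents
# are illustrative only.
import sys
import traceback
from datetime import datetime

def write_crash_report(exc_type, exc_value, exc_traceback):
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    with open(f"crash_{timestamp}.log", "w") as report:
        report.write(f"Crash at {timestamp}\n")
        report.write("".join(traceback.format_exception(exc_type, exc_value, exc_traceback)))
    # Hand off to the default handler so the traceback still prints.
    sys.__excepthook__(exc_type, exc_value, exc_traceback)

# Install the hook once at startup; any uncaught exception now leaves a report behind.
sys.excepthook = write_crash_report

if __name__ == "__main__":
    raise RuntimeError("simulated crash for demonstration")
```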
That All Sounds Expensive
It is. But, as I said in Part 9, don’t just ask what it costs. Ask what it saves. The more you leverage automation, the more time you can free up to deal with problems computers can’t solve. And the earlier you implement these procedures, the easier they are to put in place and the more cumulative time they can save you over the length of a project.
Jidoka’s Impact on Muda
If we look at the seven forms of muda from Part 9, jidoka has a strong impact on two of them: defects and build times. Jidoka won’t stop defects from occurring, but it will catch them earlier, meaning you can fix them when it’s cheapest: as soon as possible (see: “The Time Value of Fixes“). Continuous integration, on the other hand, can reduce build times indirectly. By outsourcing builds to a dedicated, automated machine, you free up bandwidth for the developers who would otherwise have to push builds locally. And creating builds more often means the list of changes per build is smaller, which means that recovering from a broken build will be faster.
Further Reading If You Enjoyed This Post
The Time Value of Fixes, Or: A Fix In The Hand Is Worth Two In The Bush
User Stories Make For Better Consensus
Where Do We Go From Here?
We’ve taken the time to reduce human error using poka-yoke. We’re utilizing kanban to reduce our work-in-progress and improve our flow time efficiency. Now we’re leveraging scripts to automate the brute force elements of testing. The next step on our journey is establishing a lean, disciplined approach to QA testing.
Key Takeaways
- Jidoka refers to automation that is capable of making decisions and/or providing meaningful feedback
- In the context of game and software development, jidoka comes in a few forms
- Unit tests check individual classes, methods, and functions to ensure they are behaving as expected
- Integration tests build on unit tests by ensuring that individual functions work together as expected
- Functional testing ensures that submissions behave as expected from an end user perspective
- Regression testing repeats unit tests on the code base and ensures that previously resolved defects don’t recur
- Continuous integration allows developers to merge changes more often, making broken builds easier to isolate
- Automated crash reporting tools are another form of jidoka
If You Enjoyed This Post, Please Share It!
Return to the “Game Planning With Science” Table of Contents
“Jidoka: Putting The Robots To Work – Game Planning With Science! Part 12” by Justin Fischer is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.