
🦋 Log Replay Comparison

FRC teams have access to multiple logging tools that feature "replay" capabilities. These fall into the categories of deterministic replay (AdvantageKit, PyKit) and non-deterministic replay (Hoot Replay). Each type of replay framework offers significantly different capabilities with regard to determinism, playback functionality, and code structure. This page compares these tools to help teams understand their key differences.

note

Many non-replay logging options are also available (such as WPILib data logging and Epilogue), but this page focuses exclusively on replay-compatible logging tools.

🔒 Determinism

The biggest difference between replay frameworks is the ability of each tool to replay robot code logic in a way that is consistent, trustworthy, and robust to timing inconsistencies.

| Deterministic (AdvantageKit, PyKit) | Non-Deterministic (Hoot Replay) |
| --- | --- |
| The replayed robot code will always match the behavior of the real robot. The results of replay can be trusted completely to match the actual behavior of the robot. | No guarantees are made about the accuracy of replay in simulation. Data may arrive in replay at different times or at different rates than the real robot, which impacts the accuracy of all parts of the robot code. |

Determinism has a major impact on the practicality of log replay, since running simulation faster than real-time is a core part of the debugging process in practice. The accuracy of deterministic replay is unaffected by replay speed, while the accuracy of non-deterministic replay decreases when running at faster rates.

Why Does It Matter?

We are often asked by teams why they should care about deterministic replay. The short answer: non-deterministic replay creates butterfly effects that severely impact the accuracy of replay.

🦋 The Butterfly Effect 🦋

The butterfly effect describes how small differences in the inputs to a complex system (like robot code) have ripple effects that can significantly impact the system's behavior in the future. Minor differences in inputs can have a much larger effect on outputs than one might intuitively expect.

The sequence below provides a simple example of how non-deterministic inputs can impact important parts of the robot code:

  1. A vision measurement from a camera is lost or delayed due to non-deterministic replay.
  2. When combined with odometry data in a pose estimator, the estimated pose of the robot is incorrect for one or more loop cycles.
  3. An auto-align command waits for the robot to be within tolerance before scoring. This is a precise operation where errors of less than a centimeter can have a major impact.
  4. The driver presses a button to score just after the real robot is within tolerance. Since the replayed robot's pose is inaccurate, the auto-score command rejects the button input in replay (even though it was accepted on the real robot).
  5. The superstructure of the robot is now being commanded to a different state on the real robot and in replay, since only the real robot continues the scoring operation.
  6. Setpoints to individual mechanisms are now drastically different between the real robot and replay, and do not match the inputs (e.g. encoders) provided to replay. Any tolerance checking of mechanisms is likely to be nonfunctional for the rest of the replay.
  7. Future control inputs will not be correctly obeyed in replay, since the states of many commands and subsystems no longer match the real robot.
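
To make this concrete, the sketch below shows what a check like step 3 might look like. This is a minimal, hypothetical example (the class, method names, and tolerance value are illustrative, not from any team's code): a pose estimate that shifts by barely a centimeter in replay flips the boolean result, so replay rejects a button press that the real robot accepted, and everything downstream diverges.

```java
import edu.wpi.first.math.geometry.Pose2d;

/**
 * Hypothetical auto-align gate illustrating steps 2-4 above. If replay
 * drops or delays a single vision measurement, the estimated pose shifts
 * slightly, this check flips from true to false, and every downstream
 * command diverges from the real robot.
 */
public class AutoScoreGate {
  // Illustrative tolerance; real values depend on the mechanism.
  private static final double TOLERANCE_METERS = 0.01;

  /** Returns true when the estimated pose is close enough to score. */
  public static boolean withinTolerance(Pose2d estimated, Pose2d target) {
    double errorMeters =
        target.getTranslation().getDistance(estimated.getTranslation());
    return errorMeters < TOLERANCE_METERS;
  }
}
```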

This scenario may seem specific, but similar divergences are almost inevitable when replaying robot code of moderate complexity. The testing in the next section demonstrates what this effect looks like in practice.

To demonstrate the impact of deterministic replay, the graphics below show real log data from Team 6328's 2025 robot. To represent each category of replay, this data is based on AdvantageKit's deterministic replay (running ~50x faster than real-time) and a close approximation of Hoot Replay (running <5x faster than real-time).

First, the image below shows a few key fields from the AdvantageKit replay. Outputs from the real robot are in blue 🔵 and outputs from AdvantageKit replay are in yellow 🟡. The line graph shows the commanded setpoint of the elevator. From bottom to top, the discrete fields show the enabled state, superstructure state, and whether the robot is in tolerance for scoring. Every field displayed here is an exact match between real and replay, providing complete trust in the accuracy of the data.

AdvantageKit Replay

By contrast, the image below shows the same fields with a close approximation of Hoot Replay running 5x faster than real-time. This is still about 10x slower than AdvantageKit and largely impractical for real debugging workflows. The example shown here is also a best-case scenario, including extensive modifications to the code that compensate for the difference in replay speed.

Outputs from the real robot are in blue 🔵 and outputs from an approximation of Hoot Replay are in green 🟢. Within a few seconds of starting the autonomous routine, the state of the robot has completely diverged between real and replay due to the butterfly effect. This significantly reduces the value of the log data for debugging, as it no longer resembles the original behavior.

Hoot Replay (5x, modified)

Keep in mind that replayed outputs are most useful when the equivalent values were not recorded by the real robot (i.e. there is no reference point to verify accuracy). For that critical use case, there is no way to distinguish accurate outputs from the inaccurate, diverged outputs shown above. This undermines the core purpose of replay, as the outputs cannot be trusted for debugging.

More Details

The graphs above show the results of replaying 5x faster than real-time with additional modifications to compensate for loop cycle time, though these changes would not be part of a typical robot project. We have provided several other test cases to demonstrate the impact of different replay settings:

5x faster than real-time, typical robot project:

Hoot Replay (5x, unmodified)

2x faster than real-time, typical robot project:

Hoot Replay (2x, unmodified)

2x faster than real-time, compensated:

Hoot Replay (2x, modified)

Note that even the very best case shown in the last graph still breaks down completely midway through the match, and is unable to replay critical fields like the auto scoring tolerance.

What about other fields?

It is true that some fields are more affected by replay inaccuracy than others. For example, the graph below compares the X position of drive odometry between the real robot and Hoot Replay running 5x faster than real-time. Odometry is only affected by the drive motors, so it is less subject to the butterfly effect than other parts of the code (though it still drifts several feet by the end of the match).

Log replay is most helpful when untangling complex code logic that is nontrivial to recreate without the full set of input data, as demonstrated even in our simplest examples. Odometry and other trivial fields are a partial exception to the butterfly effect, but (as noted above) replay is used in practice precisely when no reference point exists, so it is never possible to distinguish non-deterministic outputs that are slightly inaccurate (like odometry) from the majority of outputs that are completely inaccurate.

Odometry: Hoot Replay (5x, modified)

What about skipping in time?

The section below explains why rapid iteration and running faster than real-time are critical to any replay workflow, which is why the examples above demonstrate Hoot Replay at accelerated speeds. However, one could also start the replay at a later point in the log file to work around the slow speed of non-deterministic replay.

The graph below demonstrates why this approach is ineffective, by skipping to the middle of teleop before running simulated Hoot Replay (2x faster than real-time with loop cycle compensation). Even in this best-case scenario for Hoot Replay running at only 2x speed, the replay is completely unable to match the real outputs. Skipping large parts of the log massively increases the impact of the butterfly effect by completely changing the set of inputs accessible to the replayed code. One should not expect to see accurate outputs at any speed unless all of the inputs are accounted for during replay.

Skipping: Hoot Replay (2x, modified)

💨 Rapid Iteration

Log replay can be used in a variety of environments that take advantage of the ability to rapidly iterate on code or debug issues without access to the robot. Here are a few examples where replay can play a critical role in the debugging process:

  • Debugging complex logic issues between matches without access to a practice field.
  • Retuning an auto-score tolerance in the pits based on data from the last match.
  • Testing a variety of vision filtering techniques between in-person meetings.
  • Remotely debugging issues for a team by repeatedly logging additional outputs.
  • Generating outputs after every match that are too complex to run on the RoboRIO.

Every one of these use cases depends on being able to run replay faster than real-time. A typical match log may be 10 minutes long, and a replay feature that takes 10 minutes to run is not practical in any of these scenarios. Whether log replay is used under time pressure at an event or at home for rapid debugging, quickly running multiple replays with different outputs or tuning parameters is absolutely core to its utility.

Comparison

| AdvantageKit/PyKit | Hoot Replay |
| --- | --- |
| ✅ Run as fast as possible (e.g. ~50x real-time) | ❌ Accuracy decreases with faster speeds |
| ✅ Replay Watch for fast iteration | ❌ Replay process is fully manual |
| ✅ Pull and push logs directly to AdvantageScope | ❌ Manual file management, multiple logs per match |

Deterministic replay means that accuracy is unaffected by the replay speed. Running replay ~50 times faster than real-time is common, which means that a 10 minute match log can be replayed in just 12 seconds. AdvantageKit is designed to make rapid iteration as painless as possible through features like Replay Watch and integration with AdvantageScope; just open a log, run replay, and see the results with no manual log management required.
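
For context, here is roughly what enabling replay looks like in code. The sketch below follows the pattern from AdvantageKit's template projects (consult the current template for the exact mode-switching logic, which may vary between versions); `setUseTiming(false)` is what allows replay to run as fast as possible:

```java
import org.littletonrobotics.junction.LogFileUtil;
import org.littletonrobotics.junction.LoggedRobot;
import org.littletonrobotics.junction.Logger;
import org.littletonrobotics.junction.wpilog.WPILOGReader;
import org.littletonrobotics.junction.wpilog.WPILOGWriter;

public class Robot extends LoggedRobot {
  public Robot() {
    // Replay mode: run as fast as possible instead of once per 20 ms cycle.
    setUseTiming(false);

    // Locate the log to replay, read inputs from it, and write the
    // replayed outputs to a new "_sim" log for comparison.
    String logPath = LogFileUtil.findReplayLog();
    Logger.setReplaySource(new WPILOGReader(logPath));
    Logger.addDataReceiver(new WPILOGWriter(LogFileUtil.addPathSuffix(logPath, "_sim")));
    Logger.start();
  }
}
```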

By contrast, Hoot Replay's non-deterministic approach presents users with difficult trade-offs between accuracy and practicality. Running at just 5x speed already has a major impact on accuracy while still taking a full 2 minutes per replay iteration. Non-determinism makes replay more difficult to use in the high-pressure scenarios where it matters the most.

The video below demonstrates what the difference in speed between deterministic and non-deterministic replay looks like in practice on a short 5:48 match log. Several replays of the same log are synchronized and shown in real-time.

🧱 Code Structure

While Hoot Replay involves significant trade-offs, its core design goal is to "simplify" hardware interactions. Unlike AdvantageKit, Hoot Replay allows some subsystems to keep CTRE's standard subsystem structure (combining high-level logic, hardware configuration, low-level controls, and simulation in a single class).

Subsystems under Hoot Replay fall into the two categories shown below. Note that users must select a single CAN bus to replay, which means that many subsystems built entirely from CTRE devices are not natively compatible with Hoot Replay. For subsystems that are not natively compatible, every input must be manually logged and replayed.

Natively Compatible:

  • CTRE devices on the replayed CAN bus

Manual Logging:

  • All other CTRE devices
  • Non-CTRE devices
  • Non-CAN sensors (e.g. RIO data)
  • Network devices (e.g. Limelight, PhotonVision)
  • Dashboard inputs (e.g. auto choosers)

Hardware Abstraction vs. Data Injection

All replay frameworks sometimes require alternative code structures to maintain compatibility with replay. AdvantageKit and PyKit build all subsystems around hardware abstraction, which provides a clean separation between the parts of the code that must be isolated: high-level logic, simulation, and replayed code are never able to interact in unintended ways.

The table below compares the implications of this structure against Hoot Replay's approach:

| | Hardware Abstraction (AdvantageKit, PyKit) | Data Injection (Hoot Replay) |
| --- | --- | --- |
| Code Structure | The functions of each subsystem are divided into several smaller classes. | All functions of the subsystem are combined into a single large class. |
| Templates | ✅ AdvantageKit provides template projects for many subsystems, including swerve drives and vision systems (compatible with several vendors). | ⚠️ Minimal examples are provided. No template projects for subsystems with manual logging. |
| Data Flow | ✅ Data flow is well-defined to ensure clean separation between real, replay, and sim modes. | ❌ All data is accessible to all parts of the subsystem. Careful planning and frequent testing are required to ensure that modes are well-separated. |
| Input Logging | ✅ Error-free logging of a large number of inputs is facilitated by annotation and record logging. | ❌ Each new input field requires several lines of additional boilerplate, which can easily cause subtle issues during replay if implemented incorrectly. |
| Dashboards | ✅ Convenience classes are provided to simplify the process of using dashboard inputs. | ❌ All data must be logged manually by the user, even outside of subsystems. |
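
As an example of the dashboard convenience classes mentioned above, a chooser can be wrapped so that the selected value is logged as an input and injected automatically during replay. A minimal sketch, assuming AdvantageKit's LoggedDashboardChooser (the key and option names here are illustrative):

```java
import org.littletonrobotics.junction.networktables.LoggedDashboardChooser;

public class AutoSelector {
  // The chooser's selected value is logged as an input, so replay sees
  // exactly the option the drive team picked during the match.
  private final LoggedDashboardChooser<String> chooser =
      new LoggedDashboardChooser<>("Auto Routine"); // Illustrative key

  public AutoSelector() {
    chooser.addDefaultOption("Do Nothing", "none");
    chooser.addOption("Four Piece", "fourPiece");
  }

  public String getSelected() {
    return chooser.get();
  }
}
```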

Example: Vision Subsystem

The code below represents a feature-complete Limelight vision subsystem built with both AdvantageKit (hardware abstraction) and Hoot Replay (data injection):

  • The AdvantageKit version creates clean separation between the different components of the vision system, making each class easier to understand and debug. The hardware interface with automatic logging enforces clear and correct data flows by default. Annotation, record, and enum logging also allow complex data types to be logged with minimal effort, as shown in the VisionIOInputs class below.
  • The Hoot Replay version combines all of the functionality in a single class, with manual hooks to read and write data for each input field. Note that there is no obvious separation between the replayed and non-replayed parts of the code, making it easy to read from invalid data sources or replay data incorrectly. The minimal utilities for logging complex types also result in a confusing structure for input data.

Vision hardware interface (34 lines)

import edu.wpi.first.math.geometry.Pose3d;
import edu.wpi.first.math.geometry.Rotation2d;
import org.littletonrobotics.junction.AutoLog;

public interface VisionIO {
  @AutoLog
  public static class VisionIOInputs {
    public boolean connected = false;
    public TargetObservation latestTargetObservation =
        new TargetObservation(Rotation2d.kZero, Rotation2d.kZero);
    public PoseObservation[] poseObservations = new PoseObservation[0];
    public int[] tagIds = new int[0];
  }

  /** Represents the angle to a simple target, not used for pose estimation. */
  public static record TargetObservation(Rotation2d tx, Rotation2d ty) {}

  /** Represents a robot pose sample used for pose estimation. */
  public static record PoseObservation(
      double timestamp,
      Pose3d pose,
      double ambiguity,
      int tagCount,
      double averageTagDistance,
      PoseObservationType type) {}

  public static enum PoseObservationType {
    MEGATAG_1,
    MEGATAG_2,
    PHOTONVISION
  }

  public default void updateInputs(VisionIOInputs inputs) {}
}
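
For reference, here is a minimal sketch of how a subsystem might consume this interface, following the AdvantageKit template pattern (the VisionIOInputsAutoLogged class is generated automatically from the @AutoLog annotation; the class layout here is a simplified assumption, not the full template):

```java
import edu.wpi.first.wpilibj2.command.SubsystemBase;
import org.littletonrobotics.junction.Logger;

public class Vision extends SubsystemBase {
  private final VisionIO io;
  private final VisionIOInputsAutoLogged inputs = new VisionIOInputsAutoLogged();

  // The IO implementation is swapped at construction time: real hardware,
  // simulation, or an empty VisionIO {} for replay.
  public Vision(VisionIO io) {
    this.io = io;
  }

  @Override
  public void periodic() {
    io.updateInputs(inputs); // Real/sim: read hardware. Replay: no-op.
    Logger.processInputs("Vision", inputs); // Log on the real robot; inject from the log in replay.

    // High-level logic below this point reads only from `inputs`, so it
    // behaves identically on the real robot, in sim, and in replay.
  }
}
```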

📋 Miscellaneous

The table below provides an overview of the differences between each replay tool. Note that some of the restrictions of Hoot Replay can be addressed via complex manual logging as discussed above.

| | AdvantageKit | PyKit | Hoot Replay |
| --- | --- | --- | --- |
| Accuracy | ✅ Deterministic | ✅ Deterministic | ❌ Non-deterministic |
| Rapid Iteration | ✅ Replay at any speed | ✅ Replay at any speed | ❌ Accuracy decreases with speed |
| Code Structure | ✅ Hardware abstraction + automatic logging | ✅ Hardware abstraction + automatic logging | ❌ Manual data injection |
| Vendor | ✅ No restriction + templates for multiple vendors | ✅ No restriction | ❌ Vendor-locked to CTRE devices |
| CAN Buses | ✅ No restriction | ✅ No restriction | ❌ Requires a single CAN bus |
| FRC Languages | Java | Python | Java, Python, C++ |
| Pricing | Free & Open Source | Free & Open Source | 💰 Subscription: Requires Phoenix Pro |
| Users in 2025 | 598 teams | N/A | <10 teams |
note

The number of AdvantageKit users is based on official usage reporting data published by FIRST. The number of Hoot Replay users is estimated based on a search of public GitHub repositories using Hoot Replay and the percentage of all teams that publish code on GitHub.