Modeling Approach
Overview of our forecast modeling approach.
Modeling principles
Modo's forecasting principles describe the values and design philosophy used while building our models.
1. Credible
A forecast is a trust product; its value is derived from how much our users trust our judgment. This starts with us being intellectually honest. At its core, this means at every step of the way we ask the question: "Would I buy a forecast with this assumption/design choice/modeling methodology?"
2. Integrity-focused
People should buy our forecast because they want our view on the market. That means we don't target specific outputs. We don't target numbers presented by other forecast providers. We don't target outputs that our customers want. We don't target results that align with our own biases.
3. Transparent
Trust in a forecasting product is earned, not implicit. A great way to foster trust is to allow our users to understand all the assumptions that underpin the model, how well it performs historically, our forecasting principles, and more. A user request for information is always met with "Yes!".
4. Integrated
A key value add for our forecast is how it integrates with your workflow; our job is to allow you to make more informed decisions as quickly as possible. When making product decisions, we don't just ask, "How will this improve the forecast?" we ask, "How will this delight users?".
Tooling & Architecture
Our power market model is built entirely in-house, from the ground up. We don’t rely on any third-party modeling software — giving us full control over the model structure, assumptions, and workflows. This allows us to prioritize transparency, adaptability, and consistency across everything we build.
The model is written entirely in Python, making it easy to read, audit, and extend. We use a familiar and robust ecosystem of open-source packages, which keeps the tooling approachable and well-integrated with our broader data infrastructure.
At the core of our optimization engine is Linopy, a Python-native modeling language. Linopy lets us define constraints in a clear and flexible way, while remaining solver-agnostic. This means we can focus on building good models without worrying about low-level solver syntax — and can switch solvers as needed for performance or licensing.
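To give a feel for the kind of constraint logic the optimization engine enforces, here is a toy merit-order dispatch in pure Python. This is a deliberate stand-in, not Linopy: the real model expresses these constraints declaratively in Linopy and hands them to an LP solver, while the sketch below just dispatches hypothetical generators cheapest-first while respecting the same two constraints (capacity limits, supply equals demand).

```python
# Toy merit-order dispatch: a pure-Python stand-in for the constraint
# logic (capacity limits, supply == demand) that the real model states
# declaratively in Linopy. Generator data is hypothetical.

def merit_order_dispatch(generators, demand_mw):
    """Dispatch cheapest generators first, respecting capacity limits.

    generators: list of (name, marginal_cost, capacity_mw)
    Returns {name: dispatched_mw}; raises if demand cannot be met.
    """
    dispatch = {}
    remaining = demand_mw
    for name, cost, cap in sorted(generators, key=lambda g: g[1]):
        mw = min(cap, remaining)          # capacity constraint
        dispatch[name] = mw
        remaining -= mw
    if remaining > 1e-9:                  # supply == demand constraint
        raise ValueError(f"Unserved demand: {remaining} MW")
    return dispatch

fleet = [("wind", 0.0, 40.0), ("ccgt", 45.0, 100.0), ("peaker", 120.0, 50.0)]
print(merit_order_dispatch(fleet, 120.0))
# → {'wind': 40.0, 'ccgt': 80.0, 'peaker': 0.0}
```

An LP formulation generalizes this greedy logic to cases merit order cannot handle, such as transmission limits or intertemporal storage constraints.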
All modeled regions share the same core model. This architecture brings a high level of consistency in how we simulate different power markets, while still allowing for region-specific inputs and customizations where needed. It also simplifies maintenance, testing, and cross-market analysis.
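One common way to realize "shared core, region-specific customizations" is a base configuration overlaid with per-region overrides. The sketch below shows that pattern; the keys and values are illustrative, not the actual model schema.

```python
# Illustrative config overlay: every region runs the same core model,
# with a small per-region override dict layered on top of a shared base.
# Keys and values here are hypothetical, not Modo's actual schema.

def merge_config(base: dict, overrides: dict) -> dict:
    """Recursively overlay region-specific overrides onto the base config."""
    merged = dict(base)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)
        else:
            merged[key] = value
    return merged

BASE = {"timestep_minutes": 60, "solver": {"name": "highs", "gap": 0.01}}
ERCOT_OVERRIDES = {"timestep_minutes": 15, "solver": {"gap": 0.005}}

print(merge_config(BASE, ERCOT_OVERRIDES))
# → {'timestep_minutes': 15, 'solver': {'name': 'highs', 'gap': 0.005}}
```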
The model runs on AWS, distributing workloads efficiently across multiple CPUs by parallelizing over the forecast horizon. This accelerates turnaround on client requests and enables rapid internal iteration and improvement.
The model is modular and version-controlled, allowing us to easily test changes, add features, and iterate rapidly — all while keeping a clean record of how the model evolves over time.
Workflows
Beyond the core model logic, we've built supporting infrastructure to make the model reliable, testable, and easy to work with — both day-to-day and over the long term.
Testing & Quality Assurance
We use automated checks throughout the modeling pipeline to catch issues early. This includes:
- Validation of input data (e.g. generator parameters, load shapes)
- Structural checks on model outputs (e.g. ensuring supply equals demand and transmission limits are respected)
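Minimal versions of those two check categories might look like the sketch below. The function names and thresholds are illustrative; the real pipeline's checks are richer.

```python
# Illustrative input validation and structural output checks, in the
# spirit of the list above. Names (check_inputs, check_balance) and
# tolerances are hypothetical.

def check_inputs(generators):
    """Input validation: generator parameters must be physically sensible."""
    for g in generators:
        assert g["capacity_mw"] > 0, f"{g['name']}: non-positive capacity"
        assert g["marginal_cost"] >= 0, f"{g['name']}: negative cost"

def check_balance(supply_mw, demand_mw, flows, limits, tol=1e-6):
    """Structural output checks: energy balance and transmission limits."""
    assert abs(sum(supply_mw) - demand_mw) < tol, "supply != demand"
    for line, mw in flows.items():
        assert abs(mw) <= limits[line] + tol, f"{line} exceeds its limit"

check_inputs([{"name": "ccgt", "capacity_mw": 100, "marginal_cost": 45}])
check_balance([40.0, 80.0], 120.0, {"A-B": 30.0}, {"A-B": 50.0})
print("checks passed")
```

Running checks like these on every pipeline stage means a bad input fails loudly at ingestion rather than silently skewing a forecast.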
We also validate our model's accuracy against Modo's battery revenue indices:
- Benchmarking against ME BESS historical revenue to assess the model's accuracy and inform our modeling assumptions
Reproducibility & Versioning
All model runs are version-controlled, with a full record of:
- The code and model structure
- Input datasets used
- Any scenario-specific overrides
This makes it easy to rerun past cases or explain exactly why a result changed — a critical feature when models are updated frequently.
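One way to capture that record is a run manifest combining the code version, content hashes of the input files, and any scenario overrides; rerunning a past case then means replaying a manifest. Field names below are illustrative, not Modo's actual schema.

```python
# Illustrative run manifest: code version + input hashes + overrides.
# Field names are hypothetical, not Modo's actual schema.
import hashlib
import json
import os
import subprocess
import tempfile

def file_sha256(path):
    """Content hash of an input file, so changed inputs are detectable."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_manifest(input_paths, overrides):
    try:
        commit = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()
    except Exception:
        commit = "unknown"            # e.g. running outside a repo
    return {
        "code_version": commit,
        "inputs": {p: file_sha256(p) for p in input_paths},
        "overrides": overrides,
    }

# Demo with a throwaway input file:
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as f:
    f.write("hour,load_mw\n0,100\n")
    path = f.name
manifest = build_manifest([path], {"gas_price": 3.5})
print(json.dumps(sorted(manifest), indent=0))
os.unlink(path)
```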
Data Abstraction
The model is decoupled from specific input data formats. Raw inputs are cleaned and standardized before being passed into the model. This means we can swap data sources (e.g. switch between ISO data feeds or internal forecasts) without changing the core model logic.
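That decoupling is commonly implemented as per-source adapters that normalize raw feeds into one standard schema before the model sees them. The source names and field mappings below are hypothetical.

```python
# Sketch of data abstraction via adapters: each raw source maps into a
# single standard schema, so the core model never sees source-specific
# formats. Source names and field mappings are hypothetical.

STANDARD_FIELDS = ("timestamp", "load_mw")

def from_source_a(row):            # e.g. one ISO data feed
    return {"timestamp": row["ts"], "load_mw": float(row["demand"])}

def from_source_b(row):            # e.g. an internal forecast in GW
    return {"timestamp": row["time"], "load_mw": row["load_gw"] * 1000.0}

ADAPTERS = {"source_a": from_source_a, "source_b": from_source_b}

def standardize(source, rows):
    """Normalize raw rows from a named source into the standard schema."""
    adapt = ADAPTERS[source]
    out = [adapt(r) for r in rows]
    assert all(tuple(r) == STANDARD_FIELDS for r in out)
    return out

print(standardize("source_b", [{"time": "2025-01-01T00:00", "load_gw": 1.2}]))
# → [{'timestamp': '2025-01-01T00:00', 'load_mw': 1200.0}]
```

Swapping a data source then means writing one new adapter, with the core model untouched.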
Collaboration & Documentation
We develop the model collaboratively using Git, with clear documentation and commit history. This ensures that model behavior is transparent across the team, and that improvements or changes are easy to review and trace.
Together, these practices help us build not just a technically sound model, but one that’s robust, interpretable, and responsive to the evolving needs of different markets.