Merged
84 changes: 84 additions & 0 deletions docs/contributing.mdx
@@ -253,3 +253,87 @@ git commit -m '[docit] update api docs' --allow-empty
[fork]: https://github.com/alphaville/optimization-engine
[API guidelines]: https://rust-lang-nursery.github.io/api-guidelines/about.html
[API checklist]: https://rust-lang-nursery.github.io/api-guidelines/checklist.html

## Running tests locally

If you are working on the Python interface (`opengen`) or the website/docs,
it is best to use a dedicated Python virtual environment.

### Set up a virtual environment

From within `open-codegen/`, create and activate a virtual environment:

```bash
cd open-codegen
python3 -m venv venv
source venv/bin/activate
python -m pip install --upgrade pip
pip install -e .
```

If you plan to run the benchmark suite as well, install the extra dependency:

```bash
pip install "pytest-benchmark[histogram]"
```

### Run the Rust tests

From the repository root, run:

```bash
cargo test
```

This runs all tests, including the doc-tests (the examples embedded in documentation comments).
To run only the library unit tests, do:

```bash
cargo test --lib
```

If you want a faster compile-only check, you can also run:

```bash
cargo check
```

### Run the Python and code-generation tests

From within `open-codegen/`, run the following tests after activating the `venv`:

```bash
# Activate venv first
python -W ignore test/test_constraints.py -v
python -W ignore test/test.py -v
python -W ignore test/test_ocp.py -v
```

The ROS2 tests should normally be run from an environment where ROS2 is already
installed and configured, for example a dedicated `micromamba` environment.
Do not assume they will run from the plain `venv` above unless that
environment also contains a working ROS2 installation together with the `ros2`
and `colcon` tools.

For example:

```bash
cd open-codegen
micromamba activate ros_env
pip install .
python -W ignore test/test_ros2.py -v
```

If ROS2 is not installed locally, you can still run the rest of the Python
test suite.

### Run linting and extra checks

From the repository root, it is also useful to run:

```bash
cargo clippy --all-targets
```

Before opening a pull request, please run the tests that are relevant to the
part of the codebase you changed and make sure they pass locally.
133 changes: 47 additions & 86 deletions docs/openrust-arithmetic.mdx
@@ -4,132 +4,110 @@ title: Single and double precision
description: OpEn with f32 and f64 number types
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

:::note Info
The functionality presented here was introduced in OpEn version [`0.12.0`](https://pypi.org/project/opengen/#history).
The new API is fully backward-compatible with previous versions of OpEn.
The functionality presented here was introduced in OpEn version [`0.12.0`](https://crates.io/crates/optimization_engine/0.12.0-alpha.1).
The new API is fully backward-compatible with previous versions of OpEn,
with `f64` being the default scalar type.
:::

## Overview

OpEn's Rust API supports both `f64` and `f32`.

Most public Rust types are generic over a scalar type `T` with `T: num::Float`, and in most places the default type is `f64`. This means:

- if you do nothing special, you will usually get `f64`
- if you want single precision, you can explicitly use `f32`
- all quantities involved in one solver instance should use the same scalar type

In particular, this applies to:

- cost and gradient functions
- constraints
- `Problem`
- caches such as `PANOCCache`, `FBSCache`, and `AlmCache`
- optimizers such as `PANOCOptimizer`, `FBSOptimizer`, and `AlmOptimizer`
- solver status types such as `SolverStatus<T>` and `AlmOptimizerStatus<T>`
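Because most public types share the `T: num::Float` bound, your own helper code can be written once and instantiated at either precision. The sketch below is not from the OpEn codebase; it uses plain `std` operator bounds instead of `num::Float` so that it compiles without extra dependencies:

```rust
use std::ops::{Add, Mul};

/// Squared Euclidean norm, generic over the scalar type, mirroring how
/// OpEn's types accept either `f32` or `f64`.
fn squared_norm<T>(v: &[T]) -> T
where
    T: Copy + Default + Add<Output = T> + Mul<Output = T>,
{
    // `T::default()` is zero for both `f32` and `f64`
    v.iter().fold(T::default(), |acc, &x| acc + x * x)
}
```

Calling `squared_norm(&[3.0_f32, 4.0_f32])` and `squared_norm(&[3.0_f64, 4.0_f64])` instantiates the same code at both precisions.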

## When to use `f64` and when to use `f32`

### `f64`

Use `f64` when you want maximum numerical robustness and accuracy. This is the safest default for:
OpEn's Rust API now supports both `f64` and `f32`. Note that with `f32`
you may encounter issues with convergence, especially if you are solving
particularly ill-conditioned problems. On the other hand, `f32` is sometimes
the preferred type for embedded applications and can lead to lower
solve times.

- desktop applications
- difficult nonlinear problems
- problems with tight tolerances
- problems that are sensitive to conditioning
When using `f32`, (i) make sure the problem is properly scaled,
and (ii) consider opting for less demanding tolerances.
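One simple way to keep a problem well scaled is to solve in rescaled variables. The helpers below are a sketch (not part of the OpEn API): if each coordinate `x[i]` is expected to have magnitude roughly `s[i]`, solving in `u = x / s` keeps all coordinates of order one, which is friendlier to `f32` arithmetic:

```rust
/// Map physical variables `x` into scaled variables `u = x / s`, where
/// `s[i]` is a rough bound on the magnitude of `x[i]`.
fn scale_down(x: &[f32], s: &[f32], u: &mut [f32]) {
    for ((ui, &xi), &si) in u.iter_mut().zip(x).zip(s) {
        *ui = xi / si;
    }
}

/// Inverse map: recover the physical variables from the scaled ones.
fn scale_up(u: &[f32], s: &[f32], x: &mut [f32]) {
    for ((xi, &ui), &si) in x.iter_mut().zip(u).zip(s) {
        *xi = ui * si;
    }
}
```

With this approach, the cost and gradient closures are written in terms of `u`, and the solution is mapped back with `scale_up` after the solver returns.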

### `f32`
## PANOC example

Use `f32` when memory footprint and throughput matter more than ultimate accuracy. This is often useful for:
Below are two examples of using the solver with single- and double-precision
arithmetic.

- embedded applications
- high-rate MPC loops
- applications where moderate tolerances are acceptable
<Tabs>

In general, `f32` may require:

- slightly looser tolerances
- more careful scaling of the problem
- fewer expectations about extremely small residuals

## The default: `f64`

If your functions, constants, and vectors use `f64`, you can often omit the scalar type completely.
<TabItem value="using-f32" label="Single precision">

```rust
use optimization_engine::{constraints, panoc::PANOCCache, Problem, SolverError};
use optimization_engine::panoc::PANOCOptimizer;

let tolerance = 1e-6;
let tolerance = 1e-4_f32;
let lbfgs_memory = 10;
let radius = 1.0;
let radius = 1.0_f32;

let bounds = constraints::Ball2::new(None, radius);

let df = |u: &[f64], grad: &mut [f64]| -> Result<(), SolverError> {
grad[0] = u[0] + u[1] + 1.0;
grad[1] = u[0] + 2.0 * u[1] - 1.0;
let df = |u: &[f32], grad: &mut [f32]| -> Result<(), SolverError> {
grad[0] = u[0] + u[1] + 1.0_f32;
grad[1] = u[0] + 2.0_f32 * u[1] - 1.0_f32;
Ok(())
};

let f = |u: &[f64], cost: &mut f64| -> Result<(), SolverError> {
*cost = 0.5 * (u[0] * u[0] + u[1] * u[1]);
let f = |u: &[f32], cost: &mut f32| -> Result<(), SolverError> {
*cost = 0.5_f32 * (u[0] * u[0] + u[1] * u[1]);
Ok(())
};

let problem = Problem::new(&bounds, df, f);
let mut cache = PANOCCache::new(2, tolerance, lbfgs_memory);
let mut cache = PANOCCache::<f32>::new(2, tolerance, lbfgs_memory);
let mut optimizer = PANOCOptimizer::new(problem, &mut cache);

let mut u = [0.0, 0.0];
let mut u = [0.0_f32, 0.0_f32];
let status = optimizer.solve(&mut u).unwrap();
assert!(status.has_converged());
```
</TabItem>

Because all literals and function signatures above are `f64`, the compiler infers `T = f64`.

## Using `f32`

To use single precision, make the scalar type explicit throughout the problem definition.
<TabItem value="default-f64" label="Double precision" default>

```rust
use optimization_engine::{constraints, panoc::PANOCCache, Problem, SolverError};
use optimization_engine::panoc::PANOCOptimizer;

let tolerance = 1e-4_f32;
let tolerance = 1e-6;
let lbfgs_memory = 10;
let radius = 1.0_f32;
let radius = 1.0;

let bounds = constraints::Ball2::new(None, radius);

let df = |u: &[f32], grad: &mut [f32]| -> Result<(), SolverError> {
grad[0] = u[0] + u[1] + 1.0_f32;
grad[1] = u[0] + 2.0_f32 * u[1] - 1.0_f32;
let df = |u: &[f64], grad: &mut [f64]| -> Result<(), SolverError> {
grad[0] = u[0] + u[1] + 1.0;
grad[1] = u[0] + 2.0 * u[1] - 1.0;
Ok(())
};

let f = |u: &[f32], cost: &mut f32| -> Result<(), SolverError> {
*cost = 0.5_f32 * (u[0] * u[0] + u[1] * u[1]);
let f = |u: &[f64], cost: &mut f64| -> Result<(), SolverError> {
*cost = 0.5 * (u[0] * u[0] + u[1] * u[1]);
Ok(())
};

let problem = Problem::new(&bounds, df, f);
let mut cache = PANOCCache::<f32>::new(2, tolerance, lbfgs_memory);
let mut cache = PANOCCache::new(2, tolerance, lbfgs_memory);
let mut optimizer = PANOCOptimizer::new(problem, &mut cache);

let mut u = [0.0_f32, 0.0_f32];
let mut u = [0.0, 0.0];
let status = optimizer.solve(&mut u).unwrap();
assert!(status.has_converged());
```
</TabItem>

</Tabs>

The key idea is that the same scalar type must be used consistently in:
To use single precision, make sure that the following are all using `f32`:

- the initial guess `u`
- the closures for the cost and gradient
- the constraints
- the cache
- any tolerances and numerical constants
- the cache: explicitly use `PANOCCache::<f32>`, as in the example above

## Example with FBS

@@ -175,7 +153,7 @@ For example, if you use:

then the whole ALM solve runs in single precision.

If instead you use plain `f64` literals and `&[f64]` closures, the solver runs in double precision.
If instead you use plain `f64` literals and `&[f64]` closures, the solver runs in double precision. This is the default behaviour.

## Type inference tips

@@ -188,28 +166,11 @@ Good ways to make `f32` intent clear are:
- annotate caches explicitly, for example `PANOCCache::<f32>::new(...)`
- annotate closure arguments, for example `|u: &[f32], grad: &mut [f32]|`
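For instance, annotating the closure arguments is usually enough: every unsuffixed literal inside then infers to `f32`. The sketch below (a hypothetical helper, not an OpEn API) reuses the gradient from the examples above:

```rust
/// A gradient-like closure whose annotated argument types pin the scalar
/// to `f32`; the literals inside need no `_f32` suffixes.
fn make_grad() -> impl Fn(&[f32], &mut [f32]) {
    |u: &[f32], grad: &mut [f32]| {
        grad[0] = u[0] + u[1] + 1.0;       // 1.0 infers to f32
        grad[1] = u[0] + 2.0 * u[1] - 1.0; // so do 2.0 and 1.0
    }
}
```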

## Important rule: do not mix `f32` and `f64`

The following combinations are problematic:
:::warning Important rule: do not mix `f32` and `f64`
For example, the following combinations are problematic:

- `u: &[f32]` with a cost function writing to `&mut f64`
- `Ball2::new(None, 1.0_f64)` together with `PANOCCache::<f32>`
- `tolerance = 1e-6` in one place and `1e-6_f32` elsewhere if inference becomes ambiguous

Choose one scalar type per optimization problem and use it everywhere.

## Choosing tolerances

When moving from `f64` to `f32`, it is often a good idea to relax tolerances.

Typical starting points are:

- `f64`: `1e-6`, `1e-8`, or smaller if needed
- `f32`: `1e-4` or `1e-5`

The right choice depends on:

- scaling of the problem
- conditioning
- solver settings
- whether the problem is solved repeatedly in real time
:::
23 changes: 15 additions & 8 deletions docs/openrust-basic.md
@@ -25,6 +25,13 @@ The definition of an optimization problem consists in specifying the following t
- the set of constraints, $U$, as an implementation of a trait

### Cost functions

:::note Info
Throughout this document we will be using `f64`, which is the default
scalar type. However, OpEn now supports `f32` as well.
:::
The **cost function** `f` is a Rust function of type `|u: &[f64], cost: &mut f64| -> Result<(), SolverError>`. The first argument, `u`, is the point at which the function is evaluated. The second argument is a mutable reference to the result (the cost). The function returns a *status code* of type `Result<(), SolverError>`; the status code `Ok(())` means that the computation was successful. Other status codes can be used to encode errors/exceptions as defined in the [`SolverError`] enum.

As an example, consider the cost function $f:\mathbb{R}^2\to\mathbb{R}$ that maps a two-dimensional
@@ -33,8 +40,8 @@ vector $u$ to $f(u) = 5 u_1 - u_2^2$. This will be:

```rust
let f = |u: &[f64], c: &mut f64| -> Result<(), SolverError> {
*c = 5.0 * u[0] - u[1].powi(2);
Ok(())
*c = 5.0 * u[0] - u[1].powi(2);
Ok(())
};
```

@@ -50,9 +57,9 @@ This function can be implemented as follows:

```rust
let df = |u: &[f64], grad: &mut [f64]| -> Result<(), SolverError> {
grad[0] = 5.0;
grad[1] = -2.0*u[1];
Ok(())
grad[0] = 5.0;
grad[1] = -2.0*u[1];
Ok(())
};
```

@@ -291,9 +298,9 @@ fn main() {
}
};

// define the bounds at every iteration
let bounds = constraints::Ball2::new(None, radius);
// define the bounds at every iteration
let bounds = constraints::Ball2::new(None, radius);

// the problem definition is updated at every iteration
let problem = Problem::new(&bounds, df, f);

11 changes: 9 additions & 2 deletions docs/python-ros2.mdx
@@ -22,7 +22,7 @@ In ROS2, functionality is organised in **nodes** which exchange data by publishi

OpEn can generate ready-to-use ROS2 packages directly from a parametric optimizer. The generated package exposes the optimizer as a ROS2 node, includes the required message definitions, and provides the files needed to build, configure, and launch it inside a ROS2 workspace.

The input and output messages are the same as in the [ROS1 package documentation](./python-ros#messages).
The input message matches the [ROS1 package documentation](./python-ros#messages). The ROS2 output message additionally includes `error_code` and `error_message` fields so that invalid requests and solver failures can be reported with more detail.

## Configuration Parameters

@@ -180,6 +180,8 @@ solution:
inner_iterations: 41
outer_iterations: 6
status: 0
error_code: 0
error_message: ''
cost: 1.1656771801253916
norm_fpr: 2.1973496274068953e-05
penalty: 150000.0
@@ -198,11 +200,14 @@ solve_time_ms: 0.2175
uint8 STATUS_NOT_CONVERGED_OUT_OF_TIME=2
uint8 STATUS_NOT_CONVERGED_COST=3
uint8 STATUS_NOT_CONVERGED_FINITE_COMPUTATION=4
uint8 STATUS_INVALID_REQUEST=5

float64[] solution # solution
uint8 inner_iterations # number of inner iterations
uint16 outer_iterations # number of outer iterations
uint8 status # status code
uint8 status # coarse status code
int32 error_code # detailed error code (0 on success)
string error_message # detailed error message (empty on success)
float64 cost # cost value at solution
float64 norm_fpr # norm of FPR of last inner problem
float64 penalty # penalty value
@@ -213,6 +218,8 @@ solve_time_ms: 0.2175
```
</details>

If the request is invalid, the node publishes a result with `status: 5` (`STATUS_INVALID_REQUEST`) and fills `error_code` and `error_message`. For example, if the parameter vector has the wrong length, `error_code` is `3003` and `error_message` explains the mismatch.

Instead of starting the node with `ros2 run`, you can also use the generated launch file:
