
Conversation

@haydenroche5
Contributor

No description provided.

@haydenroche5 haydenroche5 self-assigned this Jul 5, 2023
@haydenroche5 haydenroche5 force-pushed the nf40 branch 2 times, most recently from 3e335a0 to c9138ca on July 5, 2023 16:19
Contributor

@m-mcgowan m-mcgowan left a comment

LGTM. Was it necessary to train the model on non-counted objects (e.g. pedestrians and cyclists) so these don't produce false positives?

I read through the docs but I'll leave a more thorough docs review in TJ's capable hands. It's a nice writeup and helps demystify ML object detection in an easily digested way.


Although event notes are added to `events.qo` as they happen, these notes are only synced to Notehub periodically. How often this syncing occurs is controlled by an environment variable, `publish_period`, which is covered in the next section.
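For context on how `publish_period` would be read on the device: Notecard environment variables are fetched with an `env.get` request. The following is a hedged sketch in note-python's request-dict style, not the project's actual code; the helper names and the example value are invented for illustration.

```python
# Sketch (assumed helpers): fetch the publish_period environment variable
# with the Notecard's env.get request.
def build_env_get(name):
    # env.get takes the variable name and returns its value as a string.
    return {"req": "env.get", "name": name}

def parse_period(rsp, default=60):
    # The value comes back in the response's "text" field.
    return int(rsp.get("text", default))

req = build_env_get("publish_period")
# On the device you'd run: rsp = card.Transaction(req)
rsp = {"text": "300"}  # example response shape, for illustration only
period = parse_period(rsp)
```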

\* The number of frames requires to start an event is controlled by the variable `event_start_threshold` in `main.py`. It's set to 1 by default. This low value is good for detecting fast moving cars that are only in the shot for a brief time. The number of frames required to end an event is controlled by the variable `event_end_threshold`. It's set to 12 by default, which was empirically determined to be a good value for our setup, as it prevented a single event from being counted multiple times. Feel free to tweak these threshold values based on your situation.
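The two thresholds described in the note above amount to a small state machine over consecutive frames. A plain-Python sketch of that logic (illustrative only, not the code in `main.py`):

```python
# Hypothetical sketch of the frame-count thresholds described above.
# The constant names mirror the variables in main.py; the detector
# itself is stubbed out as a sequence of per-frame booleans.

EVENT_START_THRESHOLD = 1   # frames with a car before an event starts
EVENT_END_THRESHOLD = 12    # frames without a car before the event ends

def count_events(detections):
    """Count car events in a sequence of per-frame booleans."""
    events = 0
    start_run = 0   # consecutive frames with a car (while idle)
    end_run = 0     # consecutive frames without a car (while in an event)
    in_event = False
    for car_present in detections:
        if not in_event:
            start_run = start_run + 1 if car_present else 0
            if start_run >= EVENT_START_THRESHOLD:
                in_event = True
                end_run = 0
        else:
            end_run = end_run + 1 if not car_present else 0
            if end_run >= EVENT_END_THRESHOLD:
                in_event = False
                events += 1
                start_run = 0
    return events
```

Note how a brief flicker (car briefly undetected mid-event) does not split one pass into two events, which is the point of the higher end threshold.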
Contributor

Suggested change
\* The number of frames requires to start an event is controlled by the variable `event_start_threshold` in `main.py`. It's set to 1 by default. This low value is good for detecting fast moving cars that are only in the shot for a brief time. The number of frames required to end an event is controlled by the variable `event_end_threshold`. It's set to 12 by default, which was empirically determined to be a good value for our setup, as it prevented a single event from being counted multiple times. Feel free to tweak these threshold values based on your situation.
The number of frames required to start an event is controlled by the variable `event_start_threshold` in `main.py`. It's set to 1 by default. This low value is good for detecting fast moving cars that are only in the shot for a brief time. The number of frames required to end an event is controlled by the variable `event_end_threshold`. It's set to 12 by default, which was empirically determined to be a good value for our setup, as it prevented a single event from being counted multiple times. Feel free to tweak these threshold values based on your situation.


This is super valuable for evaluating the performance of the model in real time. Additionally, you can monitor serial logs by clicking Serial Terminal at the bottom of the IDE window.

Once you're happy with how the model is performing, you can copy `main.py` over to the board so that it'll run outside the IDE context with this command:
Contributor

Suggested change
Once you're happy with how the model is performing, you can copy `main.py` over to the board so that it'll run outside the IDE context with this command:
Once you're happy with how the model is performing, you'll want to run it independently of the IDE. Copy `main.py` over to the board with this command:

```
python pyboard.py -d <serial port> --no-soft-reset -f cp main.py :/
```

Note that for `pyboard.py` to work, you'll need to install [pyserial](https://pypi.org/project/pyserial/) with `pip install pyserial`, if you don't have it installed already. Make sure to replace `<serial port>` with your serial port. Unplug the camera board and plug it back in to reboot the device. `main.py` will start running after boot up.
Contributor

Could you be more specific about what should be unplugged? I'm guessing the USB cable, but saves guessing if we call it out explicitly.

I'm also imagining the development PC is no longer required, so it's worth calling out that the development PC is only required while training the model.

```
'req': 'note.template',
'file': 'events.qo',
'body': {
    'start': 24,
```
Contributor

It's a pity we don't have the template constants defined in note-python. It might be worth adding a link to the note.template docs where these are listed.
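For reference, the request in the excerpt above presumably assembles into something like the sketch below. The numeric template type codes (including 24) are enumerated in the Notecard note.template documentation, which is the list worth linking to.

```python
# Sketch of the note.template request from the excerpt above. The value
# 24 is one of the Notecard's numeric template type codes; see the
# note.template API docs for the full list of codes and their meanings.
req = {
    "req": "note.template",
    "file": "events.qo",
    "body": {
        "start": 24,  # event start time field, templated as a number
    },
}
# On the device: card.Transaction(req)
```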

```
# Setup I2C channel for communicating with the Notecard.
i2c = I2C(2)
card = notecard.OpenI2C(i2c, notecard.NOTECARD_I2C_ADDRESS, 0, debug=True)
```

Contributor

Adding app user agent info would be consistent with our other apps. An example is given here: blues/note-python#56

```
event_happening = False
add_event(card, event_start_time, event_end_time)

if time.ticks_diff(time.ticks_ms(),
```
Contributor

I'm curious why we do this in code compared to, say, using hub.set:outbound.

Contributor Author

Reworking to use hub.set.
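For reference, the `hub.set`-based approach looks roughly like this sketch. The ProductUID is a placeholder and the 5-minute interval is illustrative, not the project's actual value.

```python
# Sketch: let the Notecard handle sync timing instead of doing it in code.
# In "periodic" mode, outbound notes are synced at most every `outbound`
# minutes.
req = {
    "req": "hub.set",
    "product": "com.your-company:your-product",  # placeholder ProductUID
    "mode": "periodic",
    "outbound": 5,  # minutes between outbound syncs (illustrative value)
}
# On the device: card.Transaction(req)
```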

Contributor

@tjvantoll tjvantoll left a comment

Love it.

The resolution, 120x120, is deliberately low in order to keep the model small enough to run on the board's microcontroller (MCU), but feel free to tune it to your liking.
4. Click the Start button in the bottom left corner, below the Connect button. At this point, you should see a live stream of images coming in the Frame Buffer window.
5. Using the Frame Buffer, position your camera so that its looking at the roadway where you want to detect cars:
Contributor

Suggested change
5. Using the Frame Buffer, position your camera so that its looking at the roadway where you want to detect cars:
5. Using the Frame Buffer, position your camera so that it's looking at the roadway where you want to detect cars:


### note-python

To get the note-python files onto the MCU, use the `setup_board.py` script. First, you must identify the MCU's serial port. On Linux, it'll typically be something like `/dev/ttyACM0`. Once you have that, run `python setup_board.py <serial port>`, replacing `<serial port>` with your serial port. This script does a few things:
Contributor

> you must identify the MCU's serial port

Is there anything you could link to to help people with this? I would have no idea what to do.
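One stdlib-only way to narrow the port down is sketched below. The glob patterns are common defaults on Linux and macOS, not guaranteed names for every board; with pyserial installed, `python -m serial.tools.list_ports -v` prints a friendlier listing with device descriptions.

```python
import glob

def candidate_ports():
    """Guess likely MicroPython serial ports on Linux and macOS."""
    patterns = ["/dev/ttyACM*", "/dev/ttyUSB*", "/dev/cu.usbmodem*"]
    ports = []
    for pattern in patterns:
        # glob returns the device paths currently present on the system.
        ports.extend(glob.glob(pattern))
    return sorted(ports)
```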


`main.py` loops infinitely, grabbing an image from the camera and running it through your model. The output of the model is a probability value in the range [0, 1], with 0 corresponding to a 0% probability of a car being in the image and 1 corresponding to a 100% probability. If this probability exceeds a configurable threshold (discussed further below), a car "event" begins. The event ends once the probability has dropped below the threshold for 12 consecutive images.*

Once an event ends, a note is added to the [Notefile](https://dev.blues.io/api-reference/glossary/#notefile) `events.qo` in this format:
Contributor

Very, very trivial but “events.qo Notefile” sounds nicer than “Notefile events.qo” to me.


## Model Limitations

The underlying machine learning model is [Edge Impulse's FOMO](https://docs.edgeimpulse.com/docs/edge-impulse-studio/learning-blocks/object-detection/fomo-object-detection-for-constrained-devices). With this model, it's difficult to accurately identify multiple cars in a given frame, as FOMO will often produce multiple detections for the same car. You can see this happen in the GIF above. As such, if there are multiple cars in the frame, that will still only count as 1 event. So, you'll get a more accurate car count if you focus the camera on a narrow region where there's typically only one car passing through at a time.
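To make the limitation concrete: FOMO emits one centroid per activated grid cell, so a single car can yield several nearby detections. A naive distance-based merge, sketched below, is one common mitigation; this is illustrative only and not part of this project's code, and the 20-pixel threshold is an arbitrary assumption.

```python
def merge_detections(centroids, min_dist=20):
    """Greedily merge (x, y) detection centroids closer than min_dist pixels."""
    merged = []
    for (x, y) in centroids:
        for i, (mx, my) in enumerate(merged):
            # Compare squared distances to avoid a sqrt.
            if (x - mx) ** 2 + (y - my) ** 2 < min_dist ** 2:
                # Fold the new centroid into the existing one by averaging.
                merged[i] = ((mx + x) / 2, (my + y) / 2)
                break
        else:
            merged.append((x, y))
    return merged
```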
Contributor

> in the GIF above

there is no gif :)

Contributor

> So, you'll get a

Because of this, you’ll get a

Just a suggestion.

@tjvantoll
Contributor

tjvantoll commented Jul 7, 2023

> Was it necessary to train the model on non-counted objects (e.g. pedestrians and cyclists) so these don't produce false positives?

I was curious about this too, so it might be worth having a sentence to address it somewhere in the README.

@haydenroche5
Contributor Author

> Was it necessary to train the model on non-counted objects (e.g. pedestrians and cyclists) so these don't produce false positives?
>
> I was curious about this too, so it might be worth having a sentence to address it somewhere in the README.

If you set the confidence threshold sufficiently high, I didn't notice a need for explicit training on non-cars. But yes, if you want your model to be more robust, you would add in that sort of confounding data during training. I'll briefly mention this.

@haydenroche5
Contributor Author

Thanks for the reviews. I need to make sure the latest push works OK with the actual setup, as there are some notable changes. I'll hold off merging until that's done.

@m-mcgowan
Contributor

