Closed
Changes from all commits
Commits (51)
ca541b6 prepare for rewrite (HyperCodec, Nov 21, 2024)
c8caec1 setup basic topology structure (HyperCodec, Nov 21, 2024)
78fa730 create a neural net cache type (HyperCodec, Nov 21, 2024)
fb8c61f register input counts while generating network (HyperCodec, Nov 21, 2024)
6596544 implement predict (HyperCodec, Nov 21, 2024)
4b77383 make rayon feature default (HyperCodec, Nov 21, 2024)
8c32b6c better serde support (HyperCodec, Nov 21, 2024)
19705e9 cargo fmt (HyperCodec, Nov 21, 2024)
562021a small use change (HyperCodec, Nov 21, 2024)
62b8f87 remove useless explicit Sync declarations (HyperCodec, Nov 21, 2024)
07318f8 little rayon changes (HyperCodec, Nov 21, 2024)
1da2e50 better support for activation fns (HyperCodec, Nov 21, 2024)
d88fba6 docfixes (HyperCodec, Nov 21, 2024)
c8b6e3a use activation scopes in creation of neural network (HyperCodec, Nov 21, 2024)
0d31dac start helper functions for division reproduction (HyperCodec, Nov 22, 2024)
6127b85 implement a connection type for ease of use (HyperCodec, Nov 22, 2024)
7ccd25f more derives for connection (HyperCodec, Nov 22, 2024)
a73c1e7 change to bi-directional neuron linking (HyperCodec, Nov 22, 2024)
5c42be5 cargo fmt (HyperCodec, Nov 22, 2024)
d692c5b implement safe add_connection (HyperCodec, Nov 22, 2024)
37ee247 revert bi-drirectional edges (HyperCodec, Nov 22, 2024)
333e2f0 implement mutate_weight (HyperCodec, Nov 22, 2024)
b2cc29f extract activation impls into separate module (#51) (HyperCodec, Nov 22, 2024)
bc18011 start implementing division reproduction (HyperCodec, Nov 22, 2024)
b112a2f implement weights mutation and move code to the correct trait (HyperCodec, Nov 25, 2024)
a11eb18 change to CONTRIBUTING (HyperCodec, Nov 25, 2024)
f8c896c make a basic test (HyperCodec, Nov 25, 2024)
8b5d2c3 finish implementing connection downshifting (untested) (HyperCodec, Nov 25, 2024)
d9c5f16 add an assertion in breaking code (HyperCodec, Nov 25, 2024)
99f9054 change input layer activation to linear (HyperCodec, Nov 26, 2024)
34c27c3 fix index out of bounds error (HyperCodec, Jan 24, 2025)
4db74df rename activation builtins module (HyperCodec, Jan 24, 2025)
508767f make the yield send to the back of the task queue to prevent a potent… (HyperCodec, Jan 29, 2025)
68c9c51 add bias arg to NeuronCache::new (HyperCodec, Jan 29, 2025)
d6a8cbe cargo fmt (HyperCodec, Jan 29, 2025)
5e78ec4 fix neural net initialization that was causing a hang (HyperCodec, Jan 31, 2025)
080b747 change back to yield_now (HyperCodec, Jan 31, 2025)
434c02a implement crossover reproduction (HyperCodec, Feb 3, 2025)
568aeaf cargo fmt (HyperCodec, Feb 3, 2025)
3e1aa2d make workflow run in more scenarios (HyperCodec, Feb 3, 2025)
8b04338 enable aggressive doc warnings (HyperCodec, Feb 3, 2025)
8c7b4af create serde test (HyperCodec, Feb 3, 2025)
d306312 document neuralnet.rs (HyperCodec, Feb 3, 2025)
72988dd panic when activations is empty in new_with_activations (HyperCodec, Feb 3, 2025)
1ad50fc small comment (HyperCodec, Feb 3, 2025)
8caf823 cargo fmt (HyperCodec, Feb 3, 2025)
12518af small input change (HyperCodec, Feb 3, 2025)
1c131c9 document lib.rs (HyperCodec, Feb 3, 2025)
f57131b document activation.rs (HyperCodec, Feb 3, 2025)
2628768 small doc changes to activation builtin (HyperCodec, Feb 3, 2025)
92749c2 fix traits in README (HyperCodec, Feb 3, 2025)
2 changes: 1 addition & 1 deletion .github/workflows/ci-cd.yml
@@ -2,7 +2,7 @@ name: CI-CD

 on:
   push:
-    branches: [main]
+    branches: [main, dev]
   pull_request:

 jobs:
5 changes: 3 additions & 2 deletions CONTRIBUTING
@@ -1,6 +1,7 @@
 Thanks for contributing to this project.

-To get started, check out the [issues page](https://github.com/inflectrix/neat). You can either find a feature/fix from there or start a new issue, then begin implementing it in your own fork of this repo.
+To get started, check out the [issues page](https://github.com/hypercodec/neat). You can either find a feature/fix from there or start a new issue, then begin implementing it in your own fork of this repo.

-Once you are done making the changes you'd like the make, start a pull request to the [dev](https://github.com/inflectrix/neat/tree/dev) branch. State your changes and request a review. After all branch rules have been satisfied, someone with management permissions on this repository will merge it.
+Once you are done making the changes you'd like the make, start a pull request to the [dev](https://github.com/hypercodec/neat/tree/dev) branch. State your changes and request a review. After all branch rules have been satisfied and the pull request has a valid reason, someone with management permissions on this repository will merge it.

+You could also make a draft PR while implementing your features if you want feedback or discussion before finalizing your changes.
78 changes: 41 additions & 37 deletions Cargo.lock

Some generated files are not rendered by default.

27 changes: 12 additions & 15 deletions Cargo.toml
@@ -3,9 +3,9 @@ name = "neat"
 description = "Crate for working with NEAT in rust"
 version = "0.5.1"
 edition = "2021"
-authors = ["Inflectrix"]
-repository = "https://github.com/inflectrix/neat"
-homepage = "https://github.com/inflectrix/neat"
+authors = ["HyperCodec"]
+repository = "https://github.com/HyperCodec/neat"
+homepage = "https://github.com/HyperCodec/neat"
 readme = "README.md"
 keywords = ["genetic", "machine-learning", "ai", "algorithm", "evolution"]
 categories = ["algorithms", "science", "simulation"]
@@ -18,22 +18,19 @@ rustdoc-args = ["--cfg", "docsrs"]
 # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

 [features]
-default = ["max-index"]
-crossover = ["genetic-rs/crossover"]
-rayon = ["genetic-rs/rayon", "dep:rayon"]
-max-index = []
+default = []
 serde = ["dep:serde", "dep:serde-big-array"]
-

 [dependencies]
-bitflags = "2.5.0"
-genetic-rs = { version = "0.5.1", features = ["derive"] }
-lazy_static = "1.4.0"
-rand = "0.8.5"
-rayon = { version = "1.8.1", optional = true }
-serde = { version = "1.0.197", features = ["derive"], optional = true }
+atomic_float = "1.1.0"
+bitflags = "2.8.0"
+genetic-rs = { version = "0.5.4", features = ["rayon", "derive"] }
+lazy_static = "1.5.0"
+rayon = "1.10.0"
+replace_with = "0.1.7"
+serde = { version = "1.0.217", features = ["derive"], optional = true }
 serde-big-array = { version = "0.5.1", optional = true }

 [dev-dependencies]
 bincode = "1.3.3"
-serde_json = "1.0.114"
+serde_json = "1.0.138"
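
The manifest keeps `serde` as an opt-in feature (now paired with `serde-big-array`), and `serde_json` stays in dev-dependencies for tests. Below is a minimal sketch of what that feature enables; it is not taken from this PR's test code and assumes `NeuralNetworkTopology<I, O>` implements `Serialize`/`Deserialize` when the `serde` feature is enabled, as the updated README's feature list suggests.

```rust
// Hypothetical sketch, not code from this PR: round-trip a topology through
// serde_json, assuming the crate's `serde` feature provides Serialize and
// Deserialize for NeuralNetworkTopology.
use neat::NeuralNetworkTopology;

fn roundtrip<const I: usize, const O: usize>(
    topology: &NeuralNetworkTopology<I, O>,
) -> NeuralNetworkTopology<I, O> {
    // Serialize the genome to JSON (e.g. to save a trained population to disk)...
    let json = serde_json::to_string(topology).expect("failed to serialize topology");
    // ...and parse it back into an equivalent topology.
    serde_json::from_str(&json).expect("failed to deserialize topology")
}
```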
95 changes: 5 additions & 90 deletions README.md
@@ -1,102 +1,17 @@
 # neat
-[<img alt="github" src="https://img.shields.io/github/last-commit/inflectrix/neat" height="20">](https://github.com/inflectrix/neat)
+[<img alt="github" src="https://img.shields.io/github/last-commit/hypercodec/neat" height="20">](https://github.com/hypercodec/neat)
 [<img alt="crates.io" src="https://img.shields.io/crates/d/neat" height="20">](https://crates.io/crates/neat)
 [<img alt="docs.rs" src="https://img.shields.io/docsrs/neat" height="20">](https://docs.rs/neat)

 Implementation of the NEAT algorithm using `genetic-rs`.

 ### Features
-- rayon - Uses parallelization on the `NeuralNetwork` struct and adds the `rayon` feature to the `genetic-rs` re-export.
-- serde - Adds the NNTSerde struct and allows for serialization of `NeuralNetworkTopology`
-- crossover - Implements the `CrossoverReproduction` trait on `NeuralNetworkTopology` and adds the `crossover` feature to the `genetic-rs` re-export.
+- serde - Implements `Serialize` and `Deserialize` on most of the types in this crate.

-*Do you like this repo and want to support it? If so, leave a ⭐*
+*Do you like this crate and want to support it? If so, leave a ⭐*

-### How To Use
-When working with this crate, you'll want to use the `NeuralNetworkTopology` struct in your agent's DNA and
-the use `NeuralNetwork::from` when you finally want to test its performance. The `genetic-rs` crate is also re-exported with the rest of this crate.
-
-Here's an example of how one might use this crate:
-```rust
-use neat::*;
-
-#[derive(Clone, RandomlyMutable, DivisionReproduction)]
-struct MyAgentDNA {
-    network: NeuralNetworkTopology<1, 2>,
-}
-
-impl GenerateRandom for MyAgentDNA {
-    fn gen_random(rng: &mut impl rand::Rng) -> Self {
-        Self {
-            network: NeuralNetworkTopology::new(0.01, 3, rng),
-        }
-    }
-}
-
-struct MyAgent {
-    network: NeuralNetwork<1, 2>,
-    // ... other state
-}
-
-impl From<&MyAgentDNA> for MyAgent {
-    fn from(value: &MyAgentDNA) -> Self {
-        Self {
-            network: NeuralNetwork::from(&value.network),
-        }
-    }
-}
-
-fn fitness(dna: &MyAgentDNA) -> f32 {
-    // agent will simply try to predict whether a number is greater than 0.5
-    let mut agent = MyAgent::from(dna);
-    let mut rng = rand::thread_rng();
-    let mut fitness = 0;
-
-    // use repeated tests to avoid situational bias and some local maximums, overall providing more accurate score
-    for _ in 0..10 {
-        let n = rng.gen::<f32>();
-        let above = n > 0.5;
-
-        let res = agent.network.predict([n]);
-        let resi = res.iter().max_index();
-
-        if resi == 0 ^ above {
-            // agent did not guess correctly, punish slightly (too much will hinder exploration)
-            fitness -= 0.5;
-
-            continue;
-        }
-
-        // agent guessed correctly, they become more fit.
-        fitness += 3.;
-    }
-
-    fitness
-}
-
-fn main() {
-    let mut rng = rand::thread_rng();
-
-    let mut sim = GeneticSim::new(
-        Vec::gen_random(&mut rng, 100),
-        fitness,
-        division_pruning_nextgen,
-    );
-
-    // simulate 100 generations
-    for _ in 0..100 {
-        sim.next_generation();
-    }
-
-    // display fitness results
-    let fits: Vec<_> = sim.entities
-        .iter()
-        .map(fitness)
-        .collect();
-
-    dbg!(&fits, fits.iter().max());
-}
-```
+# How To Use
+TODO

 ### License
 This crate falls under the `MIT` license
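
The example deleted above would not compile as written: `fitness` starts as an integer but is adjusted with floats, `resi == 0 ^ above` mixes an integer with a bool, `fits.iter().max()` is unavailable for `f32`, and `max_index` belonged to the now-removed `max-index` feature. Since the new How To Use section is still a TODO, here is a lightly corrected sketch of that same example. Every name in it (`NeuralNetworkTopology::new(0.01, 3, rng)`, `NeuralNetwork::from`, `GeneticSim`, `division_pruning_nextgen`, `sim.entities`) comes from the deleted README and may not match the rewritten API in this PR, and `rand` would need to be a direct dependency now that it is gone from Cargo.toml.

```rust
// Hedged sketch based on the deleted README example; names may not match the rewritten API.
use neat::*;

// DNA that the genetic algorithm mutates and reproduces.
#[derive(Clone, RandomlyMutable, DivisionReproduction)]
struct MyAgentDNA {
    network: NeuralNetworkTopology<1, 2>,
}

impl GenerateRandom for MyAgentDNA {
    fn gen_random(rng: &mut impl rand::Rng) -> Self {
        Self {
            // Constructor arguments (mutation rate, mutation passes) as in the old README.
            network: NeuralNetworkTopology::new(0.01, 3, rng),
        }
    }
}

// Runtime agent built from the DNA when it is time to evaluate it.
struct MyAgent {
    network: NeuralNetwork<1, 2>,
}

impl From<&MyAgentDNA> for MyAgent {
    fn from(value: &MyAgentDNA) -> Self {
        Self {
            network: NeuralNetwork::from(&value.network),
        }
    }
}

fn fitness(dna: &MyAgentDNA) -> f32 {
    // The agent tries to predict whether a number is greater than 0.5.
    let mut agent = MyAgent::from(dna);
    let mut rng = rand::thread_rng();
    let mut fitness = 0.0; // f32, so the +/- adjustments below typecheck

    // Repeated trials reduce situational bias in the score.
    for _ in 0..10 {
        let n = rng.gen::<f32>();
        let above = n > 0.5;

        let res = agent.network.predict([n]);
        // Index of the larger output, standing in for the old `max_index` helper.
        let resi = if res[0] >= res[1] { 0 } else { 1 };

        if (resi == 0) ^ above {
            // Wrong guess: small penalty so exploration isn't crushed.
            fitness -= 0.5;
        } else {
            // Correct guess: reward.
            fitness += 3.0;
        }
    }

    fitness
}

fn main() {
    let mut rng = rand::thread_rng();

    let mut sim = GeneticSim::new(
        Vec::gen_random(&mut rng, 100),
        fitness,
        division_pruning_nextgen,
    );

    // Simulate 100 generations.
    for _ in 0..100 {
        sim.next_generation();
    }

    // f32 is not Ord, so reduce with f32::max instead of Iterator::max.
    let fits: Vec<f32> = sim.entities.iter().map(fitness).collect();
    let best = fits.iter().copied().fold(f32::NEG_INFINITY, f32::max);
    dbg!(&fits, best);
}
```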