Commit 759b2d4

Add tests + AddGraphProgram tool + various fixes

1 parent 8b0ffff commit 759b2d4

18 files changed: +403 −102 lines

README.md

Lines changed: 75 additions & 40 deletions
Large diffs are not rendered by default.

docs/Core API/Graph Program.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -172,7 +172,7 @@ CREATE
 (answer)-[:NEXT]->(end)
 """
 
-main = gp.GraphProgram().from_cypher(cypher)
+main = gp.GraphProgram(name="main").from_cypher(cypher)
 
 ```
````
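The change above makes the program name explicit at construction time. As a standalone sketch of that builder pattern (a hypothetical stand-in class, not the actual `hybridagi.GraphProgram` implementation):

```python
# Sketch of the builder pattern shown in the diff above. This is NOT the
# real hybridagi GraphProgram, only an illustration of why the name is
# now passed at construction time instead of being inferred later.
class GraphProgram:
    def __init__(self, name: str):
        # The name is required up front, so the program can be stored
        # and referenced in memory before any Cypher is loaded.
        self.name = name
        self.cypher = None

    def from_cypher(self, cypher: str) -> "GraphProgram":
        self.cypher = cypher
        return self  # return self to allow method chaining

main = GraphProgram(name="main").from_cypher("// CREATE ... graph program here")
print(main.name)  # -> main
```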

docs/FAQ.md

Lines changed: 46 additions & 2 deletions

````diff
@@ -2,10 +2,54 @@
 
 ## Frequently Asked Questions
 
+### Why HybridAGI?
+
+We are dissatisfied with the current trajectory of agent-based systems that lack control and efficiency. Today's approach involves building ReAct/MRKL agents that operate independently without human control, often leading to infinite loops of nonsense because of their tendency to stay within their data distribution. Multi-agent systems attempt to address this issue, but they often produce even more nonsense and prohibitive costs due to the agents' chitchat. Additionally, today's agents often require fine-tuning to enhance or correct their behavior, which can be a time-consuming and complex process.
+
+With HybridAGI, the only thing you need to do is modify the behavior graph (the graph programs). We believe that fine-tuning should be a last resort when in-context learning fails to yield the desired results. By grounding cognitive science in computer science concepts, we empower programmers to build the agent system of their dreams by controlling the sequence of actions and decisions. Our goal is to build an agent system that can solve real-world problems by using an intermediary language that is interpretable by both humans and machines. If we want to keep humans in the loop in the coming years, we need to design agent systems for that purpose.
+
 ### What is the difference between LangGraph and HybridAGI?
 
-TODO
+LangGraph is built on top of LangChain, which was also the case for HybridAGI last year. However, given the direction of the LangChain team towards encouraging ReAct agents that lack control and explainability, we switched to DSPy, which provides better value by focusing on pipeline optimization. Recently, LangGraph emerged to compensate for the poor decision-making of LangChain, but we had already proven the value of our work. Moreover, LangGraph, like many agentic frameworks, describes a static finite state machine. In our vision of AGI systems, being Turing complete is required (which is the case for many agentic frameworks), but the capability to program itself on the fly (meaning real continuous learning) is also required to truly begin the AGI journey, and that capability is lacking in other frameworks.
 
 ### What is the difference between Llama-Index and HybridAGI?
 
-TODO
+Llama-Index recently released an event-driven agent system; like LangGraph, it is a static state machine, and the same remarks apply to their work.
+
+### What is the difference between DSPy and HybridAGI?
+
+HybridAGI is built on top of the excellent work of the DSPy team, and it is intended as an abstraction to simplify the creation of complex DSPy programs in the context of LLM Agents. DSPy is more general and is also used for simpler tasks that don't need agentic systems. Unlike DSPy, our programs are not static but dynamic and can adapt to the user query by dynamically calling programs stored in memory. Moreover, we focus our work on explainable neuro-symbolic AGI systems using graphs. The graph programs are easier to build than implementing them from scratch using DSPy. If DSPy is the PyTorch of LLM applications, think of HybridAGI as the Keras or HuggingFace of neuro-symbolic LLM agents.
+
+### What is the difference between OpenAI o1 and HybridAGI?
+
+OpenAI o1 and HybridAGI share many common goals, but they are built with different paradigms in mind. Like OpenAI o1, HybridAGI uses multi-step inferences and is a goal-oriented agent system. However, unlike OpenAI o1, we guide the CoT trace of our agent system instead of letting it explore its action space freely, a paradigm closer to A*, where the Agent navigates a defined graph, than to Q-learning. This results in more efficient reasoning, as experts can program the system to solve a particular use case. We can use smaller LLMs, reducing the environmental impact and increasing the ROI. The downside of our technology is that you need expert knowledge in your domain, as well as in programming and AI systems, to best exploit its capabilities. For that reason, we provide audit, consulting, and development services to people and companies that lack the technical skills in AI to implement their system.
+
+### Who are we?
+
+We're not based in Silicon Valley or part of a big company; we're a small, dedicated team from the south of France. Our focus is on delivering an AI product where the user maintains control. We're dissatisfied with the current trajectory of agent-based products. We are experts in human-robot interaction and in building interactive systems that behave as expected. While we draw inspiration from cognitive sciences and symbolic AI, we aim to keep our concepts grounded in computer science for a wider audience.
+
+Our mission extends beyond AI safety and performance; it's about shaping the world we want to live in. Even if programming becomes obsolete in 5 or 10 years, replaced by some magical prompt, we believe that traditional prompts are insufficient for preserving jobs. They're too simplistic and fail to accurately convey intentions.
+
+In contrast, programming each reasoning step demands expert knowledge in prompt engineering and programming. Surprisingly, it's enjoyable and not that difficult for programmers, as it allows you to gain insight into how AI truly operates by controlling it. Natural language combined with algorithms opens up endless possibilities. We can't envision a world without it.
+
+### How do we make money?
+
+We provide audit, consulting, and development services for businesses that want to implement neuro-symbolic AI solutions in various domains, from computer vision to high-level reasoning with knowledge graph/ontology systems in critical domains like health, biology, finance, aerospace, and many more.
+
+HybridAGI is a research project to showcase our capabilities but also to bring our vision of safe AGI systems for the future. We are a bootstrapped start-up that seeks real-world use cases instead of making pretentious claims to please VCs and fuel the hype.
+
+Because our vision of LLM capabilities is more moderate than others', we are actively looking to combine different fields of AI (evolutionary, symbolic, and deep learning) to make this leap into the future without burning the planet by relying on scaling alone. Besides the obvious environmental impact, by relying on small/medium models we gain a better understanding and the capability to do useful research without trillion-dollar datacenters.
+
+HybridAGI is our way to be prepared for that future and, at the same time, to showcase our understanding of modern and traditional AI systems. HybridAGI is proof that you don't need billions of dollars to work on AGI systems, and that a small team of passionate people can make the difference.
+
+### Why did we release our work under GNU GPL?
+
+We released HybridAGI under GNU GPL for various reasons, the first being that we want to protect our work and the work of our contributors. The second is that we want to build a future where people are not dependent on Big AI tech companies; we want to empower people, not enslave them by destroying the market and leaving them jobless without a way to retain ownership of their knowledge. HybridAGI is a community project, by the community, for the community. Finally, HybridAGI is a way to connect with talented and like-minded people all around the world and to create a community around a desirable future.
+
+### Is HybridAGI just a toolbox?
+
+Some could argue that HybridAGI is just a toolbox. However, unlike LangChain or Llama-Index, HybridAGI has been designed from the ground up to work in synergy with a special-purpose LLM trained on our DSL/architecture. We have enhanced our software thanks to the community, and because we created our own programming language, we are also the best positioned to program it. We have accumulated data, learned many augmentation techniques, and cleaned our datasets during the last year of the project to keep our competitive advantage. We might release the LLM we are building at some point, when we decide that it is beneficial for us to do so.
+
+### Can I use HybridAGI commercially?
+
+Our software is released under the GNU GPL license to protect ourselves and the contributions of the community. Because the logic of your application (the graph programs) is kept separate, there is no IP problem in using HybridAGI. Moreover, when used in production, you will likely want to build a FastAPI server to request your agent and separate the backend and frontend of your app (like a website), so the GPL license doesn't contaminate the other pieces of your software. We also provide dual licensing for our clients if needed.
````

docs/Modules API/Agents/Graph Program Interpreter.md

Lines changed: 0 additions & 3 deletions
This file was deleted.
Lines changed: 37 additions & 0 deletions

````diff
@@ -0,0 +1,37 @@
+# Graph Interpreter Agent
+
+The `GraphInterpreterAgent` is the agent system that executes the Cypher software stored in memory. It can branch over the graph programs by asking itself a question when encountering a Decision step, use tools when encountering an Action step, and jump to other programs when encountering a Program step.
+
+## Usage
+
+```python
+from hybridagi.modules.agents import GraphInterpreterAgent
+from hybridagi.core.datatypes import AgentState, Query
+from hybridagi.modules.agents.tools import PredictTool, SpeakTool
+
+agent_state = AgentState()
+
+tools = [
+    PredictTool(),
+    SpeakTool(
+        agent_state = agent_state,
+    )
+]
+
+agent = GraphInterpreterAgent(
+    agent_state = agent_state, # The agent state
+    program_memory = program_memory, # The program memory where the graph programs are stored
+    embeddings = None, # The embeddings to use when storing the agent steps (optional, default to None)
+    trace_memory = None, # The trace memory to store the agent steps (optional, default to None)
+    tools = tools, # The list of tools to use for the agent
+    entrypoint = "main", # The entrypoint for the graph programs (default to "main")
+    num_history = 5, # The number of last steps to remember in the agent context (default to 5)
+    commit_decision_steps = False, # Whether or not to include the decision steps in the agent context (default to False)
+    decision_lm = None, # The decision language model to use if different from the one configured (optional, default to None)
+    verbose = True, # Whether or not to display the colorful trace when executing the program (default to True)
+    debug = False, # Whether or not to raise exceptions during the execution of a program (default to False)
+)
+
+result = agent(Query(text="What is the capital of France?"))
+```
````
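The node-by-node execution described above can be illustrated with a toy interpreter. This is a simplified standalone sketch under an invented data model (not the real hybridagi internals): it walks a graph of steps, acts on Action steps, and picks an outgoing branch on Decision steps.

```python
# Toy sketch of a graph-program interpreter: walk the graph one node at
# a time, dispatching on the step type. Hypothetical data model, for
# illustration only.
steps = {
    "start": {"type": "Control", "next": "is_greeting"},
    "is_greeting": {"type": "Decision", "question": "Is the query a greeting?",
                    "branches": {"YES": "greet", "NO": "answer"}},
    "greet": {"type": "Action", "tool": "Speak", "next": "end"},
    "answer": {"type": "Action", "tool": "Predict", "next": "end"},
    "end": {"type": "Control", "next": None},
}

def decide(question: str) -> str:
    # Stand-in for the LLM asking itself the decision question.
    return "NO"

def run(steps: dict) -> list:
    trace, current = [], "start"
    while current is not None:
        step = steps[current]
        trace.append(current)  # record the visited step, like a trace memory
        if step["type"] == "Decision":
            current = step["branches"][decide(step["question"])]
        else:
            current = step["next"]
    return trace

print(run(steps))  # -> ['start', 'is_greeting', 'answer', 'end']
```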
Lines changed: 25 additions & 0 deletions

````diff
@@ -0,0 +1,25 @@
+The `AskUser` tool is useful to ask information to the user.
+
+## Output
+
+```python
+```
+
+## Usage
+
+```python
+from hybridagi.modules.agents.tools import AskUserTool
+
+ask_user = AskUserTool(
+    name = "AskUser", # The name of the tool
+    agent_state = agent_state, # The state of the agent
+    simulated = True, # Whether or not to simulate the user using a LLM
+    func = None, # Callable function to integrate with a front-end (optional)
+    lm = None, # The language model to use if different from the one configured (optional)
+)
+```
+
+### Integrate it with Gradio
+
+TODO
````
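The `func` parameter above is meant to plug a real front-end in place of the simulated user. A minimal sketch of such a callback (the exact signature expected by `AskUserTool` is an assumption here):

```python
# Hypothetical callback: given the question asked by the agent, return the
# user's answer. A real front-end (CLI, web UI) would collect input here;
# for a terminal app the body could simply be: return input(question)
def ask_user_func(question: str) -> str:
    answers = {"What is your name?": "Alice"}  # canned answers for the sketch
    return answers.get(question, "I don't know")

print(ask_user_func("What is your name?"))  # -> Alice
```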

docs/index.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -4,7 +4,7 @@ Welcome to HybridAGI documentation, you will find the ressources to understand a
 
 ### LLM Agent as Graph VS LLM Agent as Graph Interpreter
 
-What makes our approach different from Agent as Graph is the fact that our Agent system is *not a static finite state machine*, but an interpreter that can read/write and execute node by node a *dynamic graph data (the graph programs) structure separated from that process*. Making possible for the Agent to learn by executing, reading and modifying the graph programs (like any other data), in its essence HybridAGI is intended to be a self-programming system centered around the Cypher language.
+What makes our approach different from Agent as Graph (like LangGraph or LLama-Index) is the fact that our Agent system is *not a static finite state machine*, but an interpreter that can read/write and execute node by node a *dynamic graph data* (the graph programs) structure separated from that process. Making possible for the Agent to learn by executing, reading and modifying the graph programs (like any other data), in its essence HybridAGI is intended to be a self-programming system centered around the Cypher language.
 
 ## Install
````

hybridagi/core/graph_program.py

Lines changed: 12 additions & 12 deletions

````diff
@@ -42,15 +42,15 @@ class Action(BaseModel):
     tool: str = Field(description="The tool name")
     purpose: str = Field(description="The action purpose")
     prompt: Optional[str] = Field(description="The prompt used to infer to tool inputs")
-    inputs: Optional[List[str]] = Field(description="The input for the prompt", default=[])
-    output: Optional[str] = Field(description="The variable to store the action output", default=None)
+    var_in: Optional[List[str]] = Field(description="The list of input variables for the prompt", default=[])
+    var_out: Optional[str] = Field(description="The variable to store the action output", default=None)
     disable_inference: bool = Field(description="Weither or not to disable the inference", default=False)
 
 class Decision(BaseModel):
     id: str = Field(description="Unique identifier for the step")
     purpose: str = Field(description="The decision purpose")
     question: str = Field(description="The question to assess")
-    inputs: Optional[List[str]] = Field(description="The input prompt variables", default=[])
+    var_in: Optional[List[str]] = Field(description="The list of input variables for the prompt", default=[])
 
 class Program(BaseModel):
     id: str = Field(description="Unique identifier for the step")
@@ -296,8 +296,8 @@ def from_cypher(self, cypher_query: str) -> Optional["GraphProgram"]:
                 purpose=step_props["purpose"],
                 tool=step_props["tool"],
                 prompt=step_props["prompt"],
-                inputs=step_props["inputs"] if "inputs" in step_props else [],
-                output=step_props["output"] if "output" in step_props else None,
+                var_in=step_props["var_in"] if "var_in" in step_props else [],
+                var_out=step_props["var_out"] if "var_out" in step_props else None,
                 disable_inference=True if "disable_inference" in step_props else False,
             ))
         elif step_type == "Decision":
@@ -311,7 +311,7 @@ def from_cypher(self, cypher_query: str) -> Optional["GraphProgram"]:
                 id=step_props["id"],
                 purpose=step_props["purpose"],
                 question=step_props["question"],
-                inputs=step_props["inputs"] if "inputs" in step_props else [],
+                var_in=step_props["var_in"] if "var_in" in step_props else [],
             ))
         elif step_type == "Program":
             if "id" not in step_props:
@@ -364,10 +364,10 @@ def to_cypher(self):
             }
             if step.prompt:
                 args["prompt"] = step.prompt
-            if step.inputs and len(step.inputs) > 0:
-                args["inputs"] = step.inputs
-            if step.output:
-                args["output"] = step.output
+            if step.var_in and len(step.var_in) > 0:
+                args["var_in"] = step.var_in
+            if step.var_out:
+                args["var_out"] = step.var_out
             if step.disable_inference is True:
                 args["disable_inference"] = True
             cleaned_args = re.sub(key_quotes_regex, sub_regex, json.dumps(args, indent=2))
@@ -378,8 +378,8 @@ def to_cypher(self):
                 "purpose": step.purpose,
                 "question": step.question,
             }
-            if len(step.inputs) > 0:
-                args["inputs"] = step.inputs
+            if len(step.var_in) > 0:
+                args["var_in"] = step.var_in
             cleaned_args = re.sub(key_quotes_regex, sub_regex, json.dumps(args, indent=2))
             cypher += f"\n({step_id}:Decision "+cleaned_args+"),"
         elif isinstance(step, Program):
````
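The rename from `inputs`/`output` to `var_in`/`var_out` shows up both when parsing Cypher node properties and when serializing them back. A standalone sketch of the parsing side, using plain dicts instead of the real pydantic models:

```python
# Sketch of how the renamed step properties are read from parsed Cypher
# node properties, mirroring the from_cypher() changes above.
def read_action_props(step_props: dict) -> dict:
    return {
        "var_in": step_props.get("var_in", []),  # input variables for the prompt
        "var_out": step_props.get("var_out"),    # variable storing the action output
        "disable_inference": "disable_inference" in step_props,
    }

props = read_action_props({"var_in": ["objective"], "var_out": "answer"})
print(props)  # -> {'var_in': ['objective'], 'var_out': 'answer', 'disable_inference': False}
```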
Lines changed: 1 addition & 0 deletions

````diff
@@ -0,0 +1 @@
+# TODO
````

Lines changed: 1 addition & 0 deletions

````diff
@@ -0,0 +1 @@
+# TODO
````
