Lorenzejay/byoa#776
Conversation
…nd added to the crew agent doc page
gvieira left a comment
Happy to go through the review, but I think we can connect to discuss the idea of a single Agent interface that would save us from writing `if agent is this` and `if agent is that` checks.
```python
def __init__(__pydantic_self__, **data):
    config = data.pop("config", {})
    super().__init__(**config, **data)
```
If we aren't going to make gpt-4o the default LLM for everything, we need to add a `model_validator` that checks that the agent has an LLM and that it isn't `None`.

I agree. Other agents may not support 4o yet, but I do think a validator ensuring `llm` is not `None` is best.
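The validator discussed above could look something like this minimal sketch, assuming a Pydantic v2 base model (field names here are illustrative, not the actual crewAI definitions):

```python
# Hypothetical sketch: a Pydantic v2 model_validator that rejects agents
# created without an LLM, instead of silently defaulting to gpt-4o.
from typing import Any, Optional
from pydantic import BaseModel, model_validator

class BaseAgent(BaseModel):
    role: str
    llm: Optional[Any] = None  # no default model; caller must supply one

    @model_validator(mode="after")
    def ensure_llm_is_set(self) -> "BaseAgent":
        if self.llm is None:
            raise ValueError("Agent requires an `llm`; got None.")
        return self
```

With this in place, constructing an agent without an `llm` fails fast at validation time rather than at first LLM call.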
```python
# tentatively try to import from crewai_tools import BaseTool as CrewAITool
tools_list = []
try:
    # tentatively try to import from crewai_tools import BaseTool as CrewAITool
```
Note to self: `from crewai_tools import BaseTool as CrewAITool`.
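The tentative import above is the standard optional-dependency pattern; a minimal sketch (the `HAS_CREWAI_TOOLS` flag is illustrative, not part of the actual codebase):

```python
# Hypothetical sketch of the optional-dependency import discussed above.
# If crewai_tools is not installed, the agent simply runs without those tools.
tools_list = []
try:
    from crewai_tools import BaseTool as CrewAITool  # optional dependency
    HAS_CREWAI_TOOLS = True
except ImportError:
    HAS_CREWAI_TOOLS = False
```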
```python
if not self._rpm_controller:
    self._rpm_controller = rpm_controller
self.create_agent_executor()

def format_log_to_str(
```
Done; set as an abstractmethod.
```python
return tools
```

```python
return copied_agent

def get_output_converter(self, llm, text, model, instructions):
```
Should this be an abstract method on `BaseAgent` as well?
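Making it abstract would look roughly like this sketch (the `BaseAgent` here is a plain ABC for illustration; the real class mixes in Pydantic, and the concrete body is invented):

```python
# Hypothetical sketch: declaring get_output_converter abstract on BaseAgent,
# mirroring the abstractmethod treatment already given to format_log_to_str.
from abc import ABC, abstractmethod

class BaseAgent(ABC):
    @abstractmethod
    def get_output_converter(self, llm, text, model, instructions):
        """Return a converter that shapes raw LLM text into the requested model."""

class ConcreteAgent(BaseAgent):
    # Subclasses are forced to provide an implementation.
    def get_output_converter(self, llm, text, model, instructions):
        return {"llm": llm, "text": text, "model": model, "instructions": instructions}
```

Instantiating `BaseAgent` directly then raises `TypeError`, which surfaces missing implementations for third-party agents at construction time.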
Force-pushed 74bf476 to 680f17c.
None of these attributes exist on this class. You need to move these attributes over from executor.py and delete them there.

Need to move `crew` over to this class from executor.py.

Need to move `crew_agent` over from executor.py.

main does not have these changes. Think we can do this later?
```python
self._logger.log("debug", f"[{manager.role}] Task output: {task_output}")
```

```python
if hasattr(task.agent, "_token_process"):
```
Why do we have to do this check in hierarchical but not sequential?

Attaching an agent to a task is not required there, so `task.agent` can be `None`.

The hierarchical process does not need an agent set on the task.
```python
# type: ignore # Item "None" of "Agent | None" has no attribute "function_calling_llm"
llm = self.agent.function_calling_llm or self.agent.llm
```

```python
llm = getattr(self.agent, "function_calling_llm", None) or self.agent.llm
```
To avoid these checks, maybe we could update the base_agent to have a `function_calling_llm` attribute that defaults to `None`.
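The suggestion amounts to declaring the attribute once on the base class so every subclass has it; a minimal sketch using a plain dataclass stand-in for the real Pydantic model:

```python
# Hypothetical sketch: declare function_calling_llm on the base class with a
# None default, so call sites never need getattr/hasattr checks.
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class BaseAgent:
    llm: Optional[Any] = None
    function_calling_llm: Optional[Any] = None

agent = BaseAgent(llm="gpt-4o")
# The call site then simplifies to a plain attribute access:
llm = agent.function_calling_llm or agent.llm
```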
* better spacing
* works with llama index
* works on langchain custom, just need delegation to work
* cleanup for custom_agent class
* works with different argument expectations for agent_executor
* cleanup for hierarchical process, better agent_executor args handler, and added to the crew agent doc page
* removed code examples for langchain + llama index, added to docs instead
* added key output if return is not a str, and added some tests
* added hinting for CustomAgent class
* removed pass as it was not needed
* closer, just need to figure out agentTools
* running agents - llamaindex and langchain with base agent
* some cleanup on baseAgent
* minimum for agent to run for base class and ensure it works with hierarchical process
* cleanup for original agent to take on BaseAgent class
* Agent takes on langchain agent and cleanup across
* token handling working for usage_metrics to continue working
* installed llama-index, updated docs and added better name
* fixed some type errors
* base agent holds token_process
* hierarchical process uses proper tools and no longer relies on hasattr for token_processes
* removal of test_custom_agent_executions
* this fixes copying agents
* leveraging an executor class to trigger llamaindex agent
* llama index now has ask_human
* executor mixins added
* added output converter base class
* type listed
* cleanup for output conversions and tokenprocess, eliminated redundancy
* properly handling tokens
* simplified token calc handling
* original agent with base agent builder structure setup
* better docs
* no more llama-index dep
* cleaner docs
* test fixes
* poetry reverts and better docs
* base_agent_tools set for third party agents
* updated task and test fix

This ticket aims to let you bring your own AI agent, whether it's a custom LangChain one or a LlamaIndex one, and have it still work with a crew, or any custom agent.
`BaseAgent` serves as an abstraction to adhere to the crew flow.
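The bring-your-own-agent idea can be sketched as a thin adapter behind a `BaseAgent`-style interface, so the crew flow never branches on agent type. All names here (`BaseAgent`, `execute_task`, `LlamaIndexAgentAdapter`, `chat`) are illustrative assumptions, not the actual crewAI API:

```python
# Hypothetical sketch of BYOA: wrap a third-party agent behind a single
# interface so the crew never needs `if agent is this / if agent is that`.
from abc import ABC, abstractmethod

class BaseAgent(ABC):
    @abstractmethod
    def execute_task(self, task: str) -> str:
        """Run one task and return its output as a string."""

class LlamaIndexAgentAdapter(BaseAgent):
    def __init__(self, inner_agent):
        self.inner = inner_agent  # e.g. a llama-index chat agent

    def execute_task(self, task: str) -> str:
        # Delegate to the wrapped agent's own entry point and normalize to str,
        # matching the "key output if return is not a str" commit above.
        return str(self.inner.chat(task))
```

A LangChain adapter would implement the same method, so both process types can treat every agent uniformly.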