Requirements Gathering #1

@hinerm

Prior efforts: scijava/script-editor#70, https://github.com/ctrueden/imagej2-assistant/tree/main

Safety & Guardrails

  • Content filtering for generated code
  • Rate limiting and API cost indicators (see the sketch after this list)
  • Limits on conversation history and context/instruction size
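
A client-side guard for the rate/cost item could be as simple as a fixed-window request counter plus a running token total the UI can display. A minimal sketch, where the class name and flat per-token pricing are placeholders not tied to any provider:

```java
import java.time.Duration;
import java.time.Instant;

/**
 * Minimal sketch of a client-side guardrail: caps the request rate and keeps a
 * running cost estimate so the chat UI can surface it. The flat pricing model
 * is a placeholder, not any provider's actual rates.
 */
public class LlmUsageGuard {

    private final int maxRequestsPerMinute;
    private final double costPerThousandTokens; // assumed flat rate, for illustration only

    private Instant windowStart = Instant.now();
    private int requestsInWindow = 0;
    private long totalTokens = 0;

    public LlmUsageGuard(int maxRequestsPerMinute, double costPerThousandTokens) {
        this.maxRequestsPerMinute = maxRequestsPerMinute;
        this.costPerThousandTokens = costPerThousandTokens;
    }

    /** Returns false if the caller should back off instead of sending another request. */
    public synchronized boolean tryAcquire() {
        Instant now = Instant.now();
        if (Duration.between(windowStart, now).toMinutes() >= 1) {
            windowStart = now;
            requestsInWindow = 0;
        }
        if (requestsInWindow >= maxRequestsPerMinute) return false;
        requestsInWindow++;
        return true;
    }

    /** Record the token count reported by the provider after each response. */
    public synchronized void recordUsage(long tokens) {
        totalTokens += tokens;
    }

    /** Rough cost estimate the chat window can display alongside the conversation. */
    public synchronized double estimatedCost() {
        return totalTokens / 1000.0 * costPerThousandTokens;
    }
}
```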

User Experience

  • Onboarding: First-time user tutorial/prompts for common tasks
  • Persona customization
  • Templates: Pre-built conversation starters ("Help me segment cells", "Create a batch processing workflow")
  • Explanations: LLM should provide targeted explanations of generated code
  • Do we want to support multi-step responses? I feel like creating outlines and doing things step-by-step could perhaps be decoupled (e.g., Gemini handles this poorly)
  • Claude Code, Copilot, etc. can generate workspace instruction files based on what is imported in the code. This is an area to explore.

API Keys

  • API keys need to be encrypted, ideally with a single password or platform unlock mechanism (see the sketch after this list)
  • Offer at least Anthropic, OpenAI, and Gemini
  • API key entry should link to the pages where those keys can be obtained
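
One possible shape for the encryption requirement, sketched with standard JDK crypto (PBKDF2 + AES-GCM) and java.util.prefs storage. The preferences node, iteration count, and class name are illustrative assumptions; a platform keychain would likely be preferable where the OS provides one.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import java.util.prefs.Preferences;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

/**
 * Sketch of password-protected API key storage: each key is encrypted with
 * AES-GCM under a key derived from a user passphrase (PBKDF2) and kept in
 * Java Preferences. Node path and parameters are illustrative only.
 */
public class ApiKeyStore {

    private static final Preferences PREFS =
        Preferences.userRoot().node("fiji/llm-assistant"); // hypothetical node

    private static final SecureRandom RANDOM = new SecureRandom();

    public static void store(String provider, char[] apiKey, char[] passphrase) throws Exception {
        byte[] salt = new byte[16];
        byte[] iv = new byte[12];
        RANDOM.nextBytes(salt);
        RANDOM.nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, deriveKey(passphrase, salt), new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(new String(apiKey).getBytes(StandardCharsets.UTF_8));

        PREFS.put(provider + ".salt", Base64.getEncoder().encodeToString(salt));
        PREFS.put(provider + ".iv", Base64.getEncoder().encodeToString(iv));
        PREFS.put(provider + ".key", Base64.getEncoder().encodeToString(ciphertext));
    }

    public static String load(String provider, char[] passphrase) throws Exception {
        byte[] salt = Base64.getDecoder().decode(PREFS.get(provider + ".salt", ""));
        byte[] iv = Base64.getDecoder().decode(PREFS.get(provider + ".iv", ""));
        byte[] ciphertext = Base64.getDecoder().decode(PREFS.get(provider + ".key", ""));

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, deriveKey(passphrase, salt), new GCMParameterSpec(128, iv));
        return new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8);
    }

    private static SecretKey deriveKey(char[] passphrase, byte[] salt) throws Exception {
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        PBEKeySpec spec = new PBEKeySpec(passphrase, salt, 120_000, 256);
        return new SecretKeySpec(factory.generateSecret(spec).getEncoded(), "AES");
    }
}
```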

Model selection

Context selection

  • I personally like Gemini's and VS Code/Copilot's concept of building context: provide UI widgets to add elements to the LLM context while chatting
  • What does the LLM have access to: image metadata? All commands loaded in Fiji, but not all open windows/images unless explicitly selected for context? Do we scrape the plugin index, READMEs, etc., or just generate instruction files?
  • Access to the SciJava context to gather which plugins are available? (see the sketch after this list)
  • Runtime inspection of cached method signatures? Classes?
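
For the SciJava-context item, the existing CommandService already knows every command loaded in Fiji, so a context harvester could be quite small. A sketch under that assumption; the summary format and class name are made up, while the services and accessors are standard SciJava API:

```java
import java.util.stream.Collectors;
import org.scijava.Context;
import org.scijava.command.CommandInfo;
import org.scijava.command.CommandService;

/**
 * Sketch of harvesting LLM context from the running SciJava Context:
 * list the commands Fiji has loaded so the model knows what it can call.
 */
public class CommandContextBuilder {

    /** Builds a plain-text inventory of available commands for the system prompt. */
    public static String summarizeCommands(Context context) {
        CommandService commands = context.service(CommandService.class);
        return commands.getCommands().stream()
            .map(CommandContextBuilder::describe)
            .collect(Collectors.joining("\n"));
    }

    private static String describe(CommandInfo info) {
        String menu = info.getMenuPath().getMenuString();
        return "- " + info.getTitle()
            + (menu.isEmpty() ? "" : " (menu: " + menu + ")")
            + " [" + info.getDelegateClassName() + "]";
    }
}
```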

Data Privacy

  • Warn users if image data will be sent to the API (see the sketch after this list)
  • PHI/PII handling: Healthcare/sensitive data considerations
  • Opt in to collecting any context at all
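
A minimal sketch of the opt-in and warning ideas, assuming a hypothetical preferences node and dialog wording; nothing here is an existing Fiji API:

```java
import java.util.prefs.Preferences;
import javax.swing.JOptionPane;

/**
 * Sketch of an opt-in gate: nothing leaves the machine unless the user has
 * explicitly enabled context sharing, and pixel data always triggers an
 * extra confirmation. Preference keys and wording are placeholders.
 */
public class PrivacyGate {

    private static final Preferences PREFS =
        Preferences.userRoot().node("fiji/llm-assistant/privacy"); // hypothetical node

    /** Context collection is off by default; users must opt in. */
    public static boolean contextSharingEnabled() {
        return PREFS.getBoolean("shareContext", false);
    }

    /** Ask before any image data is attached to a request. */
    public static boolean confirmImageUpload(String imageName) {
        int choice = JOptionPane.showConfirmDialog(null,
            "\"" + imageName + "\" (including pixel data and metadata) will be sent to the "
                + "selected LLM provider. Continue?",
            "Send image data?", JOptionPane.YES_NO_OPTION, JOptionPane.WARNING_MESSAGE);
        return choice == JOptionPane.YES_OPTION;
    }
}
```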

Technical Architecture

  • Streaming responses: don't block the UI while waiting for the LLM (see the sketch after this list)
  • What happens when the API is down, unreachable, or out of date?
  • Update mechanism: How to update rulesets/prompts without new releases? Separate from persona?
  • Separation of concerns: GUI elements should be separated from LLM management
  • Custom plugin documentation: Allow plugin developers to provide LLM context
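
For the streaming item, a standard SwingWorker keeps the LLM call off the event dispatch thread and hands chunks back to the UI as they arrive. The LlmClient interface below is a hypothetical provider abstraction; SwingWorker and its publish/process mechanism are plain Swing:

```java
import java.util.List;
import java.util.function.Consumer;
import javax.swing.JTextArea;
import javax.swing.SwingWorker;

/**
 * Sketch of streaming a response without blocking the Swing EDT: the LLM call
 * runs on a worker thread and publishes chunks as they arrive.
 */
public class StreamingResponseWorker extends SwingWorker<Void, String> {

    /** Hypothetical provider abstraction that delivers response chunks via callback. */
    public interface LlmClient {
        void streamCompletion(String prompt, Consumer<String> onChunk) throws Exception;
    }

    private final LlmClient client;
    private final String prompt;
    private final JTextArea output;

    public StreamingResponseWorker(LlmClient client, String prompt, JTextArea output) {
        this.client = client;
        this.prompt = prompt;
        this.output = output;
    }

    @Override
    protected Void doInBackground() throws Exception {
        // Runs off the EDT; publish() hands chunks to process() on the EDT.
        client.streamCompletion(prompt, this::publish);
        return null;
    }

    @Override
    protected void process(List<String> chunks) {
        // Runs on the EDT; safe to touch Swing components here.
        chunks.forEach(output::append);
    }
}
```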

Chat interaction

  • Need a dedicated chat window: probably not the search bar, but a new window that preserves history (see the sketch after this list)
  • Debugging assistance: Not just runtime errors, but logic errors too
  • Code review: LLM critiques user scripts for best practices
  • I wonder how much we can link agentic actions to history? Being able to undo, redo, and go back in time would be awesome, but it probably requires fundamental changes to scijava/imagej 🙃
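
A bare-bones sketch of what the dedicated window could look like, using plain Swing; the backend wiring is deliberately left as a placeholder:

```java
import java.awt.BorderLayout;
import javax.swing.JFrame;
import javax.swing.JScrollPane;
import javax.swing.JTextArea;
import javax.swing.JTextField;
import javax.swing.SwingUtilities;

/**
 * Sketch of a dedicated chat window separate from the search bar: a scrolling
 * transcript plus an input field. LLM wiring is omitted.
 */
public class ChatWindow extends JFrame {

    private final JTextArea transcript = new JTextArea();
    private final JTextField input = new JTextField();

    public ChatWindow() {
        super("Fiji Assistant");
        transcript.setEditable(false);
        transcript.setLineWrap(true);
        transcript.setWrapStyleWord(true);

        input.addActionListener(e -> {
            String message = input.getText().trim();
            if (!message.isEmpty()) {
                appendMessage("You", message);
                input.setText("");
                // TODO: hand the message to the LLM backend and stream the reply back.
            }
        });

        add(new JScrollPane(transcript), BorderLayout.CENTER);
        add(input, BorderLayout.SOUTH);
        setSize(480, 600);
        setDefaultCloseOperation(DISPOSE_ON_CLOSE);
    }

    /** Appends one turn to the persistent transcript. */
    public void appendMessage(String sender, String text) {
        transcript.append(sender + ": " + text + "\n\n");
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> new ChatWindow().setVisible(true));
    }
}
```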

Script integration

  • The LLM needs to be able to open the script editor and generate scripts in response to chat prompts
  • Needs an awareness of script languages
  • Further, I love how Gemini in Colab is aware of execution failures; users should have an easy mechanism for feeding their script runtime failures back to the LLM (see the sketch after this list)
  • It would be nice to edit parts of a script
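
For the failure-feedback item, the standard ScriptService can run generated script text, and any resulting error can be packaged for the next chat turn. ScriptService#run is real SciJava API; the class name, feedback format, and simplified error handling are assumptions:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import org.scijava.Context;
import org.scijava.script.ScriptModule;
import org.scijava.script.ScriptService;

/**
 * Sketch of the run-and-report-back loop: execute an LLM-generated script via
 * the SciJava ScriptService and, on failure, summarize the error so it can be
 * appended to the chat as feedback for the model.
 */
public class ScriptFeedbackRunner {

    private final ScriptService scripts;

    public ScriptFeedbackRunner(Context context) {
        this.scripts = context.service(ScriptService.class);
    }

    /**
     * Runs the script (language inferred from the fake path's extension) and
     * returns null on success, or an error summary suitable for the LLM.
     */
    public String runAndCollectError(String fakePath, String scriptText) {
        try {
            Future<ScriptModule> future = scripts.run(fakePath, scriptText, true);
            future.get(); // blocks; call off the EDT in real use
            return null;
        } catch (ExecutionException e) {
            return formatFeedback(fakePath, e.getCause());
        } catch (Exception e) {
            // Error propagation details depend on the module framework; simplified here.
            return formatFeedback(fakePath, e);
        }
    }

    private String formatFeedback(String path, Throwable t) {
        return "The script " + path + " failed with: " + t + "\n"
            + "Please revise the script to address this error.";
    }
}
```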
