Find your Claude Code level (0-10) and get a personalized roadmap to the next one. A skill for Claude Code by the GenAI Circle community.
AI Agent as a Pinix Clip — agentic loop with memory, tools, and vision
🐈 nanobot: The Ultra-Lightweight OpenClaw
A lightweight inference engine supporting speculative decoding.
SPECTRE is an agentic coding workflow - /Scope, /Plan, /Execute, /Clean, /Test, /Rebase, /Evaluate - that uses a simple step-by-step product development workflow to generate high-quality results from…
A text-grid web renderer for AI agents — see the web without screenshots
The professional development environment for Claude Code. Tests enforced, context preserved, quality automated.
OpenClaw in 815 lines: a personal AI assistant where every capability is a Markdown file.
Mobile and web client for Codex and Claude Code, with realtime voice, encryption, and a full feature set.
Z.E.T.A. Zero: Cognitive Construct & Persistent Memory for Local LLMs
Evaluate OMR sheets fast and accurately using a scanner 🖨 or your phone 🤳.
A rich terminal UI for GitHub that doesn't break your flow.
An agentic skills framework & software development methodology that works.
malik-na / omarchy-mac
Forked from basecamp/omarchy. Opinionated Arch/Hyprland setup for Apple Silicon Macs (M1/M2).
Repo for Morgan Stanley Machine Learning Research group's publications
Automation engine to build, test and ship any codebase. Runs locally, in CI, or directly in the cloud
Run Claude Code, Gemini, Codex — or any coding agent — in a clean, isolated sandbox with sensitive data redaction and observability baked in.
Development environments for coding agents. Enable multiple agents to work safely and independently with your preferred stack.
FULL Augment Code, Claude Code, Cluely, CodeBuddy, Comet, Cursor, Devin AI, Junie, Kiro, Leap.new, Lovable, Manus, NotionAI, Orchids.app, Perplexity, Poke, Qoder, Replit, Same.dev, Trae, Traycer AI…
TagSpaces is an offline, open-source document manager with tagging support.
[NeurIPS'25 Oral] Query-agnostic KV cache eviction: 3–4× reduction in memory and 2× decrease in latency (Qwen3/2.5, Gemma3, LLaMA3)
Fast parallel LLM inference for MLX
CLIP+MLP Aesthetic Score Predictor
Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling
What are the principles we can use to build LLM-powered software that is actually good enough to put in the hands of production customers?
A TTS model capable of generating ultra-realistic dialogue in one pass.