
[AI Dev Tools] All Popular GitHub Projects for LLMs in Software Engineering


This post was compiled by collecting GitHub projects related to the use of Large Language Models in software engineering, each with 500 or more stars. An LLM categorized the projects and generated their descriptions.

screenshot-to-code: Convert Screenshots and Designs into Functional Code

A tool that transforms screenshots, mockups, and Figma designs into clean, functional code using LLMs. It supports multiple frameworks and offers experimental video-to-code functionality.

Key Features:
  • Converts screenshots, mockups, and Figma designs into code for various stacks, including HTML + Tailwind, React + Tailwind, Vue + Tailwind, and more.
  • Supports multiple LLMs for code generation, including Claude 3.5 Sonnet and GPT-4o, and uses DALL-E 3 or Flux Schnell for image generation.
  • Experimental feature to convert video or screen recordings of websites into functional prototypes.
  • Includes a React/Vite frontend and FastAPI backend, with options to run locally or via Docker.
  • For example, you can recreate a New York Times webpage or an Instagram page design as working code.
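A hedged sketch of running the app with Docker Compose, assuming an OpenAI key and the compose file shipped in the repository (port and env var details may differ from the current README):
```sh
# clone the repo and provide your API key via a .env file
git clone https://github.com/abi/screenshot-to-code.git
cd screenshot-to-code
echo "OPENAI_API_KEY=sk-your-key" > .env

# build and start the FastAPI backend and React/Vite frontend
docker-compose up -d --build
# then open the frontend in your browser on the port documented in the README
```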
Stars: 67236 ⭐
Source: https://github.com/abi/screenshot-to-code

GPT-Engineer: Code Generation and Improvement Platform

An experimental platform that enables software specification in natural language and automated code generation through LLMs.

Key Features:
  • Generates code based on natural language specifications provided in a 'prompt' file, with real-time execution and feedback
  • Improves existing codebases through an interactive mode that implements requested changes based on natural language instructions
  • Supports image inputs for vision-capable models, allowing UX and architecture diagrams as additional context
  • Includes a benchmarking tool for testing custom agent implementations against popular datasets like APPS and MBPP
  • Offers customizable agent behavior through editable preprompts that persist between projects
  • Example use: The platform can create a new project from scratch by reading instructions from a prompt file and automatically generating and executing the corresponding code
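A rough sketch of the prompt-file workflow described above; the entry point has changed between releases (`gpt-engineer` vs. `gpte`), so treat the exact commands as assumptions to verify against the README:
```sh
python -m pip install gpt-engineer
export OPENAI_API_KEY=sk-your-key

# the natural-language specification lives in a file named 'prompt'
mkdir -p projects/todo-cli
echo "A small Python CLI for managing a todo list stored in JSON" > projects/todo-cli/prompt

# generate, execute, and iterate on the code for that project
gpte projects/todo-cli
```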
Stars: 52923 ⭐
Source: https://github.com/AntonOsika/gpt-engineer

OpenHands: AI-Powered Software Development Agents

OpenHands is a platform for AI-driven software development agents that can perform tasks like modifying code, running commands, browsing the web, and calling APIs, mimicking human developer capabilities.

Key Features:
  • Agents can execute a wide range of development tasks, including code modification, command execution, web browsing, and API interactions.
  • Supports integration with local filesystems, headless mode for scripting, and a CLI for flexible usage.
  • Designed for single-user local workstation use, with advanced deployment options available for multi-tenant environments upon request.
  • Includes a GitHub Action for running on tagged issues, enabling automated workflows.
  • For example, you can use OpenHands to automate code modifications or execute commands on remote systems via an SSH agent.
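OpenHands is typically launched through Docker. The sketch below is illustrative only, since image names and version tags change between releases; copy the exact `docker run` command from the repository README:
```sh
# illustrative only: run the OpenHands web UI on http://localhost:3000
docker run -it --rm --pull=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 3000:3000 \
  --name openhands-app \
  docker.all-hands.dev/all-hands-ai/openhands:latest  # assumption: use the release tag from the README
# the README also configures a sandbox runtime image via an environment variable
```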
Stars: 44467 ⭐
Source: https://github.com/All-Hands-AI/OpenHands

GPT Pilot: AI-Driven App Development Assistant

A development tool that collaborates with developers to create production-ready applications, handling approximately 95% of code generation while requiring human oversight for the remaining portion.

Key Features:
  • Implements a step-by-step development process using multiple specialized agents (Product Owner, Architect, Tech Lead, Developer, etc.) to handle different aspects of app creation
  • Scales to handle complex applications by intelligently filtering relevant code for each task rather than processing the entire codebase at once
  • Maintains interactive debugging and troubleshooting capabilities throughout the development process
  • Generates code incrementally with developer oversight, allowing for immediate bug fixing and adjustments
  • Provides project continuation support for adding new features to completed applications
  • Example use: The tool includes a feature for importing existing projects from v0.1, preserving previous work and settings
  • Example use: Can build various applications including web servers, databases, and user interfaces, with examples available in the project wiki
Stars: 32243 ⭐
Source: https://github.com/Pythagora-io/gpt-pilot

Tabby: Self-hosted AI Coding Assistant

An open-source alternative to GitHub Copilot that runs locally, providing code completion and assistance without relying on external cloud services.

Key Features:
  • Self-contained deployment without requiring database management systems or cloud services
  • OpenAPI interface for seamless integration with existing development environments like Cloud IDEs
  • Runs on consumer-grade GPUs and supports Apple M1/M2 Metal inference
  • Repository-aware code completion using RAG technology to understand project context
  • Built-in administrative interface with team management and usage analytics
  • Integrates with GitHub, GitLab, and various IDE extensions including VSCode, Vim, and IntelliJ
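Tabby is usually started as a single Docker container that serves the completion API; a hedged example (the model name and flags are assumptions to verify against the Tabby docs):
```sh
docker run -it --gpus all \
  -p 8080:8080 -v $HOME/.tabby:/data \
  tabbyml/tabby serve --model StarCoder-1B --device cuda
# then point the VSCode, Vim, or IntelliJ extension at http://localhost:8080
```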
Stars: 28985 ⭐
Source: https://github.com/TabbyML/tabby

Crawl4AI: LLM-Optimized Web Crawler & Scraper

An open-source web crawler designed to generate clean, structured markdown and extract data optimized for LLMs, featuring blazing-fast performance and flexible deployment options.

Key Features:
  • Generates clean, structured markdown with accurate formatting and citations, using heuristic-based filtering and BM25 algorithms to remove noise
  • Extracts structured data using schema-based or CSS selectors, with support for dynamic content and JavaScript execution
  • Advanced browser control with session management, proxy support, and stealth mode to avoid bot detection
  • Comprehensive media support including images, audio, videos and handling of lazy-loaded content
  • Memory-adaptive dispatcher system for scaling to thousands of URLs with intelligent monitoring and rate limiting
  • Deployment ready with Docker support and API gateway for production environments
  • Example use: Extract pricing data from OpenAI's website using the LLM extraction strategy to automatically parse model names and fees into structured format
  • Example use: Generate clean markdown from documentation pages using content filtering strategies to remove irrelevant information while preserving key content
Stars: 27136 ⭐
Source: https://github.com/unclecode/crawl4ai

Aider: Terminal-Based LLM Pair Programming

A command-line tool that enables pair programming with LLMs to edit code in local git repositories, supporting both new projects and existing codebases.

Key Features:
  • Edits code directly in response to natural language requests like adding features, fixing bugs, or refactoring code
  • Automatically commits changes to git with appropriate commit messages
  • Works with multiple files simultaneously and maintains a map of the entire git repository for better context
  • Integrates with popular editors and IDEs while tracking real-time file changes
  • Supports voice coding, image sharing, and URL content interpretation
  • Compatible with most popular programming languages including Python, JavaScript, TypeScript, PHP, HTML, and CSS
  • Achieves top performance scores on SWE-bench, successfully solving real GitHub issues from major open-source projects
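A minimal hedged session inside an existing git repository (package name and invocation reflect the aider docs at the time of writing; verify against the current README):
```sh
python -m pip install aider-chat
export OPENAI_API_KEY=sk-your-key

cd /path/to/your/repo
# start a pair-programming session scoped to specific files;
# aider edits them in place and commits the changes to git
aider src/app.py tests/test_app.py
```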
Stars: 25487 ⭐
Source: https://github.com/Aider-AI/aider

Devika: Autonomous LLM-Powered Software Engineer

An advanced software development assistant that understands human instructions, breaks down tasks, researches information, and writes code to achieve specified objectives. The project aims to be an open-source alternative to Devin by Cognition AI.

Key Features:
  • Employs advanced planning and reasoning capabilities to break down complex coding tasks into manageable steps
  • Performs contextual keyword extraction and web browsing to gather relevant information for development tasks
  • Generates code in multiple programming languages while maintaining project organization and management
  • Offers natural language interaction through a chat interface with dynamic agent state tracking and visualization
  • Integrates with Ollama for running local LLMs alongside cloud-based options such as Claude 3, GPT-4, Gemini, Mistral, and Groq
  • Uses a modular architecture allowing for feature and integration extensions
  • Example use: Creates new features, fixes bugs, or develops entire projects from scratch with minimal human guidance
Stars: 18813 ⭐
Source: https://github.com/stitionai/devika

Plandex: Terminal-Based LLM Development Assistant

A terminal-based coding agent that helps developers build features and applications by planning and implementing complex tasks across multiple files.

Key Features:
  • Executes changes in a protected sandbox environment where developers can review modifications before applying them to project files. Includes version control with branching capabilities for trying different approaches.
  • Manages project context through terminal commands, allowing developers to add individual files, directories, or glob patterns. Automatically keeps context updated with latest project state.
  • Supports Mac, Linux, FreeBSD, and Windows through a single binary with no dependencies. Windows implementation requires WSL.
  • Facilitates development tasks like building new applications, adding features to existing codebases, writing tests, understanding code, and debugging.
  • Handles token management and context control while enabling background task execution and parallel work streams.
  • Example use: Add line charts to existing components by loading relevant files and sending a prompt like "add a new line chart showing the number of foobars over time to components/charts.tsx" (see the command sketch below)
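The example above, expressed as a hedged sequence of terminal commands; the subcommand names (`new`, `load`, `tell`, `diff`, `apply`) and file names are assumptions to check against `plandex help`:
```sh
cd my-frontend-app
plandex new                                   # start a new plan in this project

# add the relevant files to the plan's context
plandex load components/charts.tsx lib/data.ts

# describe the change; plandex plans and writes the edits into its sandbox
plandex tell "add a new line chart showing the number of foobars over time to components/charts.tsx"

# review the pending changes, then apply them to the real project files
plandex diff
plandex apply
```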
Stars: 11002 ⭐
Source: https://github.com/plandex-ai/plandex

ShellGPT: Command-line Tool for Shell Commands and Code Generation using LLMs

A command-line tool that generates shell commands, code snippets, and documentation using LLMs, eliminating the need for external searches. Supports multiple operating systems and shell environments.

Key Features:
  • Generates and executes shell commands with contextual awareness of the operating system and shell environment
  • Interactive mode allows reviewing, executing, or describing generated commands before running them
  • Creates code snippets with dedicated coding mode for clean output
  • Maintains conversation history through chat sessions and REPL mode for iterative interactions
  • Function calling capability allows LLMs to execute system commands and custom functions
  • Shell integration enables direct command suggestions through keyboard shortcuts
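A few hedged invocations of the `sgpt` command (flag names are taken from the project README as of writing; confirm with `sgpt --help`):
```sh
pip install shell-gpt

# plain prompt
sgpt "explain the difference between a hard link and a soft link"

# generate an OS/shell-aware command, then choose [E]xecute, [D]escribe or [A]bort
sgpt --shell "find all JSON files modified in the last 24 hours"

# code-only output, suitable for redirecting into a file
sgpt --code "binary search implementation in python" > search.py
```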
Stars: 10163 ⭐
Source: https://github.com/TheR1D/shell_gpt

Avante.nvim: Neovim IDE Enhancement with LLM Integration

A Neovim plugin that provides LLM-powered code assistance and suggestions with direct code modification capabilities, inspired by the Cursor AI IDE.

Key Features:
  • Integrates with Claude and OpenAI APIs to provide intelligent code analysis and suggestions
  • Offers a sidebar interface for managing AI interactions and code modifications
  • Supports both full file and selected code block analysis
  • Provides customizable key bindings and commands for efficient workflow integration
  • Features template-based custom prompts through .avanterules files
  • Includes clipboard image support and project-wide codebase analysis capabilities
Stars: 9083 ⭐
Source: https://github.com/yetone/avante.nvim

AI Commits: Automated Git Commit Messages with LLMs

AI Commits is a CLI tool that automatically generates Git commit messages using LLMs, so developers no longer need to write them by hand.

Key Features:
  • Generates commit messages by analyzing code changes using `git diff` and sending them to an LLM for processing.
  • Supports generating multiple commit message recommendations for better selection.
  • Can produce commit messages following the Conventional Commits specification for projects adhering to this standard.
  • Integrates with Git via the `prepare-commit-msg` hook, allowing seamless use within existing Git workflows.
  • Configurable options include API key, locale, number of generated messages, proxy settings, model selection, timeout, and maximum message length.
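A hedged example of wiring it into a day-to-day workflow (the `--generate` flag and the `type` config key are assumptions based on the README; verify with `aicommits --help`):
```sh
npm install -g aicommits
aicommits config set OPENAI_KEY=sk-your-key

git add .
aicommits                                # sends the staged diff to the LLM and proposes a commit message
aicommits --generate 3                   # assumption: ask for three candidate messages to pick from
aicommits config set type=conventional   # assumption: emit Conventional Commits style messages
```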
Stars: 8178 ⭐
Source: https://github.com/Nutlope/aicommits

PentestGPT: LLM-Powered Penetration Testing Tool

PentestGPT is a penetration testing tool that leverages LLMs to automate and guide the penetration testing process. It operates interactively, helping testers with both overall progress and specific operations.

Key Features:
  • Automates penetration testing by guiding users through the process, providing step-by-step instructions and insights.
  • Designed to solve easy to medium HackTheBox machines and CTF challenges, with examples provided in the documentation.
  • Built on top of ChatGPT, it maintains session state so that context is not lost during deeper testing phases.
  • Supports GPT-4 for optimal performance, though GPT-3.5 and local LLMs can also be used with custom configurations.
  • Includes interactive commands like `next`, `more`, `todo`, and `discuss` to navigate and manage the testing process.
  • Generates automated reports and logs for completed tests, which can be printed in a human-readable format.
Installation and Usage:
  • Install via pip: `pip3 install git+https://github.com/GreyDGL/PentestGPT`.
  • Requires an OpenAI API key with GPT-4 access for optimal performance.
  • Start the tool with `pentestgpt --reasoning_model=gpt-4-turbo` and follow the interactive prompts.
Examples:
  • For example, PentestGPT can solve HackTheBox challenges like TEMPLATED, demonstrating its ability to handle web-based vulnerabilities.
  • Another example includes testing on VulnHub machines, such as Hackable II, with a detailed process documented in the resources.
Stars: 7658 ⭐
Source: https://github.com/GreyDGL/PentestGPT

GPT-Migrate: Codebase Migration Across Frameworks and Languages

GPT-Migrate automates the migration of codebases from one framework or language to another using LLMs, reducing the complexity and cost of manual migration.

Key Features:
  • Automates the migration process by creating a Docker environment for the target language, identifying dependencies, and rebuilding the codebase.
  • Supports iterative debugging, unit test creation, and validation against both the original and migrated codebases.
  • Customizable options for source and target languages, directories, entry points, and stylistic guidelines.
  • Uses a hierarchical prompt design to guide the LLM in generating, debugging, and testing code.
  • Provides a benchmark repository for testing and improving migration accuracy across various languages and frameworks.
Stars: 6904 ⭐
Source: https://github.com/joshpxyne/gpt-migrate

PR-Agent: AI-Powered Pull Request Review and Management

PR-Agent is a tool designed to streamline pull request (PR) reviews and management by providing AI-generated feedback, suggestions, and automation for GitHub, GitLab, Bitbucket, and Azure DevOps repositories.

Key Features:
  • Automatically generates PR descriptions, reviews, and code improvement suggestions using LLMs.
  • Supports multiple commands like `/describe`, `/review`, `/improve`, `/ask`, and `/update_changelog` for various PR-related tasks.
  • Includes advanced features like static code analysis, custom labels, test generation, and CI feedback in the Pro version (Qodo Merge).
  • Offers a PR compression strategy to handle both short and long PRs efficiently, ensuring quick and affordable responses.
  • Provides multiple usage options, including CLI, GitHub Actions, Docker, and hosted solutions (Qodo Merge).
Example Use Cases:
  • For example, you can use `/review` to get detailed feedback on a PR, including potential issues, security concerns, and review effort.
  • Another example is using `/improve` to receive actionable code suggestions for enhancing the quality of your PR.
Stars: 6553 ⭐
Source: https://github.com/qodo-ai/pr-agent

OpenCommit: Auto-Generate Meaningful Git Commit Messages

OpenCommit is a CLI tool that generates meaningful and descriptive Git commit messages using LLMs, streamlining the commit process and improving code documentation.

Key Features:
  • Automatically generates commit messages for staged changes, saving time and ensuring clarity in version control.
  • Supports customization of commit messages, including emojis, language, and message templates, to align with project conventions.
  • Can be integrated as a Git `prepare-commit-msg` hook, allowing seamless use within IDEs and source control workflows.
  • Offers a GitHub Action to automatically improve commit messages for all new commits pushed to remote repositories.
  • Configurable to work with local models like Ollama or other LLM providers, providing flexibility in deployment and cost management.
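A minimal hedged usage sketch; the `oco` binary and the `OCO_API_KEY` config key reflect the project's README at the time of writing, so double-check against the current docs:
```sh
npm install -g opencommit
oco config set OCO_API_KEY=sk-your-key   # or point it at a local Ollama model instead

git add .
oco   # generates a commit message for the staged changes and asks for confirmation
```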
Stars: 6314 ⭐
Source: https://github.com/di-sukharev/opencommit

LLM: Command-Line Interface for Language Models

A CLI utility and Python library for interacting with LLMs through both remote APIs and locally installed models, with features for prompt execution, result storage, and embedding generation.

Key Features:
  • Executes prompts directly from the command line and stores results in SQLite databases
  • Offers local model hosting capabilities, reducing dependency on remote APIs
  • Generates and manages embeddings for advanced text processing
  • Provides system prompts for customized model instructions
  • Integrates with various plugins to extend functionality and model access
  • Includes a chat interface for interactive conversations with models
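A few hedged examples of the CLI (subcommands follow the project documentation; the plugin name is real, the exact log options are worth confirming with `llm --help`):
```sh
pip install llm
llm keys set openai                      # store your API key once

llm "Five creative names for a pet pelican"                      # run a prompt against the default model
llm -s "You are a terse SQL tutor" "explain window functions"    # -s sets a system prompt

llm install llm-gpt4all                  # plugin that adds local models
llm logs                                 # browse prompts/responses stored in the SQLite database
```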
Stars: 5694 ⭐
Source: https://github.com/simonw/llm

SQL Chat: Natural Language SQL Client

A chat-based SQL client that enables database operations through natural language interactions, built with Next.js.

Key Features:
  • Communicates with databases using natural language for queries, modifications, additions, and deletions
  • Supports multiple database systems including MySQL, PostgreSQL, MSSQL, and TiDB Cloud
  • Provides an intuitive chat interface instead of traditional UI controls for database operations
  • Available as a hosted service at sqlchat.ai or as a self-hosted solution via Docker
  • Implements data privacy protections and IP whitelisting options for secure database connections
  • Example use: Access your database on the same host using 'host.docker.internal' as the connection setting when running in Docker
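A hedged self-hosting example with Docker (image name, port, and env var are recalled from the project README; verify before use):
```sh
docker run --name sqlchat --platform linux/amd64 \
  --env NEXTAUTH_SECRET=$(openssl rand -hex 8) \   # assumption: secret required by the Next.js auth layer
  -p 3000:3000 sqlchat/sqlchat
# open http://localhost:3000 and, for a database on the same host,
# connect using host.docker.internal as the host name (as noted above)
```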
Stars: 4801 ⭐
Source: https://github.com/sqlchat/sqlchat

Qodo Cover: Automated Test Coverage Enhancement with LLMs

A tool that automatically generates qualified tests to increase code coverage, available both as a GitHub CI workflow and a local CLI tool.

Key Features:
  • Generates unit tests for multiple programming languages using LLMs, focusing on increasing code coverage
  • Includes a test runner to execute test suites and generate coverage reports, along with a coverage parser to validate improvements
  • Can scan entire repositories to identify test files and automatically collect context for test generation
  • Integrates with various LLM providers through LiteLLM, supporting over 100 different models
  • Available as both a Python pip package and standalone binary executable
  • Supports multiple programming environments including Python, Go, and Java, with specific coverage tools for each
  • Example use: Generate tests for a Python FastAPI application by specifying source files, test paths, and desired coverage percentage
  • Example use: Create tests for Java applications using Gradle and JaCoCo for coverage reporting
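A hedged sketch of raising coverage for a single Python module; the `cover-agent` binary and flag names are recalled from the project README and may have changed, so treat them as assumptions:
```sh
# install per the repository README (pip package or standalone binary), then:
cover-agent \
  --source-file-path "app/calculator.py" \
  --test-file-path "tests/test_calculator.py" \
  --code-coverage-report-path "coverage.xml" \
  --test-command "pytest --cov=app --cov-report=xml" \
  --coverage-type "cobertura" \
  --desired-coverage 80 \
  --max-iterations 5
```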
Stars: 4740 ⭐
Source: https://github.com/qodo-ai/qodo-cover

AI Shell: Natural Language to Shell Command Converter

An open-source CLI tool that translates natural language into executable shell commands, inspired by GitHub Copilot X CLI.

Key Features:
  • Converts natural language descriptions into precise shell commands with explanations of what each command does
  • Includes an interactive chat mode for engaging in conversations about technical topics and receiving code examples
  • Supports multiple languages including English, Chinese, Spanish, Japanese, and others through simple configuration
  • Offers a silent mode to skip command explanations for faster operation
  • Provides a visual config UI for managing settings like API endpoints and language preferences
  • Installation requires Node.js v14+ and an OpenAI API key, with setup done through npm installation and basic configuration
  • Example use: "ai list all log files" generates and explains the command `find . -name "*.log"`, with the option to execute, revise, or cancel
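A hedged setup and usage example (package and command names follow the README at the time of writing):
```sh
npm install -g @builder.io/ai-shell
ai config set OPENAI_KEY=sk-your-key

ai "list all log files"   # prints the command, explains it, and offers to run, revise, or cancel
ai chat                   # interactive chat mode for follow-up questions
```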
Stars: 4415 ⭐
Source: https://github.com/BuilderIO/ai-shell

SQL Translator: Natural Language to SQL Conversion Tool

A bidirectional translation tool that converts between SQL queries and natural language, enabling users without SQL expertise to interact with databases using everyday language.

Key Features:
  • Translates both from natural language to SQL and SQL to natural language, with a reverse button for quick switching between modes
  • Includes schema awareness functionality in beta, enabling context-aware translations
  • Features dark mode, case toggling, copy to clipboard functionality, and SQL syntax highlighting
  • Maintains query history for reference and reuse
  • Offers both local development setup and Docker deployment options through npm or Docker Compose
  • Runs on localhost:3000 after installation, requiring only an OpenAI API key to function
Stars: 4226 ⭐
Source: https://github.com/whoiskatrin/sql-translator

ChatGPT CodeReview Bot: Automated Code Review Using LLMs

A GitHub bot that automatically performs code reviews on pull requests using LLMs to provide feedback and insights.

Key Features:
  • Automatically reviews code when pull requests are created or updated, providing feedback in PR timeline and file changes sections.
  • Integrates with GitHub Actions for flexible deployment and configuration options.
  • Supports multiple installation methods including GitHub App installation or self-hosting.
  • Configurable review settings including language preferences, custom prompts, and file pattern filtering.
  • Provides rate limiting and patch length controls to manage API usage and performance.
  • Example use: https://github.com/anc95/ChatGPT-CodeReview/pull/21 shows automated code review comments and suggestions in a pull request timeline.
Stars: 4130 ⭐
Source: https://github.com/anc95/ChatGPT-CodeReview

AICommand: Natural Language Control for Unity Editor

A proof-of-concept integration enabling Unity Editor control through natural language prompts using the ChatGPT API.

Key Features:
  • Allows controlling Unity Editor functions through natural language commands
  • Requires Unity 2022.2 or later and an OpenAI API key for functionality
  • Settings are stored in UserSettings/AICommandSettings.asset, which should be excluded when sharing projects
  • Currently experimental and inconsistent - may require multiple attempts to execute commands correctly
  • Can be tested in other projects by copying the Assets/Editor directory
Stars: 4014 ⭐
Source: https://github.com/keijiro/AICommand

code2prompt: Codebase to LLM Prompt Converter

A command-line tool that converts codebases into formatted prompts for LLM interaction, featuring source tree generation, templating, and token counting capabilities.

Key Features:
  • Transforms entire codebases into well-formatted Markdown prompts with detailed source tree structure
  • Customizable prompt generation using Handlebars templates, with multiple pre-built templates for common use cases
  • Supports file filtering through glob patterns and respects .gitignore rules
  • Tracks token count using various tokenizers (cl100k, p50k, r50k_base) for different models
  • Integrates with Git to include diff output and generate commit messages or pull requests
  • Available as both a CLI tool and Python SDK for application integration
  • Example use: code2prompt path/to/codebase --diff -t templates/write-git-commit.hbs to generate Git commit messages from staged files
  • Example use: code2prompt path/to/codebase --tokens --encoding=p50k to generate a prompt with token count using p50k encoding
Stars: 3846 ⭐
Source: https://github.com/mufeedvh/code2prompt

Code Interpreter API: Python Implementation of ChatGPT Code Interpreter

A LangChain-based implementation that enables code execution in a sandboxed environment using CodeBox as the backend. It allows running Python code locally while using OpenAI's API for language processing.

Key Features:
  • Processes both text and file inputs, generating corresponding outputs through analysis, charting, and image manipulation capabilities
  • Features internet access and automatic Python package installation to enhance functionality
  • Maintains conversation memory to provide context-aware responses based on previous interactions
  • Offers CodeBox API integration for production deployment and scaling
  • Example use: Analyzing and plotting Bitcoin charts with simple code: "Plot the bitcoin chart of year 2023"
  • Example use: Processing datasets with file attachments, as demonstrated with the Iris dataset analysis that generates visualizations and insights
Stars: 3816 ⭐
Source: https://github.com/shroominic/codeinterpreter-api

Copilot for Obsidian: An Open-source LLM Interface for Note Management

A minimalistic LLM interface integrated directly into Obsidian, offering chat functionality, vault-wide questioning, and custom prompt capabilities while maintaining privacy through local data storage.

Key Features:
  • Chat interface embedded in Obsidian with support for API-based and local models through LM Studio or Ollama
  • Built-in commands for text manipulation including simplification, translation, grammar fixes, and tone changes
  • Local index-powered vault questioning system that provides cited responses without sending data to cloud services
  • Custom prompt creation and management stored locally as markdown files
  • Relevant Notes feature displays and ranks related content based on hybrid search and note graph similarity
  • Ability to save and load entire conversations to notes, with options for autosaving chat history
  • Copilot Plus mode (in alpha) provides advanced AI agent capabilities for personal knowledge management
Stars: 3641 ⭐
Source: https://github.com/logancyang/obsidian-copilot

Micro Agent: Automated Code Generation with Test-Driven Development

A specialized agent that generates and iteratively improves code until it passes predefined test cases. Unlike broader coding agents, it focuses specifically on writing tests and producing code that satisfies them.

Key Features:
  • Creates definitive test cases and iterates on code until all tests pass, maintaining a focused and controlled development process
  • Supports both unit test matching and experimental visual matching modes for code generation based on design specifications
  • Integrates with Visual Copilot for direct Figma connection, enabling precise component reuse and design token mapping
  • Installation requires Node.js v18+ and can be done via npm global installation
  • Example use: Run interactive mode with simple command "micro-agent" for guided code generation
  • Example use: Generate code for specific files with "micro-agent ./file-to-edit.ts -t 'npm test'" to automatically iterate until tests pass
Stars: 3626 ⭐
Source: https://github.com/BuilderIO/micro-agent

GPT-Code-UI: Python Code Generation and Execution Interface

An open-source implementation of OpenAI's ChatGPT Code Interpreter that enables users to generate and execute Python code through natural language prompts.

Key Features:
  • Generates and executes Python code based on natural language requests
  • Includes file upload and download capabilities for data handling
  • Maintains context awareness across conversations for coherent interactions
  • Runs code in a Python kernel with support for common data science libraries like NumPy, Pandas, and Matplotlib
  • Offers flexible configuration options through environment variables and Docker deployment
  • Can be installed with a simple pip command and launched via `gptcode` in the terminal, as shown below
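A minimal hedged launch, assuming an OpenAI key in the environment:
```sh
pip install gpt-code-ui
export OPENAI_API_KEY=sk-your-key
gptcode   # starts the local web UI for generating and executing Python code
```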
Stars: 3572 ⭐
Source: https://github.com/ricklamers/gpt-code-ui

Dataherald: Natural Language to SQL Query Engine

Dataherald enables natural language querying of relational databases through an API, allowing users to get insights from databases using plain English questions.

Key Features:
  • Transforms natural language questions into SQL queries for enterprise-level databases
  • Consists of four main components: core engine, enterprise API layer, admin console GUI, and Slackbot integration
  • Enables direct database querying for business users without requiring data analyst intervention
  • Supports integration with SaaS applications and can create ChatGPT plugins from proprietary data
  • Runs through Docker containers with customizable environment variables for each service
  • Open-source architecture allows for community contributions and continuous development
Stars: 3407 ⭐
Source: https://github.com/Dataherald/dataherald

Jupyter AI: LLM Integration for Jupyter Notebooks

An extension that integrates LLMs with Jupyter notebooks, offering both a magic command interface and a chat UI in JupyterLab to enhance productivity and enable interactive AI-powered workflows.

Key Features:
  • The `%%ai` magic command transforms notebooks into LLM playgrounds, compatible with any environment running IPython kernel
  • Native chat UI in JupyterLab provides a conversational assistant interface
  • Local model support through GPT4All and Ollama enables private model usage on consumer-grade machines
  • Supports HTML, math rendering, and IPython expression interpolation in generated outputs
  • Offers flexible installation options via pip or conda, with modular provider dependencies
  • Example use: Generate code snippets through natural language prompts in notebook cells
  • Example use: Create HTML and mathematical content that automatically renders in notebook outputs
Stars: 3362 ⭐
Source: https://github.com/jupyterlab/jupyter-ai

Devon: Open-source Pair Programming Assistant

A pair programming assistant that integrates with your development environment, offering code editing, exploration, and testing capabilities through both terminal and graphical interfaces.

Key Features:
  • Multi-file code editing capabilities, supporting simultaneous changes across multiple files
  • Codebase exploration tools to help understand and navigate project structure
  • Automated test writing and bug fixing functionality
  • Available through both terminal interface (TUI) and Electron-based graphical interface
  • Installation via pipx for the backend and npm for the interface components
  • File and code referencing system to maintain context during development sessions
  • Code indexing and context gathering abilities for improved navigation
  • Example use: Access the tool via terminal with "devon-tui" after setting up API keys and installing required components
  • Example use: Run in debug mode using "devon-tui --debug" for additional debugging information
Stars: 3349 ⭐
Source: https://github.com/entropy-research/Devon

Twinny: AI-Assisted Coding Extension for VS Code

A Visual Studio Code extension that enhances coding productivity through advanced LLM-powered features, including real-time code suggestions and interactive code discussions.

Key Features:
  • Real-time code completion with fill-in-the-middle functionality, providing contextual suggestions while coding
  • Interactive code discussion through a sidebar chat interface for explanations, testing, and refactoring suggestions
  • Workspace embeddings to provide context-aware assistance by understanding your project structure
  • Integration with Symmetry Network for P2P AI inference resource sharing
  • Works both online and offline, with customizable API endpoints and preserved chat conversations
  • Automated features including git commit message generation and code block extraction to new documents
  • Flexible visualization options including side-by-side diff views and full-screen chat mode
Stars: 3271 ⭐
Source: https://github.com/twinnydotdev/twinny

Mods: LLM Integration for Command Line Pipelines

A command-line tool that integrates LLMs into terminal pipelines, allowing processing and transformation of command outputs into various formats like Markdown and JSON.

Key Features:
  • Processes standard input with LLM-powered transformations, enabling users to "question" command outputs.
  • Saves conversations locally with SHA-1 identifiers and titles for easy reference and continuation.
  • Supports multiple output formats including Markdown and JSON, with customizable formatting options.
  • Provides custom roles functionality to set specific system prompts for specialized tasks.
  • Offers extensive configuration options including temperature, token limits, and HTTP proxy support.
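A few hedged pipeline examples; the formatting and continuation flags are assumptions to check against `mods --help`:
```sh
# ask questions about the output of any command
ls -la ~/Downloads | mods "which of these files look like disk images?"

# turn a diff into prose, formatted as Markdown (assumed flag: -f)
git diff | mods -f "summarize these changes as a changelog entry"

# continue a saved conversation by title or SHA-1 id (assumed flag: --continue)
mods --continue "changelog" "now rewrite it as JSON"
```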
Stars: 3251 ⭐
Source: https://github.com/charmbracelet/mods

Cody: Context-Aware AI Coding Assistant

An open-source coding assistant that leverages LLMs and codebase context to help developers understand, write, and fix code within their IDE.

Key Features:
  • Integrates semantic search to retrieve relevant files from codebases, providing contextual answers to coding questions through chat. Users can @-mention specific files or add remote repositories as context.
  • Offers single-line and multi-line code suggestions while typing, streamlining coding by suggesting function and variable names.
  • Provides inline editing capabilities for code fixes and refactoring from any location in a file.
  • Features customizable quick prompts for common actions like documenting code, explaining functionality, and generating unit tests.
  • Available for VS Code, JetBrains IDEs, and web browsers, with free access to Claude 3.5 Sonnet for individual developers.
  • Supports both personal and enterprise use, with options for dedicated instances and custom model integration in the enterprise version.
Stars: 3181 ⭐
Source: https://github.com/sourcegraph/cody

AutoDev for IntelliJ: AI-Powered Development Assistant

A comprehensive IDE plugin that enhances development workflow through LLM integration, offering code generation, testing, documentation, and custom AI agent capabilities.

Key Features:
  • Supports multiple programming languages including Java, Kotlin, JavaScript/TypeScript, Rust, Python, and Golang with context-aware coding assistance
  • Provides AutoCRUD functionality for Spring framework, automatically generating Model-Controller-Service-Repository code based on DevTi Protocol
  • Features Sketch functionality for real-time code editing, diff comparisons, terminal interface, and diagram generation with various supported formats
  • Implements smart coding assistance including bug detection, code explanation, exception tracing, and commit message generation
  • Enables customization of prompts, intention actions, LLM servers, and team-specific AI configurations
  • Integrates with development lifecycle tools for code review, refactoring, Docker configuration, and CI/CD setup
  • Example use: Create an AutoCRUD implementation using "devti://story/github/1102" to generate full stack code structure
  • Example use: Generate context-aware SQL queries through AutoSQL functionality with installed Database plugin
Stars: 3128 ⭐
Source: https://github.com/unit-mesh/auto-dev

GPTme: Terminal-Based Personal AI Assistant with Code Execution

A terminal-based assistant that serves as an unconstrained local alternative to ChatGPT's Code Interpreter, providing code execution, file manipulation, and web browsing capabilities.

Key Features:
  • Executes code directly in local environment using shell and Python tools
  • Reads, writes, and modifies files with incremental patch functionality
  • Browses web pages and processes images through vision capabilities
  • Self-corrects through feedback loop of output responses
  • Operates as a long-running agent with persistence options
  • Includes smart completion and highlighting for commands and paths
  • Provides web UI frontend and REST API for remote interaction
  • Supports desktop interaction through computer use tool
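A minimal hedged invocation; the project is typically installed with pipx, and the prompt here is illustrative:
```sh
pipx install gptme
gptme "write a script that renames every .jpeg in this directory to .jpg, then run it"
```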
Stars: 3128 ⭐
Source: https://github.com/ErikBjare/gptme

GPT Repository Loader: Git Repository to Text Converter for LLM Processing

A command-line tool that transforms Git repositories into structured text format for LLM processing, enabling tasks like code review and documentation generation.

Key Features:
  • Converts Git repository contents while maintaining file structure and content integrity
  • Generates a single text file output that can be used as input for LLM processing
  • Supports optional preamble files to provide additional context or instructions
  • Requires only Python 3 and minimal setup to run
  • Example use: python gpt_repository_loader.py /path/to/git/repository -o /path/to/output_file.txt
Stars: 2898 ⭐
Source: https://github.com/mpoon/gpt-repository-loader

O1-Engineer: Development Workflow Automation CLI Tool

A command-line tool that streamlines development workflows by offering code generation, project planning, and file management capabilities through OpenAI's API.

Key Features:
  • Generates code and manages project files through an interactive command-line interface
  • Provides project planning functionality to systematically create files and directories
  • Includes file management commands for adding, editing, and reviewing code
  • Maintains conversation history for continued context in development sessions
  • Supports both Grok and OpenAI APIs for code generation and analysis
  • Installation requires Python 3.7+ and setting up API keys in an .env file
Stars: 2869 ⭐
Source: https://github.com/Doriandarko/o1-engineer

AutoCodeRover: Autonomous Program Improvement with LLMs

An automated system that resolves GitHub issues by combining LLMs with code analysis and debugging capabilities to generate patches, achieving 30.67% efficacy on the SWE-bench lite benchmark.

Key Features:
  • Uses program structure-aware code search APIs that navigate the codebase through abstract syntax trees rather than plain text matching
  • Implements a two-stage process - context retrieval for finding relevant code, followed by patch generation based on the retrieved context
  • Leverages statistical fault localization with test suites when available to improve repair rates
  • Runs in three modes: GitHub issues, local repositories, and SWE-bench tasks
  • Integrates with multiple foundation models like GPT-4, Claude, and Llama through various providers
  • Example use: Successfully fixed Django issue #32347 by automatically identifying the bug location and generating an appropriate patch
Stars: 2806 ⭐
Source: https://github.com/AutoCodeRoverSG/auto-code-rover

Chatblade: CLI Tool for ChatGPT Interaction

A command-line interface tool that enables sophisticated interactions with ChatGPT, offering features like piped input handling, session management, and response formatting.

Key Features:
  • Accepts both piped input and command-line arguments, with support for saving common prompt preambles
  • Manages conversation sessions, allowing users to continue previous chats and maintain multiple distinct conversations
  • Provides advanced formatting options including JSON and Markdown extraction from responses
  • Includes token counting and cost estimation functionality to track API usage
  • Supports custom system prompts through configuration files stored in ~/.config/chatblade/
  • Offers multiple installation methods via pip, git, or Homebrew
  • Example use: Extract code or JSON from responses with `chatblade -e write me a python boilerplate script that starts a server > main.py`
  • Example use: Process RSS feeds with `curl https://news.ycombinator.com/rss | chatblade given the above rss show me the top 3 articles about AI`
Stars: 2586 ⭐
Source: https://github.com/npiv/chatblade

LSP-AI: AI-Powered Language Server for Code Editors

A language server implementing LLM capabilities like in-editor chat and code completions, compatible with any editor that supports the Language Server Protocol (LSP).

Key Features:
  • Integrates with popular code editors including VS Code, NeoVim, Emacs, Helix, and Sublime through LSP support
  • In-editor chat functionality allows direct communication with LLMs while working in the codebase
  • Custom actions enable code refactoring and completions with chain of thought prompting
  • Functions as an alternative to GitHub Copilot, offering code completions through various LLM backends
  • Unifies AI features into a single backend, eliminating the need for editor-specific implementations
  • Supports multiple LLM backend options including llama.cpp, Ollama, and compatible APIs from OpenAI, Anthropic, Gemini, and Mistral AI
  • Currently stable and in active use, with all core features implemented while remaining open to future developments
Stars: 2380 ⭐
Source: https://github.com/SilasMarvin/lsp-ai

GPTCommit: LLM-Powered Git Commit Message Generator

A tool that automatically generates clear and comprehensive git commit messages using LLMs, allowing developers to focus on writing code rather than crafting commit messages.

Key Features:
  • Integrates seamlessly with git as a prepare-commit-msg hook, activating automatically when running git commit
  • Configurable through a TOML file or environment variables, with options for API settings, language preferences, and output formatting
  • Supports multiple languages including English, Chinese (Simplified and Traditional), and Japanese
  • Provides proxy configuration support for accessing OpenAI services through custom endpoints
  • Includes an option to re-summarize amended commits and ignore specific files
  • Installation available through Cargo package manager or Homebrew for macOS users
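A hedged setup sketch (the `config set` key for the API key is an assumption based on the project's TOML configuration; check the README):
```sh
cargo install gptcommit          # or: brew install zurawiki/brews/gptcommit

cd /path/to/your/repo
gptcommit install                # registers the prepare-commit-msg hook for this repository
gptcommit config set openai.api_key sk-your-key   # assumption: config key name

git add .
git commit                       # the commit message is pre-filled by the hook
```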
Stars: 2348 ⭐
Source: https://github.com/zurawiki/gptcommit

Engshell: Natural Language Command Line Interface

A command-line interface that transforms English language commands into executable Python code using LLMs, supporting a wide range of operations across operating systems.

Key Features:
  • Translates natural language commands into executable Python code, handling tasks from file management to data analysis
  • Automatically debugs failed code execution and installs missing packages
  • Maintains context memory for complex multi-step operations, clearable with the 'clear' command
  • Integrates with external services and APIs for tasks like weather checking, web scraping, and image generation
  • Executes complex data processing tasks including mathematical computations, visualization, and document creation
  • Example use: Generate slides about Eddington Luminosity from Wikipedia sections
  • Example use: Record screen activity for 10 seconds and save as mp4
Stars: 2183 ⭐
Source: https://github.com/emcf/engshell

BurpGPT: Security Vulnerability Detection Using LLMs in Burp Suite

A Burp Suite extension that analyzes web traffic through OpenAI models to detect security vulnerabilities, providing automated security reports based on customizable prompts.

Key Features:
  • Implements passive scan functionality to analyze HTTP traffic using OpenAI models for comprehensive security assessment
  • Enables customizable prompts using a placeholder system to tailor analysis for specific security needs
  • Allows control over token usage through adjustable maximum prompt length settings
  • Integrates seamlessly with Burp Suite's native features for displaying and analyzing results
  • Provides troubleshooting through Burp Event Log for resolving OpenAI API communication issues
  • Example use: Identify vulnerabilities in web applications using specific crypto libraries affected by CVEs
  • Example use: Analyze security vulnerabilities in biometric authentication processes by examining request and response data
Stars: 2032 ⭐
Source: https://github.com/aress31/burpgpt

Autodoc: Codebase Documentation Generator with LLMs

A toolkit that automatically generates documentation for git repositories by analyzing codebase contents through LLMs. The documentation lives within the codebase, and a CLI lets developers query it for information about the code.

Key Features:
  • Performs depth-first traversal of repository contents to generate comprehensive documentation for files and folders
  • Provides a CLI interface to query the documentation, delivering specific answers with references back to code files
  • Supports selective reindexing of only changed files when running subsequent documentation updates
  • Calculates token usage and cost estimates before indexing, optimizing model selection based on file sizes
  • Generates documentation that stays with the codebase through version control, ensuring accessibility
  • Example use: Generate documentation for your own repository by running 'doc init' followed by 'doc index' in the project root (expanded in the sketch below)
  • Example use: Query the Autodoc repository itself by cloning it, setting up an OpenAI API key, and using 'doc q' to ask questions about the codebase
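A hedged end-to-end run over your own repository; the npm package name is an assumption, while the `doc` subcommands are those listed above:
```sh
npm install -g @context-labs/autodoc   # assumption: package name
export OPENAI_API_KEY=sk-your-key

cd /path/to/your/repo
doc init    # configure repository name, URL, and which models to use
doc index   # depth-first traversal; estimates token cost, then writes docs into the repo
doc q       # ask questions about the codebase, with answers citing source files
```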
Stars: 2028 ⭐
Source: https://github.com/context-labs/autodoc

CodeCompanion.nvim: Interactive LLM Programming Assistant for Neovim

A Neovim plugin that integrates LLM capabilities into the editor, combining features from Copilot Chat and Zed AI to enhance the coding experience.

Key Features:
  • Performs inline code transformations, creation, and refactoring with real-time suggestions
  • Implements variables, slash commands, agents/tools, and workflows to optimize LLM interactions
  • Includes a built-in prompt library for common tasks like LSP error assistance and code explanations
  • Enables multiple simultaneous chat sessions with asynchronous execution for improved performance
  • Provides customization options for creating personal prompts, variables, and slash commands
  • Integrates with major LLM providers including Anthropic, OpenAI, Gemini, and others
Stars: 1881 ⭐
Source: https://github.com/olimorris/codecompanion.nvim

gptel: LLM Chat Client for Emacs

A versatile Emacs client for interacting with LLMs that works uniformly across any buffer.

Key Features:
  • Functions anywhere in Emacs through commands like gptel-send, allowing text interaction up to cursor position or within selected regions
  • Streams responses asynchronously and supports both ad-hoc interactions and dedicated chat buffers
  • Enables context additions from external regions, buffers, or files, which are dynamically updated with each query
  • Includes experimental tool integration, allowing LLMs to execute Emacs functions with proper specifications
  • Offers region rewriting capabilities with preview, diff, and merge options
  • Saves conversations as regular Markdown/Org/Text files that can be resumed later
  • Works without external dependencies using built-in url-retrieve, though can utilize Curl when available
Stars: 1880 ⭐
Source: https://github.com/karthink/gptel

Dev-GPT: Automated Microservice Development Team

A virtual development team powered by LLMs that automatically creates custom microservices based on user descriptions, handling everything from concept to deployment.

Key Features:
  • Creates complete microservices from simple text descriptions, utilizing virtual Product Manager, Developer, and DevOps roles
  • Generates code iteratively until test scenarios pass, taking 5-15 minutes per microservice
  • Supports local Docker deployment with browser-based playground for testing
  • Offers cloud deployment integration through Jina Cloud platform
  • Provides configuration options for web content search capabilities through Google Custom Search integration
Stars: 1798 ⭐
Source: https://github.com/jina-ai/dev-gpt

Rawdog: Auto-executing Python Script Generator for CLI Interactions

A CLI assistant that responds to queries by generating and automatically executing Python scripts, offering a novel alternative to RAG through recursive self-context generation.

Key Features:
  • Generates and executes Python scripts automatically in response to natural language commands
  • Self-selects context by running scripts to print information and incorporating the output into the conversation
  • Operates in both direct (single command) and conversation modes for flexible interaction
  • Includes safety features like '--leash' mode for script approval before execution and configurable retry attempts
  • Configurable through YAML file to work with various providers and models via litellm integration
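A hedged usage sketch; the `--leash` flag is mentioned above, and the other details are assumptions to check against `rawdog --help`:
```sh
pip install rawdog

# direct mode: generate a script for the request and execute it immediately
rawdog "plot the size of every file in this directory"

# leash mode: print the generated script and ask for approval before running it
rawdog --leash "delete every *.tmp file under ./build"

# no arguments: enter conversation mode for multi-step tasks
rawdog
```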
Stars: 1793 ⭐
Source: https://github.com/AbanteAI/rawdog

SolidGPT: Code and Workspace Semantic Search Assistant

A VS Code extension that enables semantic search and interaction with codebases and Notion workspaces through LLM integration.

Key Features:
  • Codebase analysis and semantic search capabilities to help developers understand and navigate their code efficiently.
  • Integration with Notion workspaces for document search, project sprint board tracking, and ticket management.
  • Supports workspace onboarding for up to 500 files, with optimal performance for repositories under 100 files.
  • Maintains data privacy by not collecting user data while utilizing OpenAI's API for processing.
  • Available as a VS Code extension with options to build from source for customization.
Stars: 1772 ⭐
Source: https://github.com/AI-Citizen/SolidGPT

Code Review GPT: Automated Code Review Tool for CI/CD Pipelines

A code review automation tool that integrates with CI/CD pipelines to provide feedback on potential code issues and improvements using LLMs.

Key Features:
  • Integrates directly into CI/CD pipelines to perform automated code reviews
  • Detects common issues like exposed secrets, inefficient code, and readability problems
  • Supports local code review of staged files through command line interface
  • Runs as a GitHub action for seamless integration with pull requests
  • Installation requires minimal setup with npm/bun and an OpenAI API key
Stars: 1765 ⭐
Source: https://github.com/mattzcarey/code-review-gpt

CodeCursor: VS Code Extension for LLM-Powered Code Generation

A Visual Studio Code extension that brings Cursor's code generation and chat capabilities directly into VS Code, allowing developers to leverage LLM assistance without leaving their preferred editor.

Key Features:
  • Generates entire projects from scratch using LLMs within VS Code's workspace environment
  • Creates and modifies code through natural language prompts, with live-streamed results shown as text diffs
  • Integrates a chat interface for code-related discussions, allowing questions about opened documents or selected text
  • Supports custom OpenAI API keys for improved reliability and model selection
  • Maintains security by limiting data transmission to only the specific documents being processed
  • Example use: Open a document, type 'CodeCursor' in Command Palette, enter your prompt, and receive generated code with live streaming updates
  • Example use: Click the CodeCursor icon on the Activity Bar to open a chat panel where you can discuss your code interactively
Stars: 1753 ⭐
Source: https://github.com/Helixform/CodeCursor

Pythagora: Automated Test Generation Using LLMs

A tool that autonomously generates comprehensive test suites for JavaScript code using GPT-4, focusing primarily on unit tests with capabilities for integration testing.

Key Features:
  • Analyzes code structure through AST parsing to identify functions and their relationships before generating relevant tests
  • Generates unit tests for single functions, entire files, or complete folders, with special effectiveness for standalone helper functions
  • Detects edge cases and potential bugs in existing code during test generation
  • Offers a Visual Studio Code extension for seamless integration into development workflow
  • Includes test expansion functionality to improve code coverage of existing test suites
  • Example use: Generated 1,604 tests for Lodash library, discovering 11 bugs with only 4 hours of runtime
  • Example use: Created 98 tests for node-fs-extra in 30 minutes, identifying 2 bugs
Stars: 1749 ⭐
Source: https://github.com/Pythagora-io/pythagora

ReadmeAI: Automated README Generation with LLMs

A tool that automatically generates comprehensive README files for code repositories by analyzing codebases using repository processing and LLMs.

Key Features:
  • Generates detailed, structured README files from local or remote repositories with a single command
  • Offers extensive customization options including different templates, styles, badges, header types and navigation formats
  • Processes repositories from GitHub, GitLab, Bitbucket and local filesystems
  • Includes offline mode capability without requiring LLM API services
  • Automatically extracts and documents project structure, dependencies, installation steps and usage guides
  • Available through multiple installation methods including pip, pipx, Docker and from source
  • For example, you can customize the header style with `readmeai --repository your-repo --header-style modern --badge-style flat-square`
  • For example, you can generate a README in offline mode with `readmeai --api offline -o readme.md -r your-repo`
Stars: 1744 ⭐
Source: https://github.com/eli64s/readme-ai

Sample Chat App with Azure OpenAI: Customizable Web Chat Interface

A web application that enables chat functionality using Azure OpenAI, with support for multiple data sources and authentication options.

Key Features:
  • Supports multiple data sources including Azure AI Search, CosmosDB, Elasticsearch, Pinecone, MongoDB, and Azure SQL Server for knowledge integration
  • Customizable chat interface with configurable UI elements like logos, titles, and descriptions
  • Microsoft Entra ID integration for secure authentication and access control
  • Optional chat history tracking using CosmosDB with feedback capabilities
  • Vector search capabilities with support for various embedding models and hybrid search options
  • Deployment options through Azure Developer CLI, one-click Azure deployment, or local deployment
  • Configurable parameters for controlling model behavior including temperature, tokens, and system messages
  • Scalability options through thread and worker configuration in gunicorn
Stars: 1728 ⭐
Source: https://github.com/microsoft/sample-app-aoai-chatGPT

AI-Renamer: Intelligent File Renaming with LLM Analysis

A Node.js CLI tool that automatically renames files based on their content analysis using LLMs through Ollama, LM Studio, or OpenAI platforms.

Key Features:
  • Analyzes and renames videos, images, and other files based on their content through LLM analysis
  • Works with multiple LLM providers including Ollama (default), LM Studio, and OpenAI
  • Supports various case styling options (camelCase, kebabCase, etc.) and customizable character limits for filenames
  • Provides configuration options for frame extraction from videos, output language, and custom prompts
  • Saves configuration settings locally for repeated use
  • Requires Ollama or LM Studio installation, plus ffmpeg for video processing
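A hedged example with npx; the provider, model, and case flags are assumptions based on the README's configuration options:
```sh
# rename files in a folder using the default local Ollama provider
npx ai-renamer /path/to/folder

# assumption: switch provider/model and constrain the filename style
npx ai-renamer /path/to/folder --provider=openai --model=gpt-4o --api-key=sk-your-key
npx ai-renamer /path/to/folder --case=kebabCase --chars=40
```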
Stars: 1660 ⭐
Source: https://github.com/ozgrozer/ai-renamer

x-crawl: LLM-Assisted Web Crawling Library for Node.js

A flexible Node.js web crawling library that combines traditional crawling capabilities with LLM assistance to make web scraping more intelligent and adaptable to website changes.

Key Features:
  • Integrates with OpenAI's models to intelligently parse web content and adapt to site structure changes
  • Supports both dynamic and static page crawling, API data collection, and file downloads
  • Provides automated page operations, device fingerprinting, and proxy rotation capabilities
  • Offers flexible crawling modes including asynchronous/synchronous operations and configurable intervals
  • Example use: Can extract image URLs from vacation rental listings by having the LLM understand and parse the page semantics rather than relying on fixed selectors
Stars: 1620 ⭐
Source: https://github.com/coder-hxl/x-crawl

CodeRabbit: AI-Based GitHub PR Reviewer and Summarizer

A GitHub Action that performs automated code reviews and pull request summarization using OpenAI models. Currently in maintenance mode, with a recommended Pro version available.

Key Features:
  • Provides line-by-line code change suggestions and generates comprehensive PR summaries with release notes
  • Performs continuous, incremental reviews on each commit rather than one-time reviews, reducing costs and noise
  • Uses separate models for summarization and detailed review tasks, optimizing for both performance and cost-effectiveness
  • Enables interactive conversations with the bot for specific code sections or entire files
  • Implements smart review skipping for simple changes while allowing customization through configurable prompts
  • Installation requires adding a YAML workflow file and configuring GitHub Token and OpenAI API credentials
  • Example use: Can be configured as a specialized blog reviewer by customizing the system message to focus on DevRel and content quality aspects
  • Example use: Supports reviewing pull requests from forks through pull_request_target event configuration
Stars: 1616 ⭐
Source: https://github.com/coderabbitai/ai-pr-reviewer

TokenCost: Token Counting and Price Estimation for LLM Applications

A Python library that calculates USD costs for LLM API usage by estimating token counts and prices for prompts and completions.

Key Features:
  • Tracks and maintains updated pricing for major LLM providers and models
  • Performs accurate token counting for prompts before sending API requests using tiktoken
  • Calculates costs for both prompt tokens and completion tokens
  • Integrates with frameworks like LlamaIndex through callback handlers
  • Installation is simple via pip install tokencost
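
For a quick sense of the API, here is a minimal sketch that estimates the cost of a single request, assuming the `calculate_prompt_cost` and `calculate_completion_cost` helpers described in the project README:

```python
from tokencost import calculate_prompt_cost, calculate_completion_cost

model = "gpt-3.5-turbo"
prompt = [{"role": "user", "content": "Write a haiku about type checkers."}]
completion = "Silent guard of types / errors caught before they run / green checks at dusk"

# Both helpers return USD amounts derived from tiktoken token counts and the
# library's maintained price table.
prompt_cost = calculate_prompt_cost(prompt, model)
completion_cost = calculate_completion_cost(completion, model)
print(f"Estimated request cost: ${prompt_cost + completion_cost}")
```
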
Stars: 1536 ⭐
Source: https://github.com/AgentOps-AI/tokencost

Coffee: AI-Powered React UI Development Tool

A development tool that accelerates UI creation and iteration in React codebases by automatically generating and modifying components based on natural language prompts.

Key Features:
  • Monitors changes in JS/JSX/TS/TSX files and generates React components based on natural language descriptions
  • Seamlessly integrates with existing React projects including Next.js and Remix frameworks
  • Uses Docker for isolated code execution and runs independently alongside your development environment
  • Allows component editing through simple prop additions, with the 'coffee' prop for modifications and 'pour' prop for finalizing components
  • Generates production-ready code while maintaining clean and maintainable standards
Stars: 1478 ⭐
Source: https://github.com/Coframe/coffee

JSON Repair: Python Module for Fixing Invalid JSON

A lightweight Python package designed to repair invalid JSON strings, particularly useful for handling imperfect JSON outputs from LLMs.

Key Features:
  • Repairs common syntax errors like missing quotes, misplaced commas, and incomplete key-value pairs
  • Fixes malformed arrays and objects by adding necessary elements and cleaning up non-JSON characters
  • Auto-completes missing JSON values with reasonable defaults
  • Supports file-based operations through drop-in replacements for standard JSON functions
  • Handles non-Latin characters with proper encoding options
  • Example use: repair_json("{'test_chinese_ascii':'统一码'}", ensure_ascii=False) returns properly formatted JSON with preserved Chinese characters
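
A minimal sketch of the typical flow, using the `repair_json` function shown above together with the standard-library `json` module (the malformed input is illustrative):

```python
import json
from json_repair import repair_json

# Malformed output typical of an LLM: single quotes, trailing comma, missing brace.
broken = "{'name': 'Alice', 'age': 30,"

fixed = repair_json(broken)   # returns a syntactically valid JSON string
data = json.loads(fixed)      # now parses with the standard library
print(data["name"], data["age"])
```
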
Stars: 1400 ⭐
Source: https://github.com/mangiucugna/json_repair

Sandbox: Cloud-Based Collaborative Code Editor with LLM Autocompletion

An open-source development environment that combines real-time collaboration features with code autocompletion powered by LLMs, running entirely in the cloud.

Key Features:
  • Cloud-based code editing with Monaco editor integration, enabling full-featured development directly in the browser
  • Real-time collaboration capabilities through Liveblocks, allowing multiple developers to work simultaneously
  • Secure Linux sandbox environments through E2B for terminal access and live preview functionality
  • Microservices architecture using Cloudflare Workers for database (D1), storage (R2), and LLM functionality
  • User authentication handled through Clerk with a Socket.io server managing real-time connections
  • Project file management system with automatic storage and retrieval capabilities
Stars: 1358 ⭐
Source: https://github.com/ishaan1013/sandbox

Local AI Stack: Self-hosted Document Q&A System

An open-source solution for running document question-answering applications completely locally, eliminating the need for cloud services or external API costs.

Key Features:
  • Runs entirely locally using Ollama for LLM inference and Supabase pgvector for vector storage
  • Uses transformers.js and all-MiniLM-L6-v2 for generating embeddings locally without external API calls
  • Built on Next.js with Langchain.js for LLM orchestration and document processing
  • Simple setup process involving local Supabase instance and Ollama installation
  • Can be optionally extended to use cloud services like OpenAI, Pinecone, and Replicate
Stars: 1352 ⭐
Source: https://github.com/ykhli/local-ai-stack

Agentless: LLM-Based Software Development Problem Solver

A three-phase system that automatically solves software development issues through localization, repair, and patch validation, without using traditional agent-based approaches.

Key Features:
  • Uses a hierarchical process to locate faults at different levels - from files to specific edit locations
  • Generates multiple candidate patches in diff format and validates them using regression tests
  • Creates additional reproduction tests to verify original error fixes
  • Achieves 27.3% solve rate on SWE-bench lite with an average cost of $0.34 per issue
  • Operates through a straightforward workflow that doesn't require complex agent architectures
Stars: 1346 ⭐
Source: https://github.com/OpenAutoCoder/Agentless

TLM: Local CLI Copilot using CodeLLaMa

A command-line interface tool that provides shell command suggestions and explanations using CodeLLaMa, running completely locally without requiring internet connectivity or API keys.

Key Features:
  • Operates fully offline on your local machine, eliminating the need for internet connection or external API subscriptions
  • Provides command suggestions and explanations for shell operations across macOS, Linux, and Windows
  • Automatically detects and works with different shells including PowerShell, Bash, and Zsh
  • Requires Ollama for model management, which can be run natively or through Docker
  • Installation options include a platform-specific installation script or Go install command for users with Go 1.21+
Stars: 1322 ⭐
Source: https://github.com/yusufcanb/tlm

Gorilla CLI: Natural Language Command Line Interface

A tool that converts natural language requests into executable command-line commands, supporting over 1500 APIs including major cloud platforms, development tools, and system utilities.

Key Features:
  • Transforms plain English descriptions into precise command-line commands, eliminating the need to memorize complex syntax
  • Presents multiple command options for each request, allowing users to choose the most appropriate one
  • Maintains user privacy by requiring explicit approval for command execution and never collecting command output data
  • Includes command history functionality for easy access to previously executed commands
  • Supports major platforms and tools including Kubernetes, AWS, GCP, Azure, GitHub, and Conda
Stars: 1317 ⭐
Source: https://github.com/gorilla-llm/gorilla-cli

gen.nvim: LLM Text Generation in Neovim

A Neovim plugin for generating text using LLMs with customizable prompts and direct integration with Ollama models.

Key Features:
  • Integrates with Ollama to provide text generation capabilities within Neovim, supporting models like llama2 and mistral
  • Customizable display modes including floating windows and split views for generated content
  • Maintains conversation context for follow-up questions and interactions
  • Configurable prompts system allowing users to define and modify text generation templates
  • Provides visual selection integration and buffer-aware commands with support for different file types
  • Allows text replacement and extraction using regular expressions for generated content
  • Example use: ":Gen Enhance_Grammar_Spelling" command to improve selected text
  • Example use: Create custom prompts like "Fix_Code" that automatically formats and fixes code in the current buffer
Stars: 1317 ⭐
Source: https://github.com/David-Kunz/gen.nvim

CodeGPT: Git Commit Message and Code Review Generator using LLMs

A Go-based CLI tool that generates git commit messages and code review summaries using OpenAI and other LLM providers, with git hook integration.

Key Features:
  • Generates commit messages and code review summaries through multiple service providers including Azure OpenAI, Gemini, Anthropic, Ollama, Groq, and OpenRouter.
  • Integrates with Git through prepare-commit-msg hook for automatic commit message generation during your normal git workflow.
  • Supports conventional commits specification and customizable commit message templates with variable injection.
  • Provides translation capabilities for commit messages and reviews into Traditional Chinese, Simplified Chinese, or Japanese.
  • Offers flexible configuration options including proxy support, context line customization, and file exclusion patterns.
  • Example use: You can generate a commit message with preview: `git add . && codegpt commit --preview`
  • Example use: Generate a code review in Traditional Chinese: `codegpt review --lang zh-tw`
Stars: 1315 ⭐
Source: https://github.com/appleboy/CodeGPT

Lingo.dev: AI-Powered Software Localization Automation

A localization automation platform that integrates with CI/CD pipelines to provide instant translations across 60+ languages, specifically designed for web and mobile applications.

Key Features:
  • Integrates into development workflows through CLI tools and GitHub Actions, requiring minimal setup time
  • Supports multiple file formats including JSON, YAML, CSV, and Markdown, with automatic syncing of updated content
  • Leverages context-aware translation engine to produce native-quality translations that match product context
  • Operates directly within CI/CD pipelines, automatically maintaining translations with each code push
  • Processes translations as soon as content changes, with the project claiming up to 100x faster localization than traditional methods
Stars: 1311 ⭐
Source: https://github.com/lingodotdev/lingo.dev

LLM VSCode: Code Generation and Completion Extension

A VSCode extension providing LLM-powered code completion and generation capabilities through multiple backend options like Hugging Face Inference API, Ollama, and OpenAI-compatible services.

Key Features:
  • Offers ghost-text code completion similar to GitHub Copilot, with support for multiple backend services
  • Automatically sizes prompts to fit within context windows using tokenizers, ensuring optimal model performance
  • Includes code attribution feature to check if generated code matches content from The Stack dataset
  • Configurable suggestion behavior with customizable document filters and keybindings
  • Integration with multiple backend services including Hugging Face Inference API, Ollama, and OpenAI-compatible endpoints
  • Supports various models like Code Llama, Phind, and WizardCoder, easily configurable through VSCode settings
Stars: 1254 ⭐
Source: https://github.com/huggingface/llm-vscode

oterm: Terminal Client for Ollama

A text-based terminal interface for interacting with Ollama models, featuring persistent chat sessions and tools integration.

Key Features:
  • Simple terminal-based interface that runs directly by typing 'oterm' without additional server or frontend setup.
  • Stores multiple chat sessions with customizable system prompts and parameters in SQLite database.
  • Integrates with various tools to provide external information to models, including web content fetching, weather data, and shell commands.
  • Supports Model Context Protocol (MCP) for bridging MCP servers with Ollama functionality.
  • Offers multiple themes and customizable keyboard shortcuts for enhanced user experience.
  • Enables image sharing in conversations and exports chats as markdown files.
Stars: 1216 ⭐
Source: https://github.com/ggozad/oterm

Patchwork: Automated Development Tasks Using Self-hosted LLM Agent

Patchwork is a CLI tool that automates software development tasks like PR reviews, bug fixing, and security patching using a self-hosted agent and your choice of LLMs.

Key Features:
  • Built on reusable atomic actions called Steps that perform operations like creating PRs, committing changes, or calling LLMs
  • Uses customizable prompt templates optimized for specific development tasks like code generation, issue analysis, and vulnerability fixes
  • Supports multiple deployment options - can run locally in CLI/IDE or as part of CI/CD pipelines
  • Compatible with any OpenAI-compatible endpoint, enabling use with various providers like Groq, Together AI, or local models via llama.cpp
  • Includes pre-built Patchflows for common tasks like generating docstrings, reviewing PRs, fixing vulnerabilities, and updating dependencies
  • Simple installation via pip with modular dependency groups for different functionality needs
Stars: 1215 ⭐
Source: https://github.com/patched-codes/patchwork

CodeGPT: Open-source AI Copilot for JetBrains IDEs

A comprehensive code assistant that integrates with JetBrains IDEs, offering both cloud-based and self-hosted options for enhanced development workflows.

Key Features:
  • Stream AI-suggested code changes directly into the editor with real-time preview in diff view
  • Chat functionality supports images, project files, web docs, git history references, and web search integration
  • Multi-line code editing based on recent activity and context, including autocomplete for single lines and whole functions
  • Natural language code editing allows describing desired changes to highlighted code sections
  • Generate context-aware naming suggestions for methods and variables, along with descriptive commit messages
  • Runs locally through Gradle with support for different platforms including Linux, macOS, and Windows ARM64
  • Prioritizes privacy by not collecting sensitive information, with optional anonymous usage data collection
Stars: 1214 ⭐
Source: https://github.com/carlrobertoh/CodeGPT

Rapidpages: LLM-Powered UI Component Generation IDE

A prompt-first IDE that converts natural language descriptions into React+Tailwind components, streamlining the UI development process.

Key Features:
  • Converts text descriptions into functional React components with Tailwind CSS styling
  • Provides immediate visual feedback through an integrated IDE environment
  • Available both as a local installation and cloud service with free credits
  • Integrates with GitHub authentication for secure access
  • Current single-shot component generation will evolve into a multi-step process for more complex UI elements
Stars: 1198 ⭐
Source: https://github.com/rapidpages/rapidpages

Kaguya: ChatGPT Plugin for Local File Management and Script Execution

A plugin that enables ChatGPT to interact with local files and execute Python, JavaScript, and bash scripts within a controlled environment.

Key Features:
  • Manages local files through various operations like reading, editing, creating, and deleting within a designated FILES folder
  • Executes Python, JavaScript, and bash scripts through ChatGPT interface
  • Provides search and replace functionality for efficient file editing, particularly useful for larger files
  • Implements Docker containerization for secure and isolated execution environment
  • Offers comprehensive API endpoints for file system operations and command execution
  • For files above 100 lines, uses search-and-replace instead of whole file updates to maintain performance
  • Runs on localhost port 3000 with access restricted to files within its directory for security
  • Example use: Navigate to subdirectories using cd command before executing operations in that location
Stars: 1196 ⭐
Source: https://github.com/ykdojo/kaguya

Sage: Interactive Codebase Chat System

An open-source tool that helps developers understand and integrate codebases through natural language interaction, similar to GitHub Copilot but focused on codebase comprehension.

Key Features:
  • Simple setup process through a quickstart guide for immediate deployment
  • Flexible deployment options allowing both local execution using Ollama and Marqo for privacy, or cloud-based operation with third-party LLM providers
  • Advanced retrieval capabilities combining lightweight strategies and traditional RAG (Retrieval Augmented Generation), with customizable parameters for optimal codebase interaction
  • Pre-indexed open-source repositories available through hosted platform, with ability to index new repositories via GitHub URL
  • Modular architecture supporting custom embeddings, vector stores, and LLM providers through abstract class implementation
  • Comprehensive benchmarking documentation comparing various embedding and retrieval strategies
Stars: 1161 ⭐
Source: https://github.com/Storia-AI/sage

w2vgrep: Semantic Text Search Using Word Embeddings

A command-line tool that performs semantic searches in text using word embeddings to find conceptually similar matches, going beyond traditional string matching.

Key Features:
  • Uses word embeddings to find semantically similar matches, with configurable similarity thresholds
  • Provides context display options with color-coded output and line numbers
  • Reads from files or standard input in a grep-like interface
  • Supports multiple languages through different word embedding models
  • Offers model size reduction capabilities to improve performance while maintaining accuracy
  • Configurable through JSON files and command-line arguments for flexibility
Stars: 1133 ⭐
Source: https://github.com/arunsupe/semantic-grep

Dropbase: AI-Powered Python Web App Builder

A local-first platform that combines drag-and-drop functionality with code generation to build custom web applications like admin panels, dashboards, and internal tools.

Key Features:
  • Generates and allows verification/editing of Python code while providing pre-built UI components, eliminating the need for frontend development
  • Supports integration with any PyPI package and custom business logic implementation
  • Runs locally and self-hosted, ensuring credential security and easy integration with existing codebases
  • Creates portable applications that can be shared between users through simple folder zipping
  • Connects to various data sources and can trigger actions across internal/external services
Stars: 1116 ⭐
Source: https://github.com/DropbaseHQ/dropbase

SudoLang: Natural Language Programming for LLMs

SudoLang is a specialized programming language that enables collaborative programming with LLMs, featuring constraint-based natural language syntax and interface-driven design.

Key Features:
  • Natural language constraints allow defining complex behaviors by specifying rules and desired outcomes rather than explicit instructions
  • Interface-based architecture supports modular, reusable, and composable program structures with automatic type inference
  • Built-in semantic pattern matching enables intelligent state inference and automated responses to specific conditions
  • Command-based interface for streamlined program interactions and state management
  • Requires 20-30% fewer tokens than natural language prompts while maintaining readability through structured pseudocode
  • Supports visualization through Mermaid diagrams for architecture and flow control
  • Installation involves cloning the repository and installing the VS Code extension for syntax highlighting
  • Extensive documentation and learning resources available through articles, videos, and interactive tutorials
Stars: 1116 ⭐
Source: https://github.com/paralleldrive/sudolang-llm-support

S2A: Web Search Integration for LLM APIs

A tool that enables web search capabilities for LLM APIs including OpenAI, Gemini, and Moonshot, providing search, news, and webpage summarization without requiring plugin installation or API key changes.

Key Features:
  • Seamlessly integrates web search capabilities with existing LLM APIs through simple endpoint URL replacement
  • Automatically determines whether web search is needed based on user input, preserving standard LLM functionality like image generation and voice processing
  • Supports multiple deployment options including Zeabur, local deployment, Cloudflare Worker, and Vercel
  • Configurable search services through environment variables including Google, Bing, DuckDuckGo, and custom search APIs
  • Allows control over search result quantity and depth of webpage crawling through customizable settings
  • Provides both streaming and non-streaming output options for most supported models
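
Because integration works by swapping the endpoint URL, a hedged sketch with the official OpenAI Python client might look like the following; the deployment URL is a placeholder for your own search2ai instance:

```python
from openai import OpenAI

# Point the standard OpenAI client at a (hypothetical) search2ai deployment; the API
# key and model stay exactly as they would for a direct OpenAI call.
client = OpenAI(
    api_key="sk-...",  # your normal OpenAI key, unchanged
    base_url="https://your-search2ai-deployment.example.com/v1",  # placeholder URL
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What were this week's biggest AI releases?"}],
)
print(resp.choices[0].message.content)
```
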
Stars: 1108 ⭐
Source: https://github.com/fatwang2/search2ai

CodeFuse-ChatBot: AI Assistant for Software Development Lifecycle

An open-source AI assistant developed by Ant Group's CodeFuse team that streamlines the software development lifecycle through Multi-Agent scheduling, integrated tools, code repositories, knowledge bases, and sandbox environments.

Key Features:
  • Multi-Agent scheduling core enables configurable interactive agents with comprehensive scheduling capabilities
  • Deep code repository analysis with project-level code understanding, writing and generation
  • Enhanced document analysis through integrated knowledge bases and knowledge graphs with retrieval and reasoning capabilities
  • Custom DevOps knowledge base with one-click construction of domain-specific knowledge repositories
  • Sandbox environment for secure code compilation and execution in isolated conditions
  • Cross-platform integration with DevOps tools and platforms through API management
  • Multiple data source processing including web crawling, document loading, data cleaning and text segmentation
  • Offline private deployment support using open-source LLMs and embedding models
Stars: 1102 ⭐
Source: https://github.com/codefuse-ai/codefuse-chatbot

OSS-Fuzz-Gen: LLM-Powered Fuzz Target Generation Framework

A framework that generates and evaluates fuzz targets for C/C++/Java/Python projects through the OSS-Fuzz platform. It has discovered 30 new bugs/vulnerabilities and achieves up to 29% increased line coverage compared to human-written targets.

Key Features:
  • Generates fuzz targets for multiple programming languages, evaluated on compilability, runtime crashes, runtime coverage, and line coverage gains over existing human-written targets
  • Successfully created valid fuzz targets for 160 C/C++ projects, improving testing coverage beyond existing human-written targets
  • Evaluates generated targets against production data from OSS-Fuzz platform to ensure quality and effectiveness
  • Discovered significant vulnerabilities including OOB reads/writes, stack buffer underflows, and use-after-free bugs across multiple projects
  • Achieves substantial coverage improvements, with some projects showing over 90% total coverage gain
Stars: 1064 ⭐
Source: https://github.com/google/oss-fuzz-gen

Shell-AI: Natural Language to Shell Command Converter

A CLI utility that converts natural language descriptions into executable shell commands, using LLMs through LangChain integration.

Key Features:
  • Takes natural language input and generates up to 3 relevant shell command suggestions
  • Functions across Linux, macOS, and Windows platforms with Python 3.10+
  • Configurable through environment variables or config file for API keys, model selection, and output preferences
  • Supports multiple providers including OpenAI, Azure OpenAI, and Groq integration
  • Includes temperature control to balance between deterministic and creative command suggestions
  • Optional context mode for maintaining command history awareness
Stars: 1063 ⭐
Source: https://github.com/ricklamers/shell-ai

Kubectl-ai: Kubernetes Manifest Generation with LLMs

A kubectl plugin that generates and applies Kubernetes manifests using LLMs, helping developers avoid manual collection of manifests during development and testing.

Key Features:
  • Generates complete Kubernetes manifests from natural language descriptions
  • Supports piping input/output for integration with external editors and file operations
  • Uses Kubernetes OpenAPI Spec to ensure accurate manifest generation, including Custom Resource Definitions
  • Includes interactive confirmation prompts before applying changes, with optional bypass flag
  • Installs easily via Homebrew, Krew, or direct binary download
  • Can be used as a kubectl plugin or standalone binary
Stars: 1051 ⭐
Source: https://github.com/sozercan/kubectl-ai

AttackGen: Cybersecurity Incident Response Testing with MITRE ATT&CK

A tool that generates customized incident response scenarios by combining LLMs with the MITRE ATT&CK framework, providing tailored testing scenarios based on threat actor groups and organizational details.

Key Features:
  • Generates unique scenarios using both Enterprise and ICS MITRE ATT&CK matrices, displaying detailed techniques used by selected threat actors
  • Creates custom scenarios through predefined templates for common cyber incidents like phishing or ransomware attacks
  • Includes an interactive chat assistant for refining and updating generated scenarios
  • Downloads scenarios in Markdown format while capturing user feedback on scenario quality
  • Integrates with LangSmith for debugging, testing, and monitoring model performance
  • Available as a Docker container for easy deployment and uses .env file for secure credential management
  • Example use: Generate a scenario based on a specific threat actor group by selecting the organization's industry, size, and desired threat group, then receiving a detailed incident response testing scenario
  • Example use: Create a custom scenario by selecting specific ATT&CK techniques relevant to your testing needs, focusing on particular aspects of the cyber kill chain
Stars: 1039 ⭐
Source: https://github.com/mrwadams/attackgen

WPeChatGPT: IDA Plugin for Binary Analysis with LLMs

An IDA Pro plugin that leverages LLMs to assist in binary file analysis through function analysis, variable renaming, vulnerability detection, and automated binary analysis capabilities.

Key Features:
  • Analyzes function purpose, usage environment, and intended functionality within binary files
  • Performs automated variable renaming and attempts to restore functions using Python3 code generation
  • Identifies potential vulnerabilities in functions and generates corresponding exploit code
  • Includes Auto-WPeGPT feature for automated binary file analysis, generating detailed reports with function call trees and suspicious strings
  • Integrates with IDA Pro through right-click context menu, menu bar options, and customizable keyboard shortcuts
  • Supports both forward and reverse proxy configurations for API connectivity
Stars: 1021 ⭐
Source: https://github.com/WPeace-HcH/WPeChatGPT

SeaGOAT: Semantic Code Search Engine

SeaGOAT is a local search tool that uses vector embeddings to enable semantic code search within your codebase.

Key Features:
  • Uses vector embeddings through ChromaDB for semantic code search while running entirely locally without third-party APIs
  • Combines vector-based search with ripgrep for both semantic and regex/keyword-based matches
  • Processes multiple programming languages including Python, JavaScript, TypeScript, C++, Java, and others
  • Runs as a server to provide fast response times while allowing concurrent file processing and querying
  • Integrates with git by respecting .gitignore patterns and allows additional file exclusions through configuration
  • Example use: gt "Where are the numbers rounded" to search through code semantically
  • Example use: gt "function calc_* that deals with taxes" to combine semantic search with regex patterns
Stars: 1016 ⭐
Source: https://github.com/kantord/SeaGOAT

TerminalGPT: Chat Interface for LLMs in Terminal

A command-line tool that brings LLM chat capabilities directly to your terminal.

Key Features:
  • Integrates with multiple LLM providers including OpenAI, Anthropic, Groq, and Gemini through their APIs
  • Simple chat initiation through terminal command 'tgpt chat'
  • Manages conversations with ability to delete chat history
  • Can be installed globally or run directly through npx without installation
Stars: 1013 ⭐
Source: https://github.com/jucasoliveira/terminalGPT

Gp.nvim: LLM Integration for Neovim

A Neovim plugin that provides ChatGPT-like sessions, code operations, speech-to-text, and image generation capabilities directly within the editor.

Key Features:
  • Streams responses in real-time with ability to cancel mid-generation and single-step undo functionality
  • Supports multiple LLM providers including OpenAI, Ollama, GitHub Copilot, Perplexity.ai, and Anthropic's Claude
  • Provides chat sessions as markdown buffers with autosave and quick access through a popup window
  • Offers text/code operations like rewriting, appending, and prepending with support for visual selections and ranges
  • Includes speech-to-text capabilities for dictating comments, notes, and instructions using SoX for audio recording
  • Enables repository-specific custom instructions through .gp.md files for consistent code generation patterns
  • Allows extending functionality through hook functions that are automatically registered as commands
  • Can generate images directly within Neovim based on text prompts
Stars: 1003 ⭐
Source: https://github.com/Robitx/gp.nvim

FigmaChain: Converting Figma Designs to HTML/CSS Code

A Python-based solution that transforms Figma designs into HTML/CSS code using LLMs, featuring both command-line and chatbot interfaces.

Key Features:
  • Generates HTML/CSS code by processing Figma design files through the Figma RESTful API
  • Provides both a command-line interface and a Streamlit-based chatbot for interactive code generation
  • Requires minimal setup with just Python, OpenAI API key, and Figma credentials (access token, node IDs, file key)
  • Automatically saves generated code to output files and opens them in the default web browser
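
A conceptual sketch of the workflow (not the project's exact code): fetch a design node through Figma's REST nodes endpoint, then ask an LLM to emit HTML/CSS for it. The file key, node ID, environment variable name, and truncation limit are placeholders:

```python
import json
import os

import requests
from openai import OpenAI

FIGMA_TOKEN = os.environ["FIGMA_ACCESS_TOKEN"]  # personal access token (placeholder name)
FILE_KEY, NODE_ID = "your-file-key", "0:1"      # illustrative identifiers

# Pull the node's JSON description from the Figma REST API.
node = requests.get(
    f"https://api.figma.com/v1/files/{FILE_KEY}/nodes",
    params={"ids": NODE_ID},
    headers={"X-Figma-Token": FIGMA_TOKEN},
    timeout=30,
).json()

# Ask the model to translate the design description into markup.
client = OpenAI()  # assumes OPENAI_API_KEY is set
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Generate HTML/CSS for this Figma node JSON:\n" + json.dumps(node)[:8000],
    }],
)
print(resp.choices[0].message.content)
```
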
Stars: 964 ⭐
Source: https://github.com/cirediatpl/FigmaChain

Ditto: Self-Building Flask Application Generator from Natural Language

A tool that converts natural language descriptions into functional Flask web applications using a no-code interface and LLM processing.

Key Features:
  • Uses natural language input to generate complete Flask applications including routes, templates, and static files
  • Creates a modular code structure with organized directories for templates, static files, and routes
  • Automates the entire application building process through an LLM loop without requiring manual coding
  • Runs a web interface on localhost:8080 where users can input their application descriptions and monitor the generation progress
  • Requires minimal setup with just Python 3.7+ and an OpenAI API key
Stars: 939 ⭐
Source: https://github.com/yoheinakajima/ditto

AI Functions: LLM-Powered Function Generation and Execution

A Python implementation that enables creating and executing functions using GPT-4 or other LLM models. Users can define function signatures and descriptions, and the system generates working implementations.

Key Features:
  • Creates functional code from natural language descriptions and function signatures, returning structured results in Python data types
  • Thorough testing suite with documented success rates across different models and use cases
  • Provides simple API requiring only three main parameters: function signature, arguments, and description
  • Transparent about limitations, particularly noting reduced accuracy for mathematical calculations and precision-dependent tasks
  • Example use: generate fake people data by specifying the desired structure: `def fake_people(n: int) -> list[dict]` returns a list of dictionaries with names and ages
  • Example use: perform basic arithmetic with the function signature `def add(a: int, b: int) -> int:`, which returns the sum of two integers
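
The following is a conceptual sketch of the idea (not the repository's exact API): the signature, arguments, and description are packed into a prompt, and the model's reply is treated as the return value. The `ai_function` helper here is illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ai_function(signature: str, args: list[str], description: str,
                model: str = "gpt-4o") -> str:
    """Ask the model to act as the body of `signature` and return only the result."""
    prompt = (
        f"You are the following Python function:\n{signature}\n"
        f'"""{description}"""\n'
        f"Return only what the function would return for the arguments: {', '.join(args)}"
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


# Expected to print something like "7".
print(ai_function("def add(a: int, b: int) -> int:", ["3", "4"], "Adds two integers."))
```
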
Stars: 939 ⭐
Source: https://github.com/Torantulino/AI-Functions

ChatGPT Shell: Interactive LLM Shell Environment for Emacs

A comint-based Emacs shell that provides an interactive environment for communicating with multiple LLM providers through a familiar shell interface.

Key Features:
  • Integrates multiple LLM providers including OpenAI, Anthropic, Google Gemini, and Ollama through a unified shell interface
  • Features a compose buffer experience with navigation, reply functionality, and source block execution capabilities
  • Supports vision-based queries for image analysis and Japanese vocabulary extraction
  • Enables inline code modifications with diff-based confirmation before accepting changes
  • Includes code execution capabilities through org babel integration for various programming languages
  • Offers session management features like saving/restoring transcripts and buffer clearing
  • Example use: Create a compose buffer by selecting text and using M-x chatgpt-shell-prompt-compose to craft detailed queries
  • Example use: Execute source code blocks directly in the shell using C-c C-c, leveraging org babel functionality
Stars: 930 ⭐
Source: https://github.com/xenodium/chatgpt-shell

Kubernetes ChatGPT Bot: Automated Kubernetes Alert Resolution Assistant

A webhook-based bot that leverages LLMs to automatically provide solutions for Kubernetes Prometheus alerts through Slack.

Key Features:
  • Integrates with Prometheus to receive alerts via webhook and processes them using LLMs
  • Delivers alert responses and troubleshooting guidance directly in Slack
  • Built on Robusta.dev open-source platform for Kubernetes alert management
  • Simple setup through Helm charts and customizable playbook configurations
Stars: 929 ⭐
Source: https://github.com/robusta-dev/kubernetes-chatgpt-bot

PentestGPT: Automated Penetration Testing Assistant

An automated penetration testing tool that helps security teams conduct comprehensive tests of web applications, networks, and cloud environments without requiring expert skills.

Key Features:
  • Integrates scanning, exploiting, and analysis capabilities for web applications, networks, and cloud environments
  • Uses Supabase for secure data storage and multi-modal functionality, replacing previous browser-based storage
  • Supports both local and cloud-hosted deployment options through Docker and Vercel
  • Implements a comprehensive authentication system with email verification
  • Provides database migration capabilities for easy updates and maintenance
Stars: 921 ⭐
Source: https://github.com/hackerai-tech/PentestGPT

Privy: Local Open-Source GitHub Copilot Alternative

A VS Code extension that provides code completion and chat capabilities using locally-run LLMs, offering a privacy-focused alternative to GitHub Copilot.

Key Features:
  • Runs completely locally, ensuring privacy and data security while providing code completion and assistance
  • Offers real-time code completion with customizable debounce settings and model selection
  • Integrates chat functionality for code explanations, bug finding, and unit test generation
  • Supports threaded conversations to maintain context and improve response accuracy
  • Works with multiple LLM platforms including Ollama, llamafile, and llama.cpp
  • Easy installation through VS Code Marketplace or Open VSX Registry
  • Configurable through VS Code settings, allowing users to choose models and adjust functionality
Stars: 916 ⭐
Source: https://github.com/srikanth235/privy

Promptr: Code Modification CLI Tool Using LLMs

A command-line interface tool that allows developers to modify their codebase using natural language instructions through LLMs, applying changes directly to specified files.

Key Features:
  • Takes natural language instructions and translates them into code modifications through LLM processing
  • Integrates with Git workflow for easy version control and change inspection
  • Supports templating via liquidjs for reusable code patterns and project standards
  • Provides both interactive and non-interactive modes with customizable templates
  • Operates through simple CLI commands, requiring only an OpenAI API key and Node.js 18
  • Example use: Can incorporate project-wide coding standards through template includes, maintaining consistency across codebases
Stars: 914 ⭐
Source: https://github.com/ferrislucas/promptr

ChatGDB: LLM-Powered GDB/LLDB Debugging Assistant

A Python-based tool that integrates ChatGPT into GDB/LLDB debuggers, enabling natural language interactions for debugging commands and explanations.

Key Features:
  • Translates natural language queries into proper GDB/LLDB commands and executes them automatically
  • Provides command explanations and debugging guidance through the 'explain' command
  • Works with both GDB and LLDB debuggers for compiled languages
  • Integrates seamlessly with debugger environments through .gdbinit or .lldbinit configuration files
  • Supports Python 3.3+ and requires minimal setup through pip installation
Stars: 903 ⭐
Source: https://github.com/pgosar/ChatGDB

Hugging-Chat-API: Python Interface for HuggingChat

An unofficial Python API for interacting with HuggingChat, enabling programmatic access to chat functionalities, web search, and image generation capabilities.

Key Features:
  • Integrates with HuggingChat's core features including standard chat, context memory, and web search functionality
  • Offers both streaming and non-streaming response options for chat interactions
  • Provides conversation management capabilities, allowing creation, switching, and deletion of chat sessions
  • Includes a command-line interface with comprehensive chat management commands
  • Enables cookie-based authentication and session management
  • For example: Create an assistant for image generation or use web search during chat interactions
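
A hedged sketch of the cookie-based login-and-chat flow as described in the project README; the `Login`/`ChatBot` names and cookie handling should be verified against the current release:

```python
from hugchat import hugchat
from hugchat.login import Login

EMAIL, PASSWORD = "you@example.com", "your-password"  # Hugging Face credentials (placeholders)

# Log in once and persist cookies so subsequent runs can reuse the session.
sign = Login(EMAIL, PASSWORD)
cookies = sign.login(cookie_dir_path="./cookies/", save_cookies=True)

chatbot = hugchat.ChatBot(cookies=cookies.get_dict())
print(chatbot.chat("Summarize what HuggingChat is in one sentence."))
```
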
Stars: 894 ⭐
Source: https://github.com/Soulter/hugging-chat-api

ReverserAI: Local LLM-Powered Reverse Engineering Assistant

A Binary Ninja plugin that assists reverse engineering tasks through locally-hosted LLMs, focusing on automatic function name suggestions from decompiler output.

Key Features:
  • Operates entirely offline on local CPU/GPU to ensure data privacy and security
  • Suggests semantically meaningful function names by analyzing decompiler output and incorporating static analysis insights
  • Runs efficiently on consumer hardware with 16GB RAM and 12 CPU threads, taking 20-30 seconds per query (2-5 seconds with GPU optimization)
  • Integrates seamlessly with Binary Ninja while maintaining an architecture that can extend to IDA and Ghidra
  • Enhances accuracy by combining static analysis techniques with LLM capabilities to provide richer context
  • Supports configuration options to optimize performance based on available hardware resources and analysis needs
  • Example use: Context-aware function renaming leverages external API functions and strings to provide more accurate suggestions
Stars: 870 ⭐
Source: https://github.com/mrphrazer/reverser_ai

llama.vim: Local LLM-Powered Text Completion for Vim

A Vim plugin that provides local LLM-assisted code completion with suggestions appearing as you type, leveraging llama.cpp for efficient text generation.

Key Features:
  • Automatic text suggestions appear during cursor movement in Insert mode, with manual toggle via Ctrl+F and acceptance using Tab or Shift+Tab
  • Smart context management with a ring buffer system that includes content from open files, edited text, and yanked content
  • Efficient performance on low-end hardware through smart context reuse and configurable generation parameters
  • Supports multiple installation methods including vim-plug, Vundle, and lazy.nvim
  • Compatible with different VRAM configurations, offering recommended settings for 16GB+, 8-16GB, and sub-8GB systems
  • Example use: The green text displays performance stats showing context size, buffer information, and generation time while suggesting completions
  • Example use: Global context is maintained across different files when working in large codebases, as demonstrated with Qwen2.5-Coder 7B model on M2 Ultra
Stars: 870 ⭐
Source: https://github.com/ggml-org/llama.vim

EVAL: Elastic Versatile Agent for Autonomous Task Execution

EVAL is an autonomous agent system that executes user requests by searching, coding, running, and testing solutions independently. It leverages LLMs to understand and generate multimodal content including text, images, and dataframes.

Key Features:
  • Handles multimodal conversations by understanding and generating various data formats including text, images, and dataframes
  • Creates and manages its own tools by writing, modifying, executing and testing code autonomously
  • Includes built-in tools for terminal operations, code editing, web searches, and image manipulation using Stable Diffusion
  • Serves blocking processes like web applications and maintains a self-managed GitHub account
  • Operates through a web GUI interface or API endpoints for both synchronous and asynchronous task execution
  • Example use: Creates full-fledged web applications with multiple files
  • Example use: Builds its own user interface
Stars: 870 ⭐
Source: https://github.com/corca-ai/EVAL

PySpark-AI: Natural Language Interface for Apache Spark

A tool that enables users to interact with Apache Spark using English instructions, converting natural language queries into PySpark DataFrame operations.

Key Features:
  • Transforms English instructions into PySpark DataFrame operations automatically, making data manipulation more accessible to non-technical users
  • Offers vector similarity search capability to improve transform query accuracy through word embeddings
  • Includes visualization capabilities through a plot API that generates charts based on natural language descriptions
  • Integrates with OpenAI's GPT-4 by default, with support for Azure OpenAI services for enhanced data privacy
  • Example use: "What are the best-selling and second best-selling products in every category?" transforms into a DataFrame showing top sales rankings
  • Example use: Generate pie charts showing market share distribution for top 5 auto brands with a simple natural language command
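
A hedged sketch following the usage pattern in the project README; the `SparkAI`, `activate`, and `df.ai.transform` names are assumptions to verify against the installed version, and an OpenAI API key is expected in the environment:

```python
from pyspark.sql import SparkSession
from pyspark_ai import SparkAI

spark = SparkSession.builder.getOrCreate()
spark_ai = SparkAI()   # defaults to GPT-4 through the OpenAI API
spark_ai.activate()    # attaches the .ai helpers to DataFrames

df = spark.createDataFrame(
    [("Widget", "Tools", 120), ("Gadget", "Tools", 95), ("Lamp", "Home", 80)],
    ["product", "category", "sales"],
)

# The English instruction is translated into DataFrame operations by the LLM.
df.ai.transform("best-selling product in every category").show()
```
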
Stars: 847 ⭐
Source: https://github.com/pyspark-ai/pyspark-ai

LLM for Unity: Unity Engine Integration for LLMs

A Unity package that enables integration of LLMs to create interactive characters with local processing capabilities and semantic search functionality using RAG.

Key Features:
  • Runs locally offline on multiple platforms (Windows, Linux, macOS, iOS, Android) with fast CPU/GPU inference
  • Creates interactive game characters using LLMs with simple integration - just one line of code needed for character responses
  • Implements RAG system for semantic search across data, enhancing characters' knowledge base
  • Includes model management system for downloading, loading and deploying LLMs in games
  • Enables saving and loading chat histories and model states for persistent character interactions
  • Offers cross-platform mobile support with options for downloading models at runtime
  • Provides structured output control through grammar-based restrictions and function calling capabilities
  • Example use: Create NPCs that can have dynamic conversations with players based on game context and knowledge base
  • Example use: Build a detective game where characters respond based on a knowledge base of clues and evidence
Stars: 846 ⭐
Source: https://github.com/undreamai/LLMUnity

llm.nvim: Code Completion Plugin for Neovim with LLM Integration

A Neovim plugin that provides code completion functionality through various LLM backends, similar to GitHub Copilot.

Key Features:
  • Ghost-text code completion that appears as you type, with configurable suggestion behavior and manual request options.
  • Supports multiple backend options including Hugging Face Inference API, Ollama, OpenAI, and Text Generation Inference.
  • Automatically manages context window size using tokenizers to ensure prompts fit within model limitations.
  • Configurable with various package managers (Packer, Lazy.nvim, Vim-plug) and offers extensive customization options for model parameters.
  • Uses llm-ls as backend server, which can be installed automatically or via Mason package manager.
Stars: 828 ⭐
Source: https://github.com/huggingface/llm.nvim

Chapyter: Natural Language to Python Code Generation in JupyterLab

A JupyterLab extension that integrates GPT-4 for translating natural language descriptions into executable Python code directly within the notebook environment.

Key Features:
  • Translates natural language into Python code and executes it automatically using the %%chat magic command
  • Incorporates coding history and execution outputs to generate context-aware code and visualizations
  • Enables real-time debugging and code editing within the familiar JupyterLab interface
  • Maintains privacy by utilizing OpenAI API, ensuring data isn't stored for training purposes
  • Provides transparent access to all prompts used in the library with customization options
  • Example use: %%chat -m gpt-4-0613 "List all the files in the folder" generates and executes Python code for file listing
  • Example use: With the --history flag, generates appropriate visualizations for datasets based on previous execution context
Stars: 824 ⭐
Source: https://github.com/chapyter/chapyter

vim-ai: AI-Enhanced Text and Code Generation for Vim/Neovim

A Vim/Neovim plugin that integrates OpenAI's API for code generation, text editing, and interactive chat capabilities directly within the editor.

Key Features:
  • Text and code generation with in-place editing through simple commands like :AI and :AIEdit
  • Interactive chat interface with ChatGPT through :AIChat command, with ability to save and restore conversations in .aichat files
  • Vision capabilities for image-to-text conversion and DALL-E image generation
  • Customizable roles system for reusable AI instructions and configurations
  • Installation requires only Vim/Neovim with Python3 support and an OpenAI API key
  • Compatible with OpenAI-compatible APIs and services like OpenRouter for using alternative LLMs
Stars: 805 ⭐
Source: https://github.com/madox2/vim-ai

ChatGPT-i18n: AI-Assisted Locale File Translation

A web application that streamlines the translation of locale files using LLM capabilities, designed to handle large content while maintaining JSON structure integrity.

Key Features:
  • Processes large JSON files by breaking them into manageable chunks for accurate translation
  • Provides a web-based editor interface for viewing and modifying translations
  • Enables simultaneous export of multiple locale files
  • Maintains JSON structure during translation, preventing common formatting issues
  • Available as a hosted service on Vercel or can be self-deployed with your OpenAI API key
Stars: 803 ⭐
Source: https://github.com/ObservedObserver/chatgpt-i18n

ht (Headless Terminal): Programmatic Terminal Interface Control

A command-line tool that provides a VT100-style terminal interface wrapper for other programs, enabling programmatic access through JSON over STDIN/STDOUT.

Key Features:
  • Creates a virtual terminal environment for running programs like bash, vim, or other CLI tools, making them accessible for programmatic interaction
  • Provides JSON-based API through STDIO and WebSocket interfaces for terminal control and monitoring
  • Offers built-in HTTP server with live terminal preview capabilities
  • Supports comprehensive key input simulation, including special keys and modifier combinations
  • Enables terminal resizing and snapshot capabilities for state monitoring
  • Particularly useful for letting LLM agents drive terminal-based interfaces the way a human user would
Stars: 790 ⭐
Source: https://github.com/andyk/ht

Cali: AI Agent for React Native Development

A terminal-based agent that simplifies React Native app development by providing LLM-powered assistance for common development tasks and commands.

Key Features:
  • Runs React Native commands and automates build processes for iOS and Android platforms
  • Manages connected Android and iOS devices and simulators without requiring manual command memorization
  • Handles dependency management for both npm packages and CocoaPods
  • Integrates with React Native Directory to search and list available libraries
  • Offers three deployment options: standalone terminal agent, Vercel AI SDK integration, or MCP server for Claude compatibility
Stars: 769 ⭐
Source: https://github.com/callstackincubator/cali

InfiniteGPT: Unlimited Text Input for OpenAI API

A Python script that splits text of any length into chunks and sends them to OpenAI's API, removing practical input size limits and the need for manual chunking.

Key Features:
  • Handles unlimited text input size for OpenAI API interactions
  • Eliminates manual text chunking and multiple prompts for large texts
  • Functions through a single Python script requiring only Python3 and OpenAI dependencies
  • Requires users to provide their own OpenAI API keys for operation
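
A minimal sketch of the underlying chunking idea (not the script's exact code), using the official OpenAI Python client; the chunk size and prompt are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set


def process_long_text(text: str, chunk_size: int = 8000,
                      model: str = "gpt-4o-mini") -> str:
    """Split the input into fixed-size chunks and process each one independently."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    outputs = []
    for chunk in chunks:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": f"Summarize this text:\n{chunk}"}],
        )
        outputs.append(resp.choices[0].message.content)
    return "\n".join(outputs)
```
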
Stars: 754 ⭐
Source: https://github.com/emmethalm/infiniteGPT

pg_vectorize: Vector Search and RAG Extension for PostgreSQL

A PostgreSQL extension that automates text-to-embedding transformation and provides RAG capabilities through integration with LLMs and vector similarity search.

Key Features:
  • Automates creation and maintenance of vector embeddings from text data in PostgreSQL tables
  • Provides high-level API with just two main functions - one to initialize embeddings and another to search
  • Supports both vector search and RAG workflows with background workers for automated embedding updates
  • Creates PostgreSQL triggers to keep embeddings synchronized with source data in real-time
  • Integrates with OpenAI embeddings/text generation and Hugging Face Sentence-Transformers
  • Example use: Semantic search in a products database by converting product names and descriptions into searchable vectors
  • Example use: Creating a RAG system to answer natural language questions about product data using OpenAI or Ollama models
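
A hedged sketch of driving the extension from Python with psycopg; the `vectorize.table`/`vectorize.search` signatures follow the project README and should be checked against the installed version, and the table and column names are illustrative:

```python
import psycopg

conn = psycopg.connect(
    "postgresql://postgres:postgres@localhost:5432/postgres", autocommit=True
)

with conn.cursor() as cur:
    # Initialize embeddings for the products table (runs once; a background
    # worker keeps them in sync afterwards).
    cur.execute(
        """
        SELECT vectorize.table(
            job_name    => 'product_search',
            "table"     => 'products',
            primary_key => 'product_id',
            columns     => ARRAY['product_name', 'description'],
            transformer => 'sentence-transformers/all-MiniLM-L6-v2'
        );
        """
    )
    # Semantic search over the embedded columns.
    cur.execute(
        """
        SELECT * FROM vectorize.search(
            job_name       => 'product_search',
            query          => 'mobile electronic devices',
            return_columns => ARRAY['product_id', 'product_name'],
            num_results    => 3
        );
        """
    )
    print(cur.fetchall())
```
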
Stars: 754 ⭐
Source: https://github.com/tembo-io/pg_vectorize

ChatGPT CLI: Terminal-Based Interface for OpenAI's GPT Models

A command-line interface that enables direct interaction with GPT-3.5 and GPT-4 through the terminal, offering both chat and pipeline functionalities.

Key Features:
  • Operates in both interactive chat mode and pipeline mode for text processing
  • Configurable prompts system with ability to switch between predefined or custom prompts
  • Comprehensive keyboard shortcuts for navigation, text editing, and conversation management
  • Flexible configuration options including model parameters, API endpoints, and conversation history management
  • Supports Azure OpenAI service integration with customizable model mappings
  • Stores conversation history locally and allows context length customization
Stars: 749 ⭐
Source: https://github.com/j178/chatgpt

OpenAI Unity Package: Integration of OpenAI API in Unity

A Unity package enabling direct integration of OpenAI's API functionalities within the Unity game engine, supporting chat completions and image generation.

Key Features:
  • Seamless integration through Unity's Package Manager, compatible with Unity 2019 and later versions
  • Secure API key management through local storage system to protect credentials
  • Supports both regular and streaming responses for chat completions
  • Includes ready-to-use sample scenes for ChatGPT-like chat and DALL-E image generation
  • Cross-platform compatibility with primary support for Windows and varying support levels for other platforms
  • Example use: Create chat completions with GPT-3.5-turbo by sending messages and receiving responses through async methods
  • Example use: Generate streamed responses for longer content like stories, with real-time text updates
Stars: 747 ⭐
Source: https://github.com/srcnalt/OpenAI-Unity

Word GPT Plus: LLM Integration for Microsoft Word

A Microsoft Word add-in that integrates various LLM models to assist with text generation, translation, summarization, and document creation.

Key Features:
  • Seamlessly integrates with Microsoft Word 2016/2019/2021 and Microsoft 365, working directly within document interface
  • Includes built-in prompts for common tasks like translation, summarization, polishing, and academic writing
  • Supports multiple LLM platforms including OpenAI, Azure OpenAI, Ollama, Google Gemini Pro, and Groq
  • Allows custom prompts that can be saved for future use
  • Offers flexible deployment options through hosted service, Docker container, or self-hosting
  • Provides customizable settings for temperature and max tokens to control output
  • Compatible with multiple languages and includes proxy support for global accessibility
Stars: 743 ⭐
Source: https://github.com/Kuingsmile/word-GPT-Plus

GPT-JSON: Type-Safe JSON Output from LLM Responses

A Python library that provides structured, type-safe JSON responses from LLMs using Pydantic schemas and templated prompts.

Key Features:
  • Uses Pydantic schemas for type validation and casting, ensuring consistent output format from LLM responses
  • Handles prompt templating with dynamic variables and custom field descriptions
  • Implements automatic JSON repair for common issues like truncation and boolean value formatting
  • Supports function calling with GPT models, including parameter validation and error handling
  • Integrates with both single-object and list responses through type hinting
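
A hedged sketch following the pattern in the project README; the `GPTJSON`, `GPTMessage`, and `GPTMessageRole` names are assumptions to verify against the installed version, and an OpenAI API key is required:

```python
import asyncio
import os

from pydantic import BaseModel
from gpt_json import GPTJSON, GPTMessage, GPTMessageRole


class SentimentSchema(BaseModel):
    sentiment: str


SYSTEM_PROMPT = (
    "Analyze the sentiment of the text.\n\n"
    "Respond with the following JSON schema:\n\n{json_schema}"
)


async def main() -> None:
    gpt_json = GPTJSON[SentimentSchema](os.environ["OPENAI_API_KEY"])
    payload = await gpt_json.run(
        messages=[
            GPTMessage(role=GPTMessageRole.SYSTEM, content=SYSTEM_PROMPT),
            GPTMessage(role=GPTMessageRole.USER, content="Text: I love this product."),
        ]
    )
    print(payload.response.sentiment)  # parsed and validated against SentimentSchema


asyncio.run(main())
```
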
Stars: 735 ⭐
Source: https://github.com/piercefreeman/gpt-json

dingllm.nvim: Lightweight LLM Integration for Neovim

A minimalist Neovim plugin that integrates LLM capabilities using Plenary events instead of timed async loops, offering improved stability over similar solutions.

Key Features:
  • Uses Plenary events system to prevent editor deadlocks that can occur with async loops
  • Supports multiple LLM providers including Groq, Lambda Labs, Anthropic, and OpenRouter
  • Provides both code replacement and help functionality through simple keybindings
  • Implements stream-based response handling for real-time LLM interactions within the editor
  • Configurable system prompts for code modification and assistance modes
Stars: 734 ⭐
Source: https://github.com/yacineMTB/dingllm.nvim

Magic CLI: Command Line Enhancement with LLMs

A command line utility that enhances terminal usage through LLM-powered command suggestions, history search, and task automation.

Key Features:
  • Suggests commands with precise arguments and syntax, particularly useful for complex tools like ffmpeg or kubectl
  • Performs semantic search across shell history to find relevant previous commands
  • Generates commands based on natural language task descriptions, with interactive prompting for additional context
  • Configurable command execution modes including clipboard copying and direct shell execution
  • Stores configurations and sensitive data in the user's home directory, with planned secure key storage implementation
  • Example use: "magic-cli suggest 'Resize test_image.png to 300x300 with ffmpeg'" helps generate the exact ffmpeg command with correct parameters
  • Example use: "magic-cli search 'zellij attach'" finds semantically similar commands from your shell history
Stars: 734 ⭐
Source: https://github.com/guywaldman/magic-cli

Yai: LLM-Powered Terminal Assistant

A command-line tool that uses ChatGPT to interpret natural language requests and convert them into executable terminal commands.

Key Features:
  • Translates natural language descriptions into executable terminal commands, eliminating the need to memorize complex command syntax
  • Automatically detects and incorporates system information including OS, distribution, username, shell, and home directory into its responses
  • Provides general knowledge responses to queries directly in the terminal
  • Simple installation through a single curl command and straightforward configuration using OpenAI API key
  • Enables customization through user preferences to enhance the interaction experience
Stars: 732 ⭐
Source: https://github.com/ekkinox/yai

Code2Prompt: Codebase-to-LLM Prompt Generator

A command-line tool that converts codebases into well-structured prompts for LLMs, enabling comprehensive code analysis, documentation, and improvement tasks.

Key Features:
  • Generates structured Markdown prompts that capture the project's context and hierarchy through intelligent source tree creation
  • Customizable output using Jinja2 templates with support for dynamic variables and template imports
  • Smart filtering capabilities using glob patterns and .gitignore integration to control which files are included
  • Features token counting and optimization to ensure compatibility with LLM token limits
  • Supports custom syntax highlighting mappings for different file extensions
  • Interactive mode allows visual selection of files to process
  • Includes price estimation across various LLM providers based on token counts
  • Example use: Generate prompts for code review by running `code2prompt --path /path/to/project --filter "*.js,*.ts" --exclude "node_modules/*,dist/*" --template code_review.j2`
  • Example use: Create documentation with `code2prompt --path /path/to/library --output library_docs.md --suppress-comments --line-number --filter "*.py"`
Stars: 730 ⭐
Source: https://github.com/raphaelmansuy/code2prompt

Org-AI: LLM Integration for Emacs Org-Mode

A minor mode for Emacs org-mode that enables interaction with LLMs and image generation models through the OpenAI API and Stable Diffusion.

Key Features:
  • Integrates directly into org-mode buffers using special blocks for text generation and conversations
  • Supports speech input/output capabilities for voice interactions
  • Generates images and image variations using DALL-E or Stable Diffusion through text prompts
  • Provides global commands for prompting using selected text or multiple files outside org-mode
  • Includes syntax highlighting, auto-fill paragraphs, and customizable block options for output formatting
  • Offers noweb support for referencing other parts of the document within prompts
Stars: 728 ⭐
Source: https://github.com/rksm/org-ai

Emacs Copilot: Code Completion with Local LLMs in Emacs

A lightweight implementation for code completion within Emacs that uses local LLMs to generate code suggestions, featuring file-based context memory and streaming completions.

Key Features:
  • Generates code completions based on file context using only 100 lines of LISP code
  • Maintains editing history on a per-file basis, allowing the LLM to recall previous code context
  • Streams tokens into the buffer in real-time with interrupt capability via C-g
  • Automatically removes deleted code from the LLM's context when matching text is removed from the buffer
  • Language-agnostic functionality determined by file extensions
  • Works with various LLM sizes to accommodate different hardware capabilities, from powerful GPUs to CPU-only systems
  • Example use: After typing "bool is_prime(int x) {" and pressing C-c C-k, the LLM generates the complete function implementation
  • Example use: When writing a main() function after defining is_prime(), the LLM remembers the context and generates code to print prime numbers
Stars: 727 ⭐
Source: https://github.com/jart/emacs-copilot

AI Code Reviewer: Automated Pull Request Reviews with GPT-4

A GitHub Action that utilizes OpenAI's GPT-4 to analyze pull requests and provide code improvement suggestions, streamlining the code review process.

Key Features:
  • Automatically analyzes pull request diffs and generates review comments with suggestions
  • Integrates seamlessly with GitHub workflows through a simple YAML configuration
  • Supports file exclusion patterns to customize review scope
  • Requires only OpenAI API key and basic GitHub setup for implementation
  • Reviews code using GPT-4 capabilities through the OpenAI API
Stars: 726 ⭐
Source: https://github.com/aidar-freeed/ai-codereviewer

srgn: Code Surgery with Syntax Understanding

A grep-like tool that understands source code syntax and enables both search and manipulation capabilities, with support for regular expressions and language-aware operations.

Key Features:
  • Provides both regex-based and language grammar-aware scoping to precisely target code elements
  • Supports multiple manipulation actions including replacement, deletion, case changes, and character squeezing
  • Processes files recursively with parallel execution, handling over 140,000 matches in large codebases within seconds
  • Includes prepared queries for common operations in supported languages (Python, Go, Rust, TypeScript, C#, C, HCL)
  • Functions as both a CLI tool and Rust library with comprehensive API documentation
  • For example, you can transform Python prints to logging: `cat money.py | srgn --python 'function-calls' '^print$' 'logging.info'`
  • For example, you can find all unsafe Rust code: `cat unsafe.rs | srgn --rs 'unsafe'`
Stars: 712 ⭐
Source: https://github.com/alexpovel/srgn

Cursor Enhancement: Advanced Agentic Capabilities for Cursor/Windsurf IDE

A configuration package that augments the Cursor or Windsurf IDE with advanced agentic capabilities similar to Devin's, including automated planning, tool usage, and self-evolution.

Key Features:
  • Automated planning and self-evolution capabilities enable the system to plan actions and learn from mistakes
  • Extended toolset integration includes web scraping via Playwright, DuckDuckGo search, and text/image analysis
  • Experimental multi-agent system splits tasks between a high-level planner (o1) and executor (Claude/GPT) for improved solution quality
  • Self-learning mechanism updates .cursorrules based on corrections, accumulating project-specific knowledge over time
  • Quick setup process requires only copying config files into the project folder and setting up API keys
Stars: 680 ⭐
Source: https://github.com/grapeot/devin.cursorrules

ChatGPT Plugins Collection: Community Plugin Repository

A collaborative repository hosting community-created plugins for ChatGPT across multiple programming languages including Python, Julia, and JavaScript.

Key Features:
  • Hosts various plugins including note-taking, todo lists, and specialized tools like animal tracking and number calculations
  • Supports multiple programming languages and frameworks including Python (Quart, FastAPI), Julia (HTTP.jl), and JavaScript (Express.js)
  • Open for community contributions through a structured submission process
  • Each plugin maintains user-specific data association for personalized functionality
  • Example use: Create and manage personal notes through NotesGPT plugin
  • Example use: Manage todo items across different programming language implementations
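For orientation, a generic FastAPI skeleton for a todo-style plugin might look like the sketch below. It is a hypothetical illustration of the per-user data pattern described above, not one of the repository's actual plugins; a real ChatGPT plugin additionally serves an ai-plugin.json manifest and an OpenAPI spec.

```python
from collections import defaultdict

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# In-memory store keyed by username, mirroring the per-user data association
# described above; a real plugin would persist this.
TODOS: dict[str, list[str]] = defaultdict(list)

class Todo(BaseModel):
    todo: str

@app.post("/todos/{username}")
def add_todo(username: str, item: Todo) -> dict:
    TODOS[username].append(item.todo)
    return {"status": "OK"}

@app.get("/todos/{username}")
def list_todos(username: str) -> list[str]:
    return TODOS[username]
```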
Stars: 676 ⭐
Source: https://github.com/logankilpatrick/ChatGPT-Plugins-Collection

Raycast-G4F: Free LLM Access for Raycast on macOS

A Raycast extension enabling access to various LLMs through multiple providers, offering features like chat, web search, and file handling, all at no cost.

Key Features:
  • Streams responses in real-time with 18 different commands for various use cases
  • Performs web searches through DuckDuckGo to provide up-to-date information when needed
  • Supports file uploads including image, video, audio, and text files (provider-dependent)
  • Includes image generation capabilities using state-of-the-art models
  • Allows creation of custom commands with personalized prompts
  • Features automatic chat naming and persistent storage options
  • Provides code interpreter functionality for executing Python code locally (beta feature)
  • Integrates with multiple providers including Nexra, DeepInfra, Google Gemini, and custom OpenAI-compatible APIs
Stars: 673 ⭐
Source: https://github.com/XInTheDark/raycast-g4f

GPT-CLI: Terminal Interface for LLM Chat

A command-line tool for interacting with various LLM providers through a terminal interface, offering customizable assistants and configuration options.

Key Features:
  • Terminal-based interaction with extensive keyboard shortcuts and multi-line input support
  • Usage tracking displaying token count and price information
  • Customizable assistants with configurable parameters like temperature and top_p values
  • Markdown formatting support with toggle options
  • YAML-based configuration system allowing for custom assistants and predefined message contexts
  • Command execution capability, particularly useful with the bash assistant for shell commands
  • File inclusion feature for adding external context to assistants using !include directive
  • Flexible API configuration supporting custom endpoints and multiple providers like OpenAI, Anthropic, Google Gemini, and Cohere
  • Multi-assistant support with easy switching between general, development, and custom-defined assistants
Stars: 643 ⭐
Source: https://github.com/kharvd/gpt-cli

Lumen: Git Workflow Enhancement with LLMs

A command-line tool that streamlines git workflows by automating commit message generation and providing change explanations, requiring no API key by default.

Key Features:
  • Generates conventional commit messages based on staged changes, with support for additional context input
  • Analyzes and explains changes across commits, branches, or working directories with specific querying capabilities
  • Includes interactive commit search using fuzzy finder functionality
  • Works immediately without configuration using Phind as the default provider
  • Supports markdown formatting for readable explanations and diffs
  • Configurable through multiple methods including CLI flags, config files, and environment variables
Stars: 640 ⭐
Source: https://github.com/jnsahaj/lumen

Ellama: LLM Assistant for Emacs

An Emacs package for interacting with LLMs that enables tasks like translation, code review, summarization, and grammar enhancement through the Emacs interface with native streaming output support.

Key Features:
  • Functions as a complete chat interface inside Emacs, allowing continuous conversations with LLMs
  • Provides specialized commands for code-related tasks including code review, completion, improvement, and commit message generation
  • Offers text manipulation capabilities like translation, summarization, grammar improvement, and word definitions
  • Implements context management through files, buffers, or selected regions to provide more relevant responses
  • Features session management allowing users to save, load, rename, and switch between different chat sessions
  • Installation requires only a single package-install command in Emacs, with default configuration using Ollama and the Zephyr model
  • Solves complex reasoning problems using the Abstraction of Thought technique, improving results even with smaller models
Stars: 637 ⭐
Source: https://github.com/s-kostyaev/ellama

Automata: Self-Programming AI System

A programming system that combines LLMs with vector databases to document, search, and write code, aiming to evolve into a fully autonomous, self-programming AI through continued self-learning and code generation.

Key Features:
  • Integrates LLMs with vector databases to form a comprehensive code understanding and generation system
  • Uses SCIP indices to create code graphs that relate symbols by dependencies across codebases
  • Generates and maintains code embeddings and documentation for improved code understanding and search capabilities
  • Provides tools for autonomous code generation and refinement through sophisticated agent architecture
  • Supports both local installation through Poetry and containerized deployment via Docker
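The embedding-based search mentioned above reduces to embedding code snippets and ranking them against a query. The sketch below shows that general idea using the OpenAI embeddings API; it is not taken from Automata's codebase (its SCIP-based indexing and agent layers are considerably more involved), and the snippet corpus is invented for illustration.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical snippets that an indexer might extract from a codebase
snippets = {
    "parse_config": "def parse_config(path): load YAML configuration into a dict",
    "run_agent": "def run_agent(task): plan and execute a coding task with an LLM",
    "build_graph": "def build_graph(index): relate symbols by dependencies from a SCIP index",
}

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

corpus = embed(list(snippets.values()))
query = embed(["where is the YAML configuration loaded?"])[0]

# Cosine similarity between the query and every snippet; the best match wins
scores = corpus @ query / (np.linalg.norm(corpus, axis=1) * np.linalg.norm(query))
print("Most relevant symbol:", list(snippets)[int(np.argmax(scores))])
```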
Stars: 629 ⭐
Source: https://github.com/emrgnt-cmplxty/automata

an-codeAI: Code Generation from Visual and Text Inputs

A tool that generates web and native code by processing screenshots, drawings, and text inputs using LLMs and computer vision technology.

Key Features:
  • Generates code for multiple frameworks including React, Vue, and React Native, with support for Tailwind CSS and shadcn/ui components
  • Creates code from visual inputs like screenshots and whiteboard drawings using GPT-4 Vision
  • Available through web interface or local installation with customizable API settings
  • Integrates with multiple providers including OpenAI's GPT-4 Vision and Google's Gemini
  • Example use: Converting whiteboard sketches into functional React components with shadcn/ui styling, as demonstrated in the provided comparison images
Stars: 620 ⭐
Source: https://github.com/sparrow-js/an-codeAI

ChatGPT CLI: Command-line Interface for LLM Interactions

A command-line tool for streaming interaction with multiple LLM providers, offering extensive configuration options and supporting OpenAI ChatGPT, Azure OpenAI Service, Perplexity AI, and Llama.

Key Features:
  • Supports multiple interaction modes including streaming, query, and interactive with conversational capabilities
  • Thread-based context management with sliding window history for preserving relevant conversation context while staying within token limits
  • Custom context input from any source including files, standard input, or program output
  • Built-in support for image processing through uploads or URLs, with direct piping capability
  • Advanced configuration system with layered settings through default values, config.yaml file, and environment variables
  • Prompt file support for providing detailed context or instructions for conversations
Stars: 618 ⭐
Source: https://github.com/kardolus/chatgpt-cli

Autopilot: LLM-Powered Code Implementation Assistant

A development tool that reads codebases, maintains context, and implements requested coding tasks using GPT models.

Key Features:
  • Generates and maintains a metadata database of codebase files to make informed decisions about which files need modification
  • Processes existing files to understand context and implement requested changes across single or multiple files
  • Executes parallel agent calls where possible and provides interactive mode with retry, continue, and abort options
  • Handles references to specific files, functions, and business concepts within the codebase
  • Available as a GitHub app for automatic issue resolution and pull request handling through codeautopilot.com
  • Operates locally through Node.js with customizable file extension and ignore list settings
  • Example use: Create new files based on existing ones and update multiple existing files simultaneously while maintaining project context
Stars: 618 ⭐
Source: https://github.com/fjrdomingues/autopilot

Flexpilot: Open-Source Alternative to GitHub Copilot for VS Code

A VS Code extension providing LLM-powered development features with the flexibility to use various AI providers through native integration.

Key Features:
  • Native VS Code integration offering code completions, inline suggestions, and contextual assistance
  • Multiple chat interfaces including panel chat, inline chat, quick chat, and voice chat for different interaction styles
  • Smart Variables feature references elements from code and editor data for more relevant assistance
  • Generates commit messages and PR descriptions automatically based on code changes
  • Token usage tracking provides transparency on resource consumption
  • Installation through VS Code Marketplace with simple configuration of preferred providers
Stars: 614 ⭐
Source: https://github.com/flexpilot-ai/vscode-extension

HolmesGPT: AI-Powered Incident Response and Cloud Alert Analysis

A tool that helps on-call engineers investigate and resolve cloud alerts faster by automatically collecting relevant observability data and providing AI-driven analysis.

Key Features:
  • Automatically gathers observability data from multiple sources like Prometheus, PagerDuty, OpsGenie, and Jira
  • Provides secure, read-only data access while respecting RBAC permissions
  • Integrates custom runbooks and knowledge bases to automate investigations according to team practices
  • Available through multiple interfaces including CLI, Slack, UI, and as a K9s plugin
  • Extensions framework allows adding custom data sources and investigation tools
  • Example use: Analyze unhealthy Kubernetes pods with the command "holmes ask 'what pods are in crashloopbackoff in my cluster and why?'"
  • Example use: Investigate OpsGenie alerts by running "holmes investigate opsgenie" with appropriate API keys to get automated analysis and recommendations
Stars: 612 ⭐
Source: https://github.com/robusta-dev/holmesgpt

Rubberduck: AI Programming Assistant for VS Code

A VS Code extension that provides LLM-powered coding assistance directly in the editor's sidebar, enabling code generation, editing, explanation, testing, and bug detection.

Key Features:
  • Interactive chat interface that understands code context and editor selections
  • Generates new code blocks and modifies existing code based on natural language instructions
  • Explains selected code sections and helps diagnose compiler/linter errors
  • Creates test cases automatically for selected code
  • Identifies potential bugs and suggests fixes
  • Supports custom conversation templates for personalized interactions
Stars: 612 ⭐
Source: https://github.com/rubberduck-ai/rubberduck-vscode

Auto Playwright: LLM-Powered Playwright Test Automation

A Node.js library that enables automated testing using natural language commands processed by LLMs to control Playwright browser automation.

Key Features:
  • Executes browser automation tasks through simple plain-text prompts, converting them into Playwright commands
  • Performs three types of operations: actions (clicking, typing), queries (extracting data), and assertions (validating states)
  • Uses HTML sanitization to optimize LLM interactions and reduce API costs
  • Integrates with OpenAI and Azure OpenAI APIs for processing natural language commands
  • Installation requires only npm installation and setting up an OpenAI API key
  • Example use: const headerText = await auto("get the header text", { page, test })
  • Example use: await auto(`Type "${headerText}" in the search box`, { page, test })
Stars: 604 ⭐
Source: https://github.com/lucgagan/auto-playwright

GPT-Macro: LLM-Powered Rust Code Generation at Compile-Time

A Rust procedural macro system that uses ChatGPT to generate code during compilation, enabling automatic implementation of functions and test cases based on natural language prompts.

Key Features:
  • Uses the auto_impl!{} macro to generate function implementations from text prompts and incomplete code structures
  • Provides an auto_test attribute macro for automatic test case generation
  • Integrates with the Rust compilation process to expand macros before compilation
  • Requires only an OpenAI API key to function
Stars: 599 ⭐
Source: https://github.com/retrage/gpt-macro

IPython ChatGPT Extension: ChatGPT Integration for Jupyter and IPython

A standalone extension enabling direct interaction with ChatGPT from Jupyter Notebooks or IPython Shell, requiring no external dependencies.

Key Features:
  • Integrates seamlessly with Jupyter Notebooks and the IPython shell through the %%chat cell magic (see the notebook sketch below)
  • Maintains conversation context across multiple queries, with option to reset using --reset-conversation flag
  • Configurable system message to define the assistant's role and behavior
  • Supports global configuration settings through %chat_config command for model selection and default parameters
  • Installation requires only pip and an OpenAI API key
  • Example use: Set global pandas display options by asking "How can I avoid pandas using scientific notation in outputs, and do it globally?"
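A typical notebook session might look like the sketch below, where each snippet is its own cell (cell magics such as %%chat must be the first line of their cell). The module name and flags follow the project's documentation and may differ between versions.

```python
# Cell 1: load the extension after `pip install ipython-gpt`
%load_ext ipython_gpt
```

```python
%%chat
How can I avoid pandas using scientific notation in outputs, and do it globally?
```

```python
%%chat --reset-conversation
Now explain the difference between loc and iloc in pandas, briefly.
```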
Stars: 595 ⭐
Source: https://github.com/santiagobasulto/ipython-gpt

Seed by Snaplet: Automated Database Seeding for Development and Testing

Generate production-like dummy data automatically based on database schema, with support for relationships and type-safe definitions.

Key Features:
  • Automatically determines and generates appropriate values for database fields, including built-in data like country lists and currency codes
  • Creates relational entities without manual ID tracking or explicit relationship definitions
  • Provides TypeScript client with type safety and rich documentation for defining data values
  • Uses Copycat for deterministic data generation, ensuring consistent outputs for testing and development
  • Integrates with LLMs to generate realistic text-based entries, storing examples in a modifiable JSON file
  • Example use: Create blog posts with automated relationships: seed.posts([{title: "Why you need Seed", author: {email: "snappy@snaplet.dev"}, comments: (x) => x(3)}])
Stars: 584 ⭐
Source: https://github.com/supabase-community/seed

CodeShell VSCode: LLM-Powered Coding Assistant Extension

A VSCode extension that integrates CodeShell LLM to provide intelligent coding assistance across multiple programming languages including Python, Java, C++/C, JavaScript, and Go.

Key Features:
  • Offers code completion with both automatic triggering (1-3 second delay) and manual activation via keyboard shortcuts (Alt+\ or option+\)
  • Provides code assistance features including code explanation, optimization, cleanup, comment generation, and unit test creation
  • Performs code analysis for performance and security issues
  • Includes an interactive chat interface supporting multi-turn conversations with history tracking and the ability to edit/reask questions
  • Compatible with different deployment options: runs on CPU via llama.cpp or on GPU via Text Generation Inference (TGI)
  • Requires Node v18+, VSCode 1.68.1+, and a running CodeShell model service
Stars: 577 ⭐
Source: https://github.com/WisdomShell/codeshell-vscode

Prompts Royale: Competitive Prompt Engineering Platform

A platform for optimizing and testing prompt engineering through competitive evaluation, allowing users to generate, test, and rank different prompts against test cases to find the most effective version.

Key Features:
  • Creates multiple prompt candidates from user descriptions and test scenarios, while allowing manual prompt input
  • Generates test cases automatically to evaluate prompt effectiveness
  • Uses Monte Carlo matchmaking and ELO rating system to efficiently rank prompts based on their performance
  • Stores all data locally with browser-based API requests to LLMs for enhanced security
  • Ranks prompts through a statistical approach in which they battle each other, with each prompt starting from a normal prior with a mean of 1000 and a standard deviation of 350 (see the Elo sketch below)
  • Available via web interface at promptsroyale.com or through local installation using Node v16+
  • Example use: Creating website headlines - input website type (e.g., "car dealership") and expected output ("Find the car of your dreams at the best price"), then generate and test multiple prompt variations to find the most effective one
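The ranking step reduces to repeated pairwise battles with an Elo-style update. The sketch below illustrates the idea and is not the project's implementation: the 1000/350 prior matches the description above, while the K-factor, the random matchmaking, and the stand-in judge are assumptions.

```python
import random

# Each prompt starts from the prior described above: mean 1000, standard deviation 350
ratings = {"prompt_a": 1000.0, "prompt_b": 1000.0, "prompt_c": 1000.0}
K = 32  # update step size; an assumed value, not one documented by the project

def expected(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(winner: str, loser: str) -> None:
    gain = K * (1 - expected(ratings[winner], ratings[loser]))
    ratings[winner] += gain
    ratings[loser] -= gain

def battle(a: str, b: str) -> str:
    # Stand-in for an LLM judge comparing the two prompts' outputs on a test case
    return random.choice([a, b])

for _ in range(200):
    a, b = random.sample(list(ratings), 2)  # simplified matchmaking; the tool picks more informative pairs
    winner = battle(a, b)
    update(winner, b if winner == a else a)

print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```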
Stars: 569 ⭐
Source: https://github.com/meistrari/prompts-royale

Code Llama for VSCode: Local Code Llama Integration with VSCode

A cross-platform solution that connects Code Llama with VSCode through the Continue extension, enabling local usage without API keys or external services.

Key Features:
  • Provides seamless integration between Code Llama and VSCode through a mock Llama.cpp API
  • Works across Windows, Linux, and other platforms where Meta's codellama can run
  • Requires only the Code Llama Instruct model and Continue VSCode extension to function
  • Setup involves running a Python script with Flask to create the connection between Continue and Code Llama
Stars: 558 ⭐
Source: https://github.com/xNul/code-llama-for-vscode

Oatmeal: Terminal UI Chat Interface for LLMs

A terminal-based chat application with fancy chat bubbles and slash commands, enabling conversations with LLMs through various backends while offering integration with editors like Neovim.

Key Features:
  • Integrates with multiple backends including OpenAI, Ollama, LangChain, Claude, and Gemini, while keeping local privacy options available
  • Persistent chat sessions allowing users to review past conversations or continue where they left off
  • Code syntax highlighting with customizable themes, supporting both built-in and custom TextMate themes
  • Editor integration capabilities, particularly with Neovim, allowing seamless code transfer between chat and editor
  • Comprehensive slash commands and hotkeys for efficient chat management and code handling
  • Session management tools to list, open, and delete previous conversations
Stars: 555 ⭐
Source: https://github.com/dustinblackman/oatmeal

RepoToText: GitHub Repository to Text File Converter

A web application that converts GitHub repositories into organized text files, enabling seamless interaction with repository content through LLMs. The tool can include documentation and allows selective file type conversion.

Key Features:
  • Converts entire GitHub repositories into a single structured text file, with files separated by distinct markers and file paths
  • Includes a FolderToText feature for converting local folders and files into the same formatted text output
  • Offers optional documentation URL integration, appending documentation content to the beginning of the output file
  • Built with Docker support for easy deployment and runs through a React frontend with Python Flask backend
  • Saves output files in a /data folder with user, repository, and timestamp information
  • Provides file type filtering to select specific file extensions for conversion
  • Example use: Creating a React frontend for an existing backend repository by feeding the repository content to an LLM for analysis and development guidance
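To make the output format concrete, here is an illustrative Python sketch of the folder-to-text idea. It is not the application's code, and the marker format is an assumption chosen to show how files are separated by distinct markers and paths.

```python
from pathlib import Path

INCLUDE = {".py", ".js", ".md"}  # rough analogue of the app's file-type filtering

def folder_to_text(root: str, out_file: str) -> None:
    root_path = Path(root)
    with open(out_file, "w", encoding="utf-8") as out:
        for path in sorted(root_path.rglob("*")):
            if path.is_file() and path.suffix in INCLUDE:
                rel = path.relative_to(root_path)
                # A distinct marker plus the file path lets an LLM tell files apart
                out.write(f"\n\n'''--- {rel} ---'''\n\n")
                out.write(path.read_text(encoding="utf-8", errors="replace"))

folder_to_text("my-project", "my-project.txt")
```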
Stars: 552 ⭐
Source: https://github.com/JeremiahPetersen/RepoToText

TestPilot: Automated Unit Test Generation for JavaScript/TypeScript

An experimental tool that generates unit tests for npm packages by leveraging LLMs to analyze function signatures, code bodies, and documentation examples.

Key Features:
  • Generates tests by providing the LLM with function information embedded in code comments and test skeletons, without requiring additional training data
  • Includes test refinement capabilities - if a generated test fails, the LLM receives feedback and can improve the test
  • Functions with any Codex-style LLM completion API by configuring endpoint URLs and authentication headers
  • Offers a benchmarking system to evaluate test generation across multiple packages, with result analysis via CodeQL queries
  • Supports reproduction mode to replay previous test generation runs without requiring LLM API access
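The generate-then-refine loop described above can be pictured roughly as in the Python sketch below. TestPilot itself is written in TypeScript, and `complete` and `run_tests` here are toy stand-ins for the configured completion API and the test runner, not the project's actual functions.

```python
def complete(prompt: str) -> str:
    # Stand-in for a Codex-style completion endpoint (not TestPilot's actual client)
    return "test('adds numbers', () => { expect(add(1, 2)).toBe(3); });"

def run_tests(test_code: str) -> tuple[bool, str]:
    # Stand-in for executing the generated test and capturing the failure output
    return "expect(" in test_code, "ReferenceError: add is not defined"

def generate_test(fn_info: str, max_refinements: int = 2) -> str:
    # Initial prompt: function information embedded in comments plus a test skeleton
    prompt = f"// Function under test:\n// {fn_info}\n// Complete this unit test:\n"
    test = complete(prompt)
    for _ in range(max_refinements):
        passed, failure = run_tests(test)
        if passed:
            break
        # Refinement: feed the failure output back to the model and ask for a fix
        prompt += f"{test}\n// The test failed with:\n// {failure}\n// Fix the test:\n"
        test = complete(prompt)
    return test

print(generate_test("function add(a, b) { return a + b }"))
```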
Stars: 535 ⭐
Source: https://github.com/githubnext/testpilot

Buildware: Automated Code PR Generation System

A tool that automatically generates pull requests by processing code issues using LLMs.

Key Features:
  • Transforms code issues into complete pull requests automatically using LLMs
  • Integrates with GitHub through Personal Access Tokens for repository access and PR management
  • Uses PostgreSQL database for data persistence, compatible with Supabase or Neon
  • Offers simple deployment through Vercel with a one-click setup option
  • Planned features include Linear integration, local codebase mode, and team support capabilities
Stars: 525 ⭐
Source: https://github.com/mckaywrigley/buildware-ai

BetterOCR: Enhanced Text Detection with Multiple OCR Engines and LLM

A Python library that improves OCR accuracy by combining results from multiple OCR engines (EasyOCR, Tesseract, and Pororo) with LLM processing to correct and reconstruct the output.

Key Features:
  • Integrates multiple OCR engines for better text detection accuracy, particularly useful for non-English languages
  • Uses LLM to improve and correct OCR results, reducing noise and enhancing accuracy
  • Supports custom context input for better handling of proper nouns and product names
  • Provides box detection functionality to identify and locate text regions in images
  • Installation is straightforward via pip install betterocr
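A minimal usage sketch, following the README's pattern at the time of writing; the function name and keyword options (language codes, context, per-engine and OpenAI parameters) should be checked against the current release.

```python
import betterocr

# Runs EasyOCR, Tesseract, and Pororo on the image, then has the LLM reconcile
# and correct the combined output. Keyword names here follow the README and may change.
text = betterocr.detect_text(
    "receipt.png",
    ["ko", "en"],  # language codes passed to the OCR engines
    context="store and product names on a Korean receipt",  # optional hint for proper nouns
    openai={"model": "gpt-4"},  # LLM used to merge and repair the raw OCR results
)
print(text)
```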
Stars: 514 ⭐
Source: https://github.com/junhoyeo/BetterOCR