GPU Acceleration
        
        Overview
        GPU acceleration dramatically improves Tag-AI's performance by using your computer's graphics card to speed
            up AI model calculations. This is particularly important for local LLaVA processing.
        Without GPU acceleration, image processing can be 5-10x slower, making large libraries impractical to process
            in a reasonable timeframe.
        Supported GPUs
        NVIDIA GPUs
        NVIDIA GPUs are supported through CUDA:
        
            - Minimum: GeForce GTX 1060 6GB or equivalent
 
            - Recommended: GeForce RTX 2060 or better
 
            - Optimal: RTX 3060 Ti / 3070 / 3080 / 4060 / 4070 / 4080 / 4090
 
            - CUDA Version: 11.8 or newer (12.4 recommended)
 
        
        AMD GPUs
        AMD GPUs are supported through ROCm:
        
            - Minimum: Radeon RX 5500 or equivalent
 
            - Recommended: Radeon RX 6600 or better
 
            - Optimal: RX 6700 XT / 6800 / 6800 XT / 6900 XT / 7600 / 7700 XT / 7800 XT / 7900 XTX
            
 
            - ROCm Version: 6.0 or newer
 
        
        Apple Silicon
        Apple Silicon Macs (M1/M2/M3) have built-in GPU acceleration through the Metal API:
        
            - Supported: All Apple Silicon Macs (M1, M1 Pro, M1 Max, M1 Ultra, M2, M2 Pro, M2 Max, M2
                Ultra, M3, M3 Pro, M3 Max, M3 Ultra)
 
            - No setup required: Metal integration is automatic
 
        
        
            Intel Macs with discrete GPUs have limited acceleration support. Performance will vary based on the
                specific GPU model.
         
        NVIDIA GPU Setup
        Automatic Setup
        The Tag-AI setup wizard automatically detects NVIDIA GPUs and installs CUDA:
        
            - The wizard detects your NVIDIA GPU
 
            - It downloads and launches the CUDA installer
 
            - Follow the on-screen prompts in the CUDA installer
 
            - After installation, the wizard will verify that CUDA is working (a sketch of such a check appears below)
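
        The wizard's check is internal to Tag-AI, but you can run a similar verification yourself in any Python
            environment that has PyTorch installed. This is an illustrative sketch, not Tag-AI's actual code:

            import torch

            # Report whether a CUDA-capable GPU and a working CUDA runtime were found.
            if torch.cuda.is_available():
                print("CUDA runtime:", torch.version.cuda)
                print("GPU:", torch.cuda.get_device_name(0))
            else:
                print("No CUDA GPU detected - Tag-AI would fall back to CPU-only mode")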
 
        
        Manual CUDA Installation
        If you need to install CUDA manually:
        
            - Download CUDA 12.4 from NVIDIA's website
 
            - Run the installer with administrator privileges
 
            - Follow the installation prompts
 
            - Restart your computer after installation
 
            - Tag-AI will automatically detect CUDA on the next launch (see the verification sketch below)
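
        Before relaunching Tag-AI, you can confirm that the toolkit and driver are visible from the command line. A
            minimal sketch, assuming nvcc and nvidia-smi are on your PATH:

            import shutil
            import subprocess

            # Confirm that the CUDA compiler (nvcc) and the NVIDIA driver utility
            # (nvidia-smi) are both reachable from the command line.
            for tool in ("nvcc", "nvidia-smi"):
                if shutil.which(tool) is None:
                    print(f"{tool}: not found - check the installation and your PATH")
                    continue
                result = subprocess.run([tool, "--version"], capture_output=True, text=True)
                output = (result.stdout or result.stderr).strip()
                print(f"{tool}:", output.splitlines()[0] if output else "no output")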
 
        
        NVIDIA Driver Updates
        For optimal performance, ensure your NVIDIA drivers are up to date:
        
            - Visit NVIDIA's driver page
 
            - Select your GPU model and operating system
 
            - Download and install the latest driver
 
        
        AMD GPU Setup
        Windows
        For AMD GPUs on Windows:
        
            - Ensure your AMD drivers are up to date using AMD Adrenalin software
 
            - The setup wizard will guide you through ROCm installation
 
            - Follow on-screen instructions for downloading and installing ROCm
 
            - Restart your computer after installation
 
        
        Linux
        For AMD GPUs on Linux:
        
            - The setup wizard will provide installation commands for ROCm
 
            - Follow distribution-specific instructions (Ubuntu, Fedora, etc.)
 
            - After installation, verify that ROCm is properly set up (see the sketch below the note)
 
        
        
            AMD GPU support on Linux requires ROCm-compatible hardware and may need additional configuration on some
                distributions.
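
        As a quick sanity check after installing ROCm, you can confirm that both the runtime and a ROCm build of
            PyTorch can see the GPU. This is an illustrative sketch, not part of Tag-AI itself:

            import shutil
            import subprocess

            import torch

            # rocminfo ships with ROCm and lists the GPU agents the runtime can see.
            if shutil.which("rocminfo"):
                subprocess.run(["rocminfo"], check=False)
            else:
                print("rocminfo not found - ROCm may not be installed or not on PATH")

            # ROCm builds of PyTorch expose the GPU through the regular torch.cuda API;
            # torch.version.hip is None in CPU-only or CUDA builds.
            print("HIP runtime:", torch.version.hip)
            print("GPU visible to PyTorch:", torch.cuda.is_available())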
         
        Apple Silicon
        Apple Silicon Macs don't require additional setup for GPU acceleration. The Metal API is used automatically:
        
        
            - No driver installation required
 
            - No configuration changes needed
 
            - Works out of the box on all Apple Silicon Macs
 
            - Performance scales with the GPU core count (better on Pro/Max/Ultra models)
 
        
        
            While Apple Silicon provides good performance, it's typically not as fast as high-end NVIDIA or AMD GPUs
                for AI workloads.
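
        If you want to confirm that the Metal path is actually available, PyTorch exposes it as the "MPS" backend. An
            illustrative check, not Tag-AI's internal code:

            import torch

            # PyTorch reaches the Apple Silicon GPU through the Metal Performance
            # Shaders (MPS) backend; it needs macOS 12.3+ and an arm64 build of PyTorch.
            if torch.backends.mps.is_available():
                print("Metal (MPS) acceleration is available")
            elif torch.backends.mps.is_built():
                print("PyTorch was built with MPS, but this Mac/macOS version cannot use it")
            else:
                print("This PyTorch build has no MPS support - CPU only")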
         
        Checking Acceleration Status
        GPU Detection at Launch
        When Tag-AI starts, it automatically detects and uses available GPU acceleration. You can verify this in
            several ways:
        Setup Information
        In the config.ini file:
        
            - gpu_type: Shows "NVIDIA", "AMD", or "Mac"
 
            - gpu_acceleration: Set to "true" if acceleration is enabled
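
        The same values can be read programmatically with Python's standard configparser module. The snippet below
            searches every section for the gpu_type key rather than assuming a particular layout, since the exact
            structure of config.ini isn't documented here:

            from configparser import ConfigParser

            # Read Tag-AI's config.ini and report the GPU-related keys,
            # whichever section they live in.
            config = ConfigParser()
            config.read("config.ini")  # adjust the path to wherever Tag-AI stores its config

            for section in config.sections():
                if config.has_option(section, "gpu_type"):
                    print("gpu_type =", config.get(section, "gpu_type"))
                    print("gpu_acceleration =", config.get(section, "gpu_acceleration", fallback="not set"))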
        
        Processing Speed
        The most obvious indicator is processing speed:
        
            - GPU Acceleration On: ~5-10 images per minute
 
            - CPU-Only Mode: ~0.5-1 images per minute
 
        
        
        Processing Speeds
        Approximate images processed per minute with the LLaVA model:
        
            
            | Hardware           | Images Per Minute |
            | CPU Only (8-core)  | 0.5 - 1           |
            | GTX 1060 6GB       | 3 - 5             |
            | RTX 3060           | 6 - 8             |
            | RTX 4070           | 8 - 12            |
            | Radeon RX 6600     | 4 - 6             |
            | Radeon RX 6800 XT  | 6 - 9             |
            | Apple M1           | 2 - 4             |
            | Apple M2 Pro       | 4 - 6             |
            | Apple M3 Max       | 6 - 8             |
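
        These rates translate directly into wall-clock time for a whole library, which is where the practical
            difference shows up. A rough estimate using mid-range figures from the table:

            # Back-of-the-envelope time estimates for a 10,000-image library,
            # using mid-range figures from the table above.
            library_size = 10_000  # images

            for hardware, images_per_minute in [("CPU only (8-core)", 0.75),
                                                ("RTX 3060", 7),
                                                ("RTX 4070", 10),
                                                ("Apple M2 Pro", 5)]:
                hours = library_size / images_per_minute / 60
                print(f"{hardware}: ~{hours:.0f} hours (~{hours / 24:.1f} days)")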
            
        
        Memory Requirements
        GPU memory usage varies based on model:
        
            - Standard LLaVA model: ~4-6 GB VRAM
 
            - Larger models (if configured): 8 GB+ VRAM
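
        To check whether your card has enough headroom before a long run, you can query VRAM directly. An illustrative
            PyTorch sketch for NVIDIA/AMD cards (Apple Silicon shares memory with the rest of the system):

            import torch

            # Report total and currently free VRAM on the first GPU (CUDA or ROCm builds).
            if torch.cuda.is_available():
                free_bytes, total_bytes = torch.cuda.mem_get_info(0)
                print(f"Total VRAM: {total_bytes / 1024**3:.1f} GB")
                print(f"Free VRAM:  {free_bytes / 1024**3:.1f} GB")
            else:
                print("No CUDA/ROCm GPU visible to PyTorch")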
 
        
        Optimization Features
        Tag-AI includes several features to optimize GPU performance:
        
            - Adaptive batch sizing: Adjusts how many images are processed at once to match your GPU's capabilities (see the sketch below)
 
            - GPU utilization monitoring: Allocates resources efficiently
 
            - Image pre-processing: Resizes images to optimal dimensions
 
            - Memory management: Releases resources between batches
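
        Tag-AI's internal implementation isn't documented here, but the idea behind adaptive batch sizing is simple:
            derive the batch size from the VRAM that is actually free. A simplified, hypothetical sketch of that
            approach:

            import torch

            def pick_batch_size(vram_per_image_gb: float = 0.5,
                                reserve_gb: float = 1.0,
                                max_batch: int = 16) -> int:
                """Choose how many images to process at once from the VRAM that is free.

                The per-image and reserve figures are illustrative placeholders,
                not Tag-AI's real numbers.
                """
                if not torch.cuda.is_available():
                    return 1  # CPU fallback: one image at a time
                free_bytes, _total = torch.cuda.mem_get_info(0)
                usable_gb = max(free_bytes / 1024**3 - reserve_gb, 0.0)
                return max(1, min(max_batch, int(usable_gb / vram_per_image_gb)))

            print("Suggested batch size:", pick_batch_size())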
 
        
        Troubleshooting
        NVIDIA Issues
        
            - CUDA not detected:
                
                    - Verify the CUDA installation with nvcc --version in Command Prompt/Terminal
                    - Update to the latest NVIDIA drivers
 
                    - Try reinstalling CUDA 12.4
 
                
             
            - Out of memory errors:
                
                    - Close other GPU-intensive applications
 
                    - Reduce batch size in configuration
 
                    - Use a GPU with more VRAM
 
                
             
        
        AMD Issues
        
            - ROCm not detected:
                
                    - Check if your GPU is supported by ROCm
 
                    - Update AMD drivers to latest version
 
                    - On Linux, verify the ROCm installation with rocminfo
                
             
            - Slow performance:
                
                    - Verify ROCm is actually being used
 
                    - Check for thermal throttling
 
                    - Update to latest ROCm version
 
                
             
        
        Apple Silicon Issues
        
            - Performance lower than expected:
                
                    - Check if other applications are using GPU resources
 
                    - Ensure your Mac isn't in low power mode
 
                    - Verify you're running the latest macOS version
 
                
             
        
        General Troubleshooting
        
            - Restart Tag-AI after installing or updating GPU drivers
 
            - Check system logs for GPU-related errors
 
            - Consider fully reinstalling GPU software if problems persist
 
            - If acceleration fails entirely, you can still run in CPU-only mode, though processing will be much
                slower