Neural Radiance Fields (NeRF) have demonstrated remarkable performance in novel view synthesis (NVS), but their high computational cost limits practical applicability. 3D Gaussian Splatting (3DGS) markedly improves rendering efficiency, enabling real-time rendering through its explicit representation. Nevertheless, its large storage footprint poses challenges for complex scenes and resource-constrained devices. Existing methods pursue storage compression through redundant point pruning, spherical harmonics adjustment, and vector quantization. However, point pruning often compromises geometric details in complex structures, while vector quantization fails to capture relationships among features, leading to texture degradation and blurred geometric boundaries. Anchor-based representations partially alleviate storage concerns, but their sparsity limits compression efficiency. These limitations become particularly evident in scenes with intricate textures and complex lighting. To achieve high compression ratios while preserving fidelity in Gaussian scene representations, this paper proposes Attention-Aware Adaptive Codebook Gaussian Splatting (AAC-GS) for efficient storage compression. The approach dynamically adjusts the codebook size to optimize storage efficiency and incorporates an attention mechanism to capture contextual relationships among features, thereby enhancing reconstruction quality. Additionally, a Generative Adversarial Network (GAN) is employed to mitigate quantization losses, balancing compression rate and visual fidelity. Experimental results demonstrate that AAC-GS achieves an average compression ratio of approximately 40× while maintaining high reconstruction quality, showing its potential for multi-scene applications.
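As a rough illustration of the quantization idea described above, the following is a minimal PyTorch sketch of an attention-aware vector quantizer with a usage-based codebook pruning step. All module names, feature dimensions, loss weights, and the pruning rule are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: attention-aware vector quantization of Gaussian attribute
# vectors with an adaptively sized codebook. Names, sizes, and the pruning
# rule below are assumptions made for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionAwareVQ(nn.Module):
    def __init__(self, feat_dim=56, max_codes=4096, usage_threshold=1e-4):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(max_codes, feat_dim) * 0.1)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.usage_threshold = usage_threshold

    def forward(self, feats):                        # feats: (N, feat_dim)
        # Self-attention over a chunk of Gaussian features to capture
        # contextual relationships before quantization (assumed design).
        ctx, _ = self.attn(feats.unsqueeze(0), feats.unsqueeze(0), feats.unsqueeze(0))
        ctx = ctx.squeeze(0) + feats                 # residual connection

        # Nearest-codebook assignment (standard vector quantization).
        dist = torch.cdist(ctx, self.codebook)       # (N, max_codes)
        idx = dist.argmin(dim=1)
        quant = self.codebook[idx]

        # Straight-through estimator so gradients flow back to the attention.
        quant_st = ctx + (quant - ctx).detach()

        # Conventional VQ codebook + commitment losses.
        loss = F.mse_loss(quant, ctx.detach()) + 0.25 * F.mse_loss(ctx, quant.detach())
        return quant_st, idx, loss

    @torch.no_grad()
    def prune_codebook(self, idx, total_assignments):
        # Adaptive codebook sizing: keep only entries whose usage exceeds a
        # threshold fraction of assignments (illustrative pruning rule).
        counts = torch.bincount(idx, minlength=self.codebook.shape[0]).float()
        keep = counts / total_assignments > self.usage_threshold
        return keep      # caller rebuilds the codebook from the kept rows
```

In this sketch, the stored payload would be the pruned codebook plus per-Gaussian indices, which is where the compression comes from; the GAN-based refinement mentioned in the abstract would act on the rendered images and is not shown here.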