
**New AI Technique Can Reduce Memory Consumption by Up to 75% in Large Language Models**
A recent breakthrough in artificial intelligence (AI) has the potential to change the way we run language models. Researchers have developed a new technique, the Neural Attention Memory Model (NAMM), which can significantly reduce memory consumption in large language models (LLMs).
According to the study, NAMM adapts its behavior to the specific task at hand: it automatically adjusts how much it keeps in memory depending on the nature of the input it is processing.
For example, in coding tasks NAMM discards token sequences that are not relevant to the code’s execution, while in natural language tasks it drops redundant tokens that do not affect the overall meaning of the text.
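To make this concrete, here is a minimal Python sketch of task-aware memory pruning. It is an illustration under loose assumptions, not the authors’ method: the real NAMM learns which tokens to retain from the model’s attention values, whereas this toy version uses hand-written heuristics, and every name in it (prune_memory, prune_code_memory, prune_text_memory) is hypothetical.

```python
# Toy illustration of task-aware memory pruning. The real NAMM learns a
# scoring function over attention values; these hand-written heuristics
# merely mimic the behavior described in the article. All names here are
# hypothetical, not taken from the paper or any library.

def prune_code_memory(tokens: list[str]) -> list[str]:
    """Coding task: drop tokens irrelevant to execution (here, line comments)."""
    kept: list[str] = []
    in_comment = False
    for tok in tokens:
        if tok == "#":                # a line comment begins
            in_comment = True
        elif tok == "\n":             # a newline ends the comment
            in_comment = False
            kept.append(tok)
        elif not in_comment:
            kept.append(tok)
    return kept

def prune_text_memory(tokens: list[str]) -> list[str]:
    """Natural language task: drop immediate repetitions that add no meaning."""
    kept: list[str] = []
    for tok in tokens:
        if not kept or tok != kept[-1]:
            kept.append(tok)
    return kept

def prune_memory(tokens: list[str], task: str) -> list[str]:
    """Adapt the pruning rule to the task at hand."""
    return prune_code_memory(tokens) if task == "code" else prune_text_memory(tokens)

code_tokens = ["x", "=", "1", "#", "set", "x", "\n", "print", "(", "x", ")"]
text_tokens = ["the", "the", "cat", "sat", "sat", "down"]
print(prune_memory(code_tokens, "code"))  # ['x', '=', '1', '\n', 'print', '(', 'x', ')']
print(prune_memory(text_tokens, "text"))  # ['the', 'cat', 'sat', 'down']
```

The dispatch in prune_memory is the point of the example: what counts as a disposable token depends on the task, which is exactly the adaptivity the study describes.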
The implications of this technology are enormous. With NAMM, AI systems can process significantly more data without increasing their memory footprint. This could lead to a major reduction in costs and energy consumption, making AI more accessible and efficient for various applications.
Furthermore, the authors suggest that future research should focus on integrating NAMM into the training of LLMs. By doing so, they believe it would be possible to build even more capable models with much larger effective memory capacities.
The researchers’ ultimate goal is to push the boundaries of what is currently possible in AI development. They emphasize that this technology represents only the beginning of a new era in which AI systems can be optimized for specific tasks and environments.
In their own words, “this work marks the starting point of exploring new possibilities in memory modeling. We believe that these results will open up many new avenues for future research.”
Source: www.bitcoinbazis.hu